Mason
Comments
-
Lol absolutely right. The quanta was bearable, but the SM was a damn wind tunnel haha. Primary motivator to move everything out of the house :)
-
Heh, well my shed is disconnected from the house a good 25-30 feet from any structure (other than a fence), so burn baby burn I guess. Used to have the same setup in my basement at an old house and never really worried. Only component I'd be worried about is the AC to DC converter, but I currently have that server (the…
-
Here's my home rack (of sorts) out in the shed - just a few servers (a Quanta 1U dual-node box and a 2U Supermicro storage box) and a handful of Pi's connected via WiFi bridge (it's surprisingly stable).
-
Sounds like the wait timer should be doubled to 20s to give the scores time to post on the geekbench website. Thanks for all the reports, guys! :)
-
Can confirm. Something we do pretty frequently when there's lots of replies on a certain subject. We'll set the OP to whomever initiated the discussion (Discourse makes that quite easy). If it's something we feel should be public to make others aware, then we'll first reach out to make sure the OP is cool with the…
-
A couple possible explanations:

* Your machine had issues reaching the geekbench website to retrieve the scores. This is less likely, since your machine was able to download the geekbench files and upload the result.
* The geekbench website failed to post your results to the webpage by the time the sleep timer ended…
-
Shouldn't be affected by caching. Random read/writes and direct I/O flag on. It uses fio. Check it out, it's pretty neat stuff
-
The read and write tests are being done concurrently, not independently. Both are added together to get the full IOPS load the disk was under during the test. Also, this test isn't really designed to show the absolute maximum performance, but rather performance under somewhat real-world conditions.
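For reference, a mixed concurrent read/write job with the page cache bypassed looks roughly like this in an fio job file. Not claiming these are the script's exact parameters - just a sketch of the general shape:

```ini
[mixed-randrw]
ioengine=libaio
direct=1         # bypass the page cache, so caching doesn't inflate results
rw=randrw        # random reads and writes issued concurrently in one job
rwmixread=50     # 50/50 read/write split
bs=4k
iodepth=64
size=2g
runtime=30
time_based=1
```

fio then reports read and write IOPS separately, which is why you add the two together to get the total load the disk was under.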
-
Looks like zfs' built-in readlimit & writelimit options are what you're looking for. I don't use zfs myself, but here are the commands:

zfs set writelimit=10mb rpool/my/pool
zfs set readlimit=10mb rpool/my/pool

Just set it to something low so that both of the limits combined won't eat up all your bandwidth, i.e.…
-
Sounds like a good idea to me! Would make some interesting graphs if you plot the read, write, and cumulative speed results independent of each other on the same graph.
-
Could be the case! It probably all boils down to what is actually being represented in the test. Are we copying a large video file or making many database updates to many distinct tables, rows? I'd imagine the fio parameters would vary widely in trying to represent those two cases. (just some food for thought :)) Edit:…
-
That's a negative for me, sir. Have more compute power than I have anything to do with at the moment. Thanks for the offer!
-
If given the option, I'd opt for 512k. If you look through the YABS results thread, nearly every single test does better (in bandwidth terms) with bs=512k over bs=4k. The low block sizes are really just to stress test how many IOPS you can get out of the machine. From the light reading I did when researching this problem…
-
Good write up, Ant! Just a quick thought I had -- for your extended fio tests, you're limiting the block size to just 64k, when in reality the block size will likely be higher as the system tries to use what is most optimal. The yabs test shows the disk performs very well for the higher block sizes (512k or 1m), which is…