Small Infrastructure Server

edited April 2020 in General

I'm in the market for a small infrastructure server. Something in the home-NAS form factor that I can install my own OS on.

Candidates:

Specs:

  • 4x+ real cores (preferably without SMT)
  • 8-16GB RAM (ECC preferred)
  • 6x disks

    • 2x boot
    • 4x storage (preferably in an external hotswap cage, which is what I really want)
  • 2x+ USB 3.1 or up (for backing up the backups)

  • IPMI/BMC/Remote management (this would be a plus)

Uses:

  • Backups (mostly)
  • Tertiary DNS
  • Internal monitoring
  • Admin tools (Ansible and such)
  • Jump server so I only have to log into one thing when remote

This is something I've been looking for for a while, and I've never really been able to find something that checks all of the boxes. Everything ends up being in the mini-tower form factor.

I thought I would see if anyone had any ideas.

Small Infrastructure Server
  1. Best Option: 9 votes
    1. Tacos, Lengua
       22.22%
    2. Potatoe
       0.00%
    3. Potato
       0.00%
    4. The end is nigh
       44.44%
    5. Amitz
       33.33%
    6. Maybe
       0.00%
    7. No
       0.00%
    8. Option #18
       0.00%
    9. Other
       0.00%
    10. Dell
       0.00%

Comments

  • WSSWSS Retired

    Nothing will be quite as simple as just buying a NAS which already has third party firmware available. This is going to waste a lot of time, create a lot of heat, and be a heartache to live with. I'd check to see what people are using with OpenNAS/FreeNAS.

    My pronouns are asshole/asshole/asshole. I will give you the same courtesy.

  • @WSS said:
    Nothing will be quite as simple as just buying a NAS which already has third party firmware available.

I have a QNAP NAS at home. Well, "at home" since I'm WFH right now. :) It's okay. It's not anything I would take to work, for various reasons. "In the end, I'd rather just find something I can install a base OS on." is the last line of my stories involving custom-built appliances/firmware.

    The Kobol is the closest thing to meeting all the requirements, and I have one ordered for personal use. ARM and only supported by Ubuntu right now aren't a good combo for work. If it gets picked up by a few more distros/OSes, it would be a really nice package.

    This is going to waste a lot of time, create a lot of heat, and be a heartache to live with.

Yes, exactly like everything else I get paid to deal with, plus some things I don't.

This is a lot less complicated than you're thinking. Nothing is going to connect to it. It mainly needs to run UrBackup and be external to everything else. Think a base Toyota Corolla instead of a diesel Chevy 2500 HD.

    Thanked by (1)WSS
When I wanted something like this, I just went to a local distributor, bought all the components, connected the wires, and put it on a shelf. The thing wasn't even a "box" because it didn't even have a case, just components lying on a shelf. It was not as cheap as I had hoped, and I also found out that hard drives working 24/7 will overheat... duh. And hard drives can be the noisiest part of the system. And L2ARC (SSD caching) doesn't work the way I thought it works.

You might think I'm sharing my experience as a warning not to follow in my footsteps. Nope. The next time I was in the market for something like this, I found out that prebuilt appliances much more often than not cut even more corners than I did... And it seems a criminal fraction of the cost goes to look and feel.

Sadly, I cannot recommend an alternative that would not demand a lot of time in research. Having already spent quite some time on research, I personally would assemble it myself again, without the mistakes... Because now it works quite well, and I really really really enjoy the flexibility of a general-purpose OS, but it was a journey.

  • edited April 2020

    @comi said:
When I wanted something like this, I just went to a local distributor, bought all the components, connected the wires, and put it on a shelf. The thing wasn't even a "box" because it didn't even have a case, just components lying on a shelf. It was not as cheap as I had hoped, and I also found out that hard drives working 24/7 will overheat... duh. And hard drives can be the noisiest part of the system.

    I inherited a work shelf full of stuff in bench cases, which are really bad for air flow unless the closet happens to double as a wind tunnel. Getting everything into a more appropriate setup is a long term goal.

I made the mistake of putting some refurbished 7200rpm WD RE drives from a Black Friday sale in my QNAP at first, and I quickly replaced them when some 5400rpm WD Reds went on sale. :) It wouldn't have been bad if it wasn't sitting on an open shelf 5' from me.

    And L2ARC (ssd caching) doesn't work the way I thought it works.

    How so?

    You might think I'm sharing my expirience as a warning not to follow in my footsteps. Nope.

I'm not scared of it. :) I've built several servers like this, and when Samba and AD aren't in the picture, it's really easy.

    Sadly I cannot recommend an alternative that would not demand a lot of time in research. Having already spent quite some time on research I personally would assemble it myself again, without the mistakes... Because now it works quite good, and I really really really enjoy the flexibility of general purpose OS, but it was a journey.

The case is the hardest part. :disappointed: There are some small cases with external hot swap, but they're either expensive for what they are or 4U.

    I'm mainly trying to shortcut the problem of dealing with bum parts and incompatibilities. I just want it to show up and be ready for an OS. This isn't a market anyone really serves. :/

Probably because there are like two people who want something like this, and everyone else wants the Internet points of having a server rack at their house. :lol:

  • WSSWSS Retired

    @FlamingSpaceJunk said:
    Probably because there is like two people who want something like this, and everyone else wants the Internet points of having a server rack at their house. :lol:

    As cheap and usable as SSDs are these days, most people don't really give a flying fuck about a half dozen plates of flying rust at high speeds. Get a NetApp filer. :disappointed_relieved:

    My pronouns are asshole/asshole/asshole. I will give you the same courtesy.

  • @WSS said:
    As cheap and usable as SSDs are these days, most people don't really give a flying fuck about a half dozen plates of flying rust at high speeds. Get a NetApp filer. :disappointed_relieved:

    Most people also pay for things like S3 too.

    I only need 4x disks for RAID 10, and 4TB SSDs are still prohibitively expensive. Besides, all the speed would be lost on the 1G network. This really is a low end server.
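Back-of-the-envelope, the 1G link really is the ceiling here. A quick sketch, assuming a ballpark sequential rate of 150 MB/s per modern 3.5" HDD (an assumption, not a measured figure):

```python
# Rough check that a 1GbE link, not the disks, is the bottleneck.
# The 150 MB/s per-disk streaming rate is an assumed ballpark for
# modern 3.5" HDDs, not a measurement of any particular drive.

GBE_LINK = 1_000_000_000 / 8   # 1 Gbit/s expressed in bytes/s, ~125 MB/s
HDD_STREAM = 150_000_000       # assumed sequential throughput per disk

# A 4-disk RAID 10 can read from the two stripes in parallel:
raid10_read = 2 * HDD_STREAM   # ~300 MB/s

print(f"link: {GBE_LINK / 1e6:.0f} MB/s, array: {raid10_read / 1e6:.0f} MB/s")
```

Even spinners saturate the wire with room to spare, so SSDs would buy nothing here.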

    Purestorage. Their stuff is amazing. :astonished: Way out of my league at the moment though when I'm looking to buy the server equivalent of a used Subaru Forester without too many dings, a manual, and the 2.5L NA engine.

  • WSSWSS Retired

You don't want the 2.5NA, unless someone's already replaced the head gasket.

    Pretty much the same thing when it comes to NAS. It's gonna fucking blow up and piss oil everywhere.

    My pronouns are asshole/asshole/asshole. I will give you the same courtesy.

  • @WSS said:
    You don't want the 2.5NA, unless someone's already replaced the headgasket.

    Yep. It costs ~$4K for an engine rebuild when the head gets cracked. The engine made it to 160K after starting at 32K before it finally blew up.

    I thought you'd get the reference. :)

    Pretty much the same thing when it comes to NAS. It's gonna fucking blow up and piss oil everywhere.

Probably. This isn't a forever solution; it just needs to last 2 years.

It's also just a low-power server to run backup software plus some small services to maintain redundancy while the main servers are in maintenance; it's not a NAS device. There are already 2 file servers, which will get replaced with serious hardware when the time comes. This one? Low requirements and replaceable. The previous solution was an RPi with a USB disk and rsync scripts, for reference.

    With a budget of $20K and rack space, I could do some cool stuff. I don't have either of those right now, so this is what it is.

    Thanked by (1)WSS
  • comicomi OG
    edited April 2020

    @FlamingSpaceJunk said:

    And L2ARC (ssd caching) doesn't work the way I thought it works.

    How so?

Well, at the time I had the stupid assumption that it's just like RAM but slower and cheaper; L2ARC must be an extension of ARC, right? Right?
But it simply isn't. Good thing I had enough RAM already, and the SSD I wanted to use for it was a leftover from some other thing.

I don't know what you know, but L2ARC only becomes useful when you have like hundreds of terabytes and there are simply not enough RAM slots to cache it, so you violate the 1GB/1TB guideline and kinda sorta compensate with SSD. Otherwise, the fact that L2ARC eats RAM means the thing is going to be slower than it would be without it.
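The "L2ARC eats RAM" part comes from the in-core headers ZFS keeps for every record on the cache device. A rough sketch of the cost, assuming ~70 bytes per header (the actual per-header size varies by OpenZFS version; older releases were quoted considerably higher):

```python
# Estimate the ARC RAM consumed by headers for a fully populated L2ARC.
# The 70 bytes/record figure is an assumption; the real per-header cost
# depends on the ZFS version in use.

def l2arc_header_ram(l2arc_bytes, recordsize=128 * 1024, header_bytes=70):
    """RAM used by in-core headers tracking every record on the cache SSD."""
    return (l2arc_bytes // recordsize) * header_bytes

# A 400GiB SSD cache full of 128KiB records:
ram = l2arc_header_ram(400 * 1024**3)
print(f"{ram / 1024**2:.0f} MiB of ARC spent just tracking the cache")  # ~219 MiB
```

On a box with 8-16GB of RAM, that overhead displaces hot data from the real ARC, which is exactly the "slower with L2ARC than without" effect described above.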

    @FlamingSpaceJunk said: I made the mistake of putting some refurbished 7200rpm WD RE drives from a Black Friday sale in my QNAP at first, and I quickly replaced them when some 5400rpm WD Reds went on sale. :) It wouldn't have been bad if it wasn't sitting on a open shelf 5' from me.

    Get some real spinning iron, man =D
    https://www.backblaze.com/blog/hard-drive-stats-for-2018/

    @FlamingSpaceJunk said:
    2x boot
    4x storage

    Separate boot drive is better of course, but I personally find that FreeBSD's root-on-zfs works great, so in theory you could skip on that.

  • @comi said:

    @FlamingSpaceJunk said:

    And L2ARC (ssd caching) doesn't work the way I thought it works.

    How so?

Well, at the time I had the stupid assumption that it's just like RAM but slower and cheaper; L2ARC must be an extension of ARC, right? Right?
But it simply isn't. Good thing I had enough RAM already, and the SSD I wanted to use for it was a leftover from some other thing.

I don't know what you know, but L2ARC only becomes useful when you have like hundreds of terabytes and there are simply not enough RAM slots to cache it, so you violate the 1GB/1TB guideline and kinda sorta compensate with SSD. Otherwise, the fact that L2ARC eats RAM means the thing is going to be slower than it would be without it.

    It's been a while since I read up on ZFS, and I've never gotten to run it in production. Hardware RAID plus LVM works well for me or md plus LVM when the drive count is low.

    The FreeBSD stuff I do deploy into production is so small UFS makes more sense. 1-2x proc, 0.5-1G RAM, and 10G disk.

@FlamingSpaceJunk said: I made the mistake of putting some refurbished 7200rpm WD RE drives from a Black Friday sale in my QNAP at first, and I quickly replaced them when some 5400rpm WD Reds went on sale. :) It wouldn't have been bad if it wasn't sitting on an open shelf 5' from me.

    Get some real spinning iron, man =D
    https://www.backblaze.com/blog/hard-drive-stats-for-2018/

It's not an important piece of infrastructure. :smile: I mainly bought it to be a DLNA server, which I thought would be better integrated and less flaky than whatever I rolled, but it's not. It's stupidly flaky and convoluted to configure, so it's not even powered on at the moment.

    @FlamingSpaceJunk said:
    2x boot
    4x storage

    Separate boot drive is better of course, but I personally find that FreeBSD's root-on-zfs works great, so in theory you could skip on that.

    Probably. :smile:

That's really a leftover from when I was working with VMware servers, and the VMware install image didn't have built-in support for certain Areca RAID cards. Spending some money to hang boot drives off the onboard SATA instead of creating a custom install image was worth it.

  • FreeNAS can be installed to a USB drive; most of the OS is loaded into ramdisk, so it doesn't hammer the USB drive.

    If using hardware RAID, make sure you're able to procure an identical replacement card (and flash to same firmware) if/when your RAID card dies. I prefer flashing to IT mode (JBOD) and using software, e.g., zfs, btrfs, snapraid+mergerfs, or Unraid.

    Yes, the standard ZFS advice is to max out RAM first before adding L2ARC, otherwise it may actually hurt performance. 1GB per 1TB is just a guideline; ZFS ARC will happily take as much RAM as you give it (and that's a good thing).

    Your requirements are pretty light; have you considered, e.g., an E3v1/v2 (LGA1155) server board, e.g. SuperMicro X9SCM-F? 2x SATA3 for a mirrored SSD zpool, 4x SATA2 for a zpool of HDD mirrors, CPUs are dirt cheap. 4x8GB DDR3 ECC UDIMM. IPMI in the -F boards. Case can be a cheap mini tower or a SuperMicro 2U, with SQ PSUs if noise is an issue. Prebuilts of that gen are also another option, e.g., Dell T110ii.

    For spinners: WD shucks, e.g., 12TB for $180. They're essentially HGST 5400rpm drives.

I have had good experiences with the HP MicroServers personally, but I think all of those systems are great options. Ultimately, if you want the most flexibility, I'd go with either the HP or Supermicro system.

    Cheap dedis are my drug, and I'm too far gone to turn back.
