My NAS Server 2 (Part III) – Hardware

As promised, here’s my initial hardware for this NAS. Later posts will discuss how I went further and what hardware needed to change to accommodate it.

Motherboard

A tough choice I had to make right at the beginning of the build list was the motherboard. I really wanted something with at least 8 SATA ports, two PCIe x4 slots and integrated graphics. The two PCIe slots with 8-port SATA cards, plus the 8 on-board ports, would have given me 24 ports total, enough for two 12-disk arrays. Not bad.
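
Spelling the sums out (the 8-port cards were hypothetical at this point – I hadn’t picked any):

```bash
# The port budget I was aiming for: on-board ports plus two add-in cards
onboard=8; slots=2; ports_per_card=8
echo "$(( onboard + slots * ports_per_card )) ports"   # => 24, i.e. two 12-disk arrays
```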

It turns out that 8 SATA ports is very rare, and the sort of motherboard that has lots of PCIe slots doesn’t have integrated graphics. It wasn’t easy to make a short-list since there’s no site (that I could find) that compares all these features. Even manufacturers’ websites don’t list the features side-by-side. I ended up just ploughing through the sites and noting down which ones were possibilities. Of course, cost played a big factor in my choice.

This is the motherboard I ended up with: Gigabyte GA-MA78G-DS3H. Bear in mind this was 18 months ago so it is probably a lot easier to find an appropriate choice now.

Features

  • Integrated graphics with VGA. VGA is important to me (as is PS/2) because it allows me to run it through a KVM switch with the rest of my servers. This motherboard also happens to have HDMI – talk about overkill for a headless server running a CLI.
  • Gigabit LAN. I expected all current models to have Gigabit LAN so didn’t worry about this when searching. Gigabit LAN is a must for a NAS (there’s a quick sum after this list), but dual ports don’t help. It also happens to be on a PCI Express bus, so won’t clog up the internal bandwidth available like My NAS Server 1 did.
  • Three PCIe x1 slots, two x16-length slots (electrically, one x16 and one x4) and two PCI slots. I only intended to use the x16 slots, and while the one that is actually x16 electrically is designed for a discrete graphics card, it seems to work fine with a SATA controller. That’s something you should verify before buying, as some motherboards have booting problems if there’s a non-graphics card in that slot.
  • Six SATA ports. Unfortunately I had to trade off the SATA ports for cost. There were a few boards that had 8 SATA ports but no decent PCIe slots, and some that had both of those but no integrated graphics. I probably could have got everything I wanted for ~4x what I paid (~£70).
  • One IDE connector. I needed this for the boot disk using an IDE to Compact Flash adaptor.
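
On the ‘Gigabit LAN is a must’ point above, the quick sum: gigabit moves 1000 megabits a second, which is about 125 megabytes a second before protocol overhead – roughly one drive’s worth of sequential throughput, so anything slower strangles the whole box:

```bash
# Theoretical gigabit ceiling, before Ethernet/TCP/SMB overhead eats into it
echo "$(( 1000 / 8 )) MB/s"   # => 125; expect ~110 MB/s from SMB/NFS in practice
```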

Processor

I guess the reason I chose AMD over Intel was the cost. There were a couple of options for both, and I’m happy either way, so really, it just came down to the price. AMD systems (in my experience) tend to be cheaper. There’s no need for a really fast processor in a dedicated NAS, as there will be other bottlenecks (the LAN connection). Quad-core doesn’t help as Linux-RAID isn’t multi-threaded, but dual-core has its uses because the second core can deal with network protocols and the other overheads that go with file serving.

That made it an easy choice – dual-core and the best value in terms of megahertz. I wasn’t bothered by power consumption so ended up with a 2.6GHz AMD Athlon 64 X2 5000+. 64-bit was essential due to the XFS volume size limit mentioned in My NAS Server 1, but pretty much all CPUs are 64-bit now, anyway.
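
For anyone wondering where that limit comes from: I’m assuming it’s the usual 32-bit Linux ceiling, where the kernel’s page cache addresses a device with a 32-bit page index, capping any single block device (and so any XFS volume) at 16TiB. A 64-bit kernel does away with it:

```bash
# 32-bit page index x 4 KiB pages = the biggest device a 32-bit kernel can address
echo "$(( 2**32 * 4096 / 2**40 )) TiB"   # => 16
```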

Memory

RAM is pretty cheap, so I went for 4GB of bog-standard stuff. You don’t need much RAM to run a NAS (Linux), but having a bunch spare allows me to put some temporary filesystems in RAM, which reduces wear on my CF card.
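
As a minimal sketch of what I mean (mount points and sizes are illustrative – and note that anything on tmpfs is lost at reboot, so don’t put logs there unless you can live without them):

```bash
# Mount write-heavy paths as RAM-backed tmpfs to spare the CF card;
# add matching lines to /etc/fstab to make them permanent
mount -t tmpfs -o defaults,noatime,size=512m tmpfs /tmp
mount -t tmpfs -o defaults,noatime,size=128m tmpfs /var/log
```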

Power Supply

A bit of a tricky choice for me. Do I cheap out and go for more than one again, or get something a bit more expensive? Well, given that a bunch of separate PSUs didn’t work out too well for me before, space was limited and I wanted something rock-solid, I splashed out and got something substantial. At over £130, the OCZ 1000W EliteXstream was exactly that. I needed something with huge amounts of 12V current in order to power up loads of drives (around 22) without staggered spin-up.
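
To put rough numbers on that (assuming around 2A of 12V draw per drive during spin-up, a typical datasheet figure for 3.5″ drives of that era):

```bash
# Worst case: all drives spinning up at once, all of it on the 12V rail
drives=22; spinup_amps_each=2
echo "$(( drives * spinup_amps_each ))A peak"   # ~44A before the motherboard and fans
```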

Many PSUs have a high amount of 12V current available, but also come with a special ‘feature’, namely split-rails. In reality this is a huge con and doesn’t provide any stability as implied. All (well, the vast majority) of PSUs with split-rails aren’t split at all. They only have one transformer for 12V, and merely put a current-limiter on each output. So if you have a PSU that claims to have four 20A rails, it’s really just one 80A rail (although sometimes it’s even less, and they don’t let you use all four to their maximum rating concurrently – another con) with four current-limited outputs.

This was really annoying for me, as I wanted all the current to be available to the hard-drives. When it is split, you find most of the current goes to PCIe connectors intended for power-hungry graphics cards. That’s not what I wanted and would have required some quite dodgy creative wiring. I was left with very little choice, but the 80A single-rail OCZ had good reviews and I’ve been very happy with it.

Hard Drives

I started with 4 x 1.5TB drives. I believe that was the largest capacity available at the time, and when you factor in the cost-per-port on the host, it makes sense to go for the largest size available, even if the cost-per-gigabyte isn’t quite as good as lower capacity drives. You must also remember that by the time you expand the array to its full size, the cost of each drive will be significantly lower, and most likely excellent in terms of cost-per-gigabyte. In a way, I suppose paying today’s premium for the first few drives may be a disadvantage of this sort of grow-as-you-go RAID set-up.
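
For the curious, growing a Linux-RAID array by one drive looks roughly like this (device names and mount point are illustrative, and I’m assuming an XFS filesystem sitting directly on /dev/md0):

```bash
# Add the new disk and reshape the array to include it
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --raid-devices=5

# Once the reshape finishes, grow the filesystem into the new space
xfs_growfs /mnt/array
```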

Having only 4 HDDs to start with, I didn’t need to worry about getting any SATA controllers for the moment. Quite glad I could put that off for a while, as it was my intention to get 8-port cards but they’re a bit pricey.

If I’ve forgotten anything, or you want some clarification on something, leave a comment and I’ll do my best to answer it.

2 Responses to My NAS Server 2 (Part III) – Hardware

  1. Chris says:

    Your public are interested in your work. Hurry up and write the next chapter.

  2. jason says:

    > It wasn’t easy to make a short-list since there’s no site (that I could find) that compares all these features.

    I feel your pain! 🙂

    I’ve recently come across skinflint.co.uk (no, I’ve no affiliation to them!) but it rocks as far as comparing hardware and features.

    Drive speeds, cost per TB, GB etc, number of slots, ports, etc on motherboards… It’s like a breath of fresh air.

    Cheers
