May 16

After starting the series of posts My NAS Server 2, I thought I should take a few moments to write about my experiences with my first NAS server. It should help justify the decisions I made with the second NAS that you might otherwise have thought I'd overlooked. I felt it would be sensible to go down the NAS route after I filled up my first 1TB external hard-drive very quickly.

Please bear in mind I wrote this as a collection of thoughts and memories from the last three years – it may not all be coherent.

Type of set-up

Originally, I thought there were two routes I could go down:

  1. Pre-built NAS device.
  2. Build-your-own NAS.

I pretty much instantly ruled out option 1 due to cost. The price of these devices (e.g. Drobo) was extortionate compared to using a low-spec PC. Pre-built devices do have their advantages, e.g. being quiet and stylish, but I am fortunate enough to be able to hide mine away in the loft, so those don’t matter to me. Once I had decided to build a NAS, one of the early decisions was…

Hardware or software RAID

Some of you may be thinking “Hold on, why are you jumping into RAID at all?”. Well, I’m afraid I don’t really have a very good answer to that – it just ‘made sense’ to me. If you really want a reason, how about that it gives you a single volume?

Before I explain my reasoning, let me just get ‘fake-RAID’ out of the way. Some people think they have free hardware RAID built into their motherboard – this isn’t wholly correct. While RAID 0 and 1 (and maybe even 0+1) can be handled by the motherboard hardware, RAID 5 or 6 never is. It actually uses a software layer running in Windows to handle the parity etc., which eats CPU cycles. That is why it is fake-RAID, and I’m ruling it out.

So now that I’ve got two real choices, which is the best? For me, software RAID. Why? Cost. Hardware RAID controllers are very expensive and limited in the number of drives they support. As far as I’m concerned, hardware RAID is for the enterprise, where price is a non-issue and performance is everything. This server was for cheap, mass storage.

Another problem with hardware RAID is that if the controller dies, your array goes with it. It may be possible to replace the controller with one of the exact same model and firmware to get the array back up, but there are no guarantees, and again, it’s very expensive. Plus, by the time the controller breaks, you probably won’t be able to get another one the same, because they’ll have been replaced with newer models by then. Ideally, you’d buy two from the outset and leave one in its box on a shelf – very expensive.

Software RAID allows for any hardware, and even a complete change of hardware while keeping the array intact. This makes it very flexible and a good choice for me. Now on to the variants…

unRAID

unRAID is a commercial OS that you put on your PC to turn it into a NAS. It’s designed to run off a USB stick, which saves a SATA port, but it has its limitations. When I built my first NAS, I believe there was a maximum of 10 (or so) drives allowed in an array. Now it looks like they have a ‘Basic’ version (free) that supports 3 drives, a ‘Plus’ version ($69) with up to 6 drives, and a ‘Pro’ version ($119) that supports 20 drives. The main feature of this OS is that it provides redundancy but allows different-sized drives. Quite nice for some people, but not what I want or need. Add the price to that and I’m not interested.

FreeNAS

This was actually my chosen option to begin with, but I didn’t stick with it for long. Again, this is a pre-configured OS designed to go on a memory stick. It has the advantage of being free and easy to configure. Unfortunately, I had very little success when using it to create a RAID – it was too unstable and hence unusable. On top of this, I don’t think it even supported RAID 6.

Linux-RAID

Configured with mdadm, this is the option I turned to when FreeNAS failed me. You can use it on any flavour of Linux – I went for Ubuntu Server as it was a popular choice at the time and well supported. I decided against the desktop version not only because, well, it’s a server, but also because there was no need for the overhead of a GUI that I would never see. It’s also true that most of the configuration isn’t possible through a GUI, so I’d have been firing up the CLI either way.

Linux-RAID is very advanced, stable and flexible. It’s easy to add drives and it supports RAID 5, 6 and more. That makes it the perfect choice for me. While it’s designed to be run off a normal hard-drive, it can be run off a USB stick. I chose to use a compact-flash-to-IDE adaptor, as this kept the SATA ports free and solid-state memory tends to be more reliable than mechanical hard drives. I just want to point out that even if the OS did become un-bootable, it wouldn’t matter to the array in any way – it can be re-assembled by any other variant of Linux, even a live-CD.
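
For anyone unfamiliar with mdadm, re-assembling an existing array from any Linux install looks roughly like this – a sketch only, run as root, with hypothetical device names (the package name is the Ubuntu/Debian one):

  # Install the Linux-RAID management tool.
  apt-get install mdadm

  # Scan all devices for array metadata and assemble anything found -
  # this works even from a live-CD that has never seen the array before.
  mdadm --assemble --scan

  # Or assemble a specific array from named member disks.
  mdadm --assemble /dev/md0 /dev/sd[b-e]

  # Check the state of the array.
  cat /proc/mdstat
  mdadm --detail /dev/md0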

Windows / Home / Server

No thanks. OK, I’ll elaborate – I don’t trust it, I don’t like it. What, you need a better answer? Fine…

Windows has limited RAID functionality, so that kills it right there. It does have a ‘duplication’ utility that will make a second copy of selected data, but that doubles the space required wherever you want redundancy. It’s quite good in that it will manage your data across multiple drives, but I’d already decided to take a proper RAID approach. You also start running into problems with large filesystems – you need to ensure you set appropriate cluster sizes and so on. I’m not sure you can add capacity on-the-fly, either. Think I’ll give that a miss.

ZFS

This wasn’t really an option back when I constructed my first NAS, but I’ll talk about it here for the sake of completeness.

ZFS is a filesystem designed by Sun that includes the option of redundancy. It’s free to use and runs on OpenSolaris and a few others. Because it only runs on those OSs, hardware support is limited, which is not great when you’re trying to use (unbranded) consumer hardware to keep costs down. But that’s not the only problem with it. ZFS’s redundancy is called ‘RAID-Z’, which to all intents and purposes is the same as RAID 5. There’s also RAID-Z2, which is equivalent to RAID 6. The problem with this is that you can’t expand the capacity once it’s been created. I would probably recommend ZFS if your hardware happens to be compatible with OpenSolaris and you know for sure that you won’t want to change the capacity of your NAS. Something tells me that’s not a common starting point, which is a real shame. Note: expanding the capacity of a RAID-Z pool is a feature expected to be added to ZFS in the future, so it won’t be useless forever.
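
For comparison, here’s a rough sketch of what RAID-Z2 looks like on the ZFS side – the pool name and device names are made up for illustration:

  # Create a pool containing a single 6-disk RAID-Z2 (double parity) device.
  zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5 disk6

  # You can't add disks to that raidz2 device later, but you can grow the
  # pool by adding a whole new raidz2 device alongside it.
  zpool add tank raidz2 disk7 disk8 disk9 disk10 disk11 disk12

  # Check the health and capacity of the pool.
  zpool status tank
  zpool list tank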

Hardware

Since the point of the server was to be cheap, I ordered two basic PCs off eBay for <£100 each rather than speccing out a new PC. On top of that, I got a 128MB USB key (as I intended to use FreeNAS) and a Gigabit LAN card. After little success with that hardware, I ended up buying a used motherboard + CPU (3GHz P4) + RAM from a friend for £90. It had built-in Gb LAN, 5 PCI card slots and 4 on-board SATA ports. Using 5 x 4-port SATA cards, I gained a total of 24 SATA ports.

This wasn’t a particularly bad set-up. It gave reasonable speeds of 40MB/s up & down over the network. Bear in mind that the Ethernet port was on the same bus as the SATA PCI cards (133MB/s theoretical maximum), so every byte effectively crossed that shared bus twice – once coming off the SATA cards and once going out to the network – which makes 40MB/s pretty much as high as you could expect.

As for casing and powering 24 drives – that was a little tricky. Casing turned out to be pretty hard – there aren’t really any hard drive racks with a reasonable price-tag. In the end I decided to build my own. All it consisted of was two sheets of aluminium with the drives screwed between them and a few fans attached. Here’s a picture:

Hard Drive Rack

DIY hard-drive rack.

For powering the drives, I could have tried one humongous supply, but firstly they’re very expensive and secondly, I’m not sure you can even get one that will do 24 drives. Powering on 24 hard drives at the same time can use up to 70A of 12-volt current (each drive can pull somewhere in the region of 2-3A at 12V while spinning up) and a huge amount of sustained 5V. Ideally you’d use staggered spin-up, but this is only a feature of high-end SATA cards and doesn’t work with all drives anyway. I went for powering 4 hard drives off each PSU, which meant I needed 6 in total (plus one for the system). At only ~£5 each, it was still a really cheap option. Check this post for how to power on a PSU without connecting it to a full system.

The full list with prices can be found in this Google Spreadsheet.

Filesystem

Seeing as I was using Linux, the obvious (and my) choice of filesystem was ext3. I didn’t really think too hard about it. Ext3 is a mature and stable filesystem which works very well under Linux. One of the most annoying things about it is that deleting a large number of files takes a (very) long time. It also has a maximum volume size of 16TB, not that that was a problem for this NAS. Another issue I found out about later is that it reserves 5% of the space for system use. When it’s not the boot volume, that’s completely unnecessary, but there is a way to disable it when you create the filesystem (so long as you remember to).
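
From memory, the reservation can be disabled either at creation time or afterwards, along these lines (device name hypothetical):

  # Create the filesystem with no reserved blocks (-m sets the reserved percentage).
  mkfs.ext3 -m 0 /dev/md0

  # Or remove the reservation from an existing ext3 filesystem.
  tune2fs -m 0 /dev/md0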

When I switched from RAID 5 to RAID 6 (see the timeline below), I took the opportunity to change filesystems. I looked at a few recommendations for Linux, including JFS, which is fast and stable, but ultimately went with XFS. XFS has a reputation for being fast with large files, which suits this NAS, as well as being very stable. The maximum volume size is 16 exabytes (on a 64-bit system) and you can grow the filesystem when you need to (although you can’t shrink it – not that I can see why you’d want to). It certainly sped up file deletion and I’ve not had any problems with it. A good choice for me.
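
Creating and later growing the XFS filesystem went something like this (device name and mount point hypothetical) – note that XFS is grown while mounted, via its mount point:

  # Create the filesystem on the array.
  mkfs.xfs /dev/md0

  # After growing the underlying array, expand the filesystem to fill the
  # new space. With no size argument it grows to the maximum available.
  xfs_growfs /mnt/storage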

A new option on the table is ext4. This wasn’t available when I built this NAS, as it has only recently been added to the kernel and marked as stable. I have since used it in another server and it seems fine (it has the same space-reservation issue as ext3), but I’m still not sure that it’s stable enough to trust with my data.

Around the corner is Btrfs. This is what could be described as Linux’s answer to ZFS. While they are both free to use, ZFS has licensing issues that make it impractical to run on Linux (it has to run in user-space). Btrfs is a while away from being suitable for production use, but it’s one to keep your eye on.

Timeline

I set up the array with 8 x 500GB hard drives in RAID 5 – this gave me 3.5TB of usable space. As I wanted to switch to RAID 6 (because of the number of drives that would eventually share a single array), the first time I needed to expand capacity I had to add another 8 drives in one go: I created the RAID 6, copied the data across from the RAID 5, dismantled the RAID 5 and added its drives to the new array.
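
Roughly speaking (device names made up for illustration), the two arrays were created along these lines:

  # The original 8-drive RAID 5 - one drive's worth of space goes to parity.
  mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[b-i]

  # The replacement 8-drive RAID 6 on the new set of drives - two drives'
  # worth of space goes to parity.
  mdadm --create /dev/md1 --level=6 --raid-devices=8 /dev/sd[j-q]

Once the data had been copied across and the old array stopped, its drives could be added to the new one and the array grown.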

Expanding the array wasn’t as easy as I had first imagined. Growing a RAID 6 array was quite a new feature at the time, and was not available in any kernel pre-compiled and packaged for Ubuntu Server. This meant compiling my own kernel, which was an interesting endeavour. It worked out flawlessly in the end, but that’s getting a little off-topic. Anyone trying to grow a RAID 6 now shouldn’t have that problem, as it’s been a standard feature of Linux kernels for a couple of years. On top of that, it’s also now possible to convert a RAID 5 to a RAID 6. This is a recently added feature, so it may be a little risky to try on important data. It also uses a non-standard RAID 6 layout, with the second set of parity all sitting on one (the new) disk.
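
For reference, growing an array and converting RAID 5 to RAID 6 with a modern mdadm look something like this – device names and drive counts are hypothetical:

  # Growing an existing RAID 6: add the new drives as spares, then reshape
  # a (here 16-drive) array to make use of all 20.
  mdadm --add /dev/md1 /dev/sdr /dev/sds /dev/sdt /dev/sdu
  mdadm --grow /dev/md1 --raid-devices=20

  # Converting an 8-drive RAID 5 to a 9-drive RAID 6: add one extra drive to
  # hold the second parity and change the level. The backup file protects
  # the critical section of the reshape if the machine dies part-way through.
  mdadm --add /dev/md0 /dev/sdv
  mdadm --grow /dev/md0 --level=6 --raid-devices=9 --backup-file=/root/md0-grow.bak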

Once that capacity was used up, I added two sets of 4 disks and grew the array each time. This took me to a total of 24 x 500GB disks in RAID 6 – 11TB of usable space. All was well – here’s a picture of what it looked like:

NAS Server 1

My first NAS in its final state.

This is where I started to run into problems. Disks started to fail, which in its own right isn’t a problem – even two can fail, so long as you can get them replaced (and the array re-built) before any others go. But eventually I was in a situation where the array was degraded and could not assemble itself. I did manage to force the array back online after dd-ing one of the failing disks onto a new drive, but only for long enough to get as much data off as I could fit on my new NAS.

The drive I used as a temporary fix was actually borrowed from the new NAS, and when I swapped it back out for a 500GB drive, the rebuild failed. Instead of putting the borrowed drive back in, I decided to force the array back up by creating a new array over the components of the old one. I’ve probably lost some people here – you’ll need some extensive experience with Linux-RAID to know what I’m on about (I’ve sketched the sort of commands involved below). Being a tricky and risky procedure, it didn’t work. The data was gone and I learned from my mistakes.

The biggest problem was that I had no head-room for connecting spare disks – all 24 SATA ports were used up by the single array. If I had put more time, effort and money in, I could have saved it; the hardware didn’t fail beyond all repair, I just didn’t choose the safest way to recover. 24 drives in one array is risky and will most likely give you headaches, but it is certainly achievable.
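
For the curious, the recovery attempts were along these lines – all of it potentially destructive, and the device names are made up for illustration:

  # Copy a failing disk onto a fresh one, skipping unreadable sectors so the
  # copy carries on past bad blocks.
  dd if=/dev/sdf of=/dev/sdx bs=64k conv=noerror,sync

  # Try to force the degraded array back together from the surviving members.
  mdadm --assemble --force /dev/md0 /dev/sd[b-y]

  # Last resort: re-create the array over the existing members without
  # initialising the data. Only safe if the drive order, chunk size and
  # metadata version exactly match the original array.
  mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=24 /dev/sd[b-y]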

That marks the end of my first NAS, which brings us to My NAS Server 2.

7 Responses to “My NAS Server 1”

  1. jason says:

    Thanks for the write-up.
    I think I’ve had the same experience as you concerning getting large server cases with space for many HDDs into the UK: the prices are extortionate!

    Anyhow, just a few things:

    1) My understanding of UNRAID is that there is no redundancy. It goes against the philosophy of the solution really. It provides some level of protection by having a parity drive, but that’s not the same as having redundant data or a redundant disk. It allows data recovery in the event of a failure by calculating the values of the lost data.

    2) ZFS has supported growing the size of pools (not raidz devices) since inception AFAIK. You can’t grow the size of a raidz device, but normally you wouldn’t need to: that device would be part of a pool, and if you need more space you just add more disks (or raidz devices, or mirrors etc.) to that pool.

    I’ll soon be going this way myself. I’ve been doing a lot of research lately and will probably go with ZFS running on FreeNAS or NexentaStor, which will allow me to have hot spares.

    Thanks for the post!

  2. Iain says:

    I think unRAID is good in that respect as it’s more flexible, and as far as I can tell produces the same results as RAID 5.

    You’re right about the capability of ZFS, but I disagree about not needing to grow a RAIDZ.

    If, for example, you create a RAIDZ device with 4 disks but would have been happy with the redundancy RAIDZ provides when using 6, you don’t have the option to add the extra disks later, when you actually need them and they’re cheaper.

  3. ryan says:

    Can anyone tell me how he has got 24(!) drives on that? Is it a modified PC or what?

  4. Iain says:

    “…4 on-board SATA ports. Using 5 x 4-port SATA cards, I gained a total of 24 SATA ports.”

    All the ports have a small extension to make the socket external.

  5. gecko says:

    I know that this is a 6-year-old post at this point, but for anyone else reading: your RAID was too big; you should have made smaller pools with consumer-level drives. The biggest pool of drives you want is 8 data drives plus redundancy. E.g. a 10-drive RAID 6 or RAID-Z2 will have 8 data drives and 2 redundancy drives. Yes, you can go bigger, but if one drive fails your rebuild time will be too long and you risk losing more data.

    Also, your rack is a nice idea, but I saw nothing to help dampen drive vibrations. If you put down a 3-5mm sheet of rubber padding it should absorb some of the vibrations.

    unRAID is a snapshot-type RAID; there is a live version of it now, but it does not protect from bit rot like ZFS does. mdadm is a good software RAID, but it also suffers when it comes to bad-block protection and time-outs. Basically, ZFS is king here.

  6. Iain says:

    I agree that that’s about the maximum number of drives you want in an array – it becomes very unstable after that. Now that it’s mature enough, I’d definitely go with ZFS too.

    Regarding dampening, the only thing my NAS Server 1 had was rubber washers on the screws that attached the drives to the aluminium sheets.

  7. Jinxy says:

    It costs you 50% of your capacity, but it’s incredibly much safer – one big RAID 10: much less wear, and it gets faster as it grows without gaining instability, since it’s just a stripe of individual mirrors. Parity-based RAID makes less sense with every day that goes by. A single consumer drive is usually rated for 10^14 (1E14) bits read – roughly 12TB – before encountering an unrecoverable read error, and every time you need to rebuild a degraded parity-based (5 or 6) RAID, you need to read every single drive.

    My only question about this post is what those SATA cables attach to on your case.
