Dec 30

Over the holidays, I rebuilt my CCTV server. Rather than trying to reuse the installation from the previous disk, I thought it’d be easier just to install everything fresh. Hence, I followed the instructions on the ZoneMinder Wiki.

Following on from this rebuild, I was tidying up my custom viewers, which had hard-coded monitor IDs. Of course, the correct way to do this would be via an API. Is there an API in ZoneMinder? Well, according to the "latest" docs, it should have been included since 1.27.

[code lang="bash"]curl http://wiggum/zm/api/monitors.json
…404 Not Found…[/code]

So I checked if it was actually there or not:

[code lang="bash"]iain@wiggum:~$ ls /usr/share/zoneminder/
ajax  cgi-bin  css  db  events  graphics  images  includes  index.php  js  lang  skins  sounds  temp  tools  views[/code]


I'm not sure why it's not there (you can see it on GitHub), but here's how I solved it until the package is updated properly. (I'm omitting the use of sudo where needed.)

[code lang="bash"]cd /usr/src
git clone --branch release-1.28 https://github.com/ZoneMinder/ZoneMinder.git zoneminder-1.28
mkdir zoneminder-1.28/web/api/app/tmp
chown www-data:www-data zoneminder-1.28/web/api/app/tmp[/code]

[code lang="bash"]vi zoneminder-1.28/web/api/.htaccess[/code]
Add: RewriteBase /zm/api

[code lang="bash"]vi zoneminder-1.28/web/api/app/webroot/.htaccess[/code]
Add: RewriteBase /zm/api
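The two vi edits can equally be scripted. Here's a sketch on a throwaway copy of the directory layout; on the real box, `API_DIR` would be /usr/src/zoneminder-1.28/web/api:

```shell
# Demo on a temporary copy of the layout; point API_DIR at the real
# tree (/usr/src/zoneminder-1.28/web/api) to apply it for real.
API_DIR=$(mktemp -d)
mkdir -p "$API_DIR/app/webroot"
touch "$API_DIR/.htaccess" "$API_DIR/app/webroot/.htaccess"

# Append RewriteBase to both .htaccess files, idempotently.
for f in "$API_DIR/.htaccess" "$API_DIR/app/webroot/.htaccess"; do
  grep -qx 'RewriteBase /zm/api' "$f" || echo 'RewriteBase /zm/api' >> "$f"
done

cat "$API_DIR/.htaccess"
```

The `grep -qx` guard means you can re-run it safely without duplicating the directive.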

[code lang="bash"]cd zoneminder-1.28/web/api/app/Config/
cp core.php.default core.php
cp database.php.default database.php
vi database.php[/code]
Change the database settings (host, login, database, password) to match: /etc/zm/zm.conf
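To see which values to copy across, the relevant variables can be pulled straight out of zm.conf. A sketch (the ZM_DB_* variable names match my 1.28 install; a placeholder file stands in for /etc/zm/zm.conf below, with illustrative values):

```shell
# Show the DB settings that database.php must mirror. A placeholder
# file stands in for /etc/zm/zm.conf; all values are illustrative.
ZM_CONF=$(mktemp)
cat > "$ZM_CONF" <<'EOF'
ZM_DB_HOST=localhost
ZM_DB_NAME=zm
ZM_DB_USER=zmuser
ZM_DB_PASS=zmpass
EOF

grep '^ZM_DB_' "$ZM_CONF"
```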

[code lang="bash"]vi /etc/apache2/conf-available/zoneminder-api.conf[/code]
Add the following:
[code]Alias /zm/api /usr/src/zoneminder-1.28/web/api
<Directory /usr/src/zoneminder-1.28/web/api>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>[/code]

Enable the conf, restart apache and you’re done:
[code lang="bash"]a2enconf zoneminder-api
apachectl graceful[/code]

Don’t forget to enable mod_rewrite if it isn’t already:
[code lang="bash"]a2enmod rewrite[/code]

Jan 23

We already know we’re using one of those eBuyer value cases as a starting point. The first thing to do was strip out all the internals – we just needed the bare case. It was pretty simple to dismantle as almost all of it unscrewed.

Next, the reinforcement I mentioned in the previous post. These were 3mm x 25mm steel strips, cut and epoxied to the bottom. Strategically placed, they raise the PMPs off the bottom slightly. Combined with 20mm standoffs, there is enough space for the connectors underneath. Those are just standard PC-modder parts, wired together as specified in the diagram from the last post.

The drives are not screwed in; they just rest there, connectors down. To prevent them from falling over, there is a grid at the top, made by combining 1mm x 3mm steel strips with some dowels. Strangely, dowel of this size was difficult to find at a reasonable price, so we ended up using 2mm diameter polyester fibreglass rods. The only vibration damping is the rubber washers securing the PMPs.

Holes in the front for the fans, and the case is complete (almost). Unnecessary, but for the cool factor, we wired up LEDs for each drive. The PMPs come with pin-outs for the PMP status as well as each drive. It was quite a lot of effort to connect so many LEDs (the right way round), but they make a good indicator when switching on the case.

Feb 05

Recently I upgraded one of my servers from Ubuntu Server 11.10 to 12.10 (via 12.04). Unfortunately, this broke AFP.

When connecting, I got the error "Something wrong with the volume's CNID DB".

I'm pretty sure I've had this error before, but the standard fix of deleting .AppleDB didn't work.

After reading a few more up-to-date tutorials and verifying my configurations, I finally sussed it.

[code lang="bash"]root@burns:/home/iain# dbd
dbd: error while loading shared libraries: libdb-4.8.so: cannot open shared object file: No such file or directory[/code]
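ldd is a quick way to confirm exactly which shared libraries a binary can't resolve. Shown here against canned ldd-style output so it stands alone; on the broken box you'd pipe `ldd "$(which dbd)"` into the same filter:

```shell
# Filter ldd output down to the unresolvable libraries.
# Canned output keeps this self-contained; swap in: ldd "$(which dbd)"
ldd_output='linux-vdso.so.1 => (0x00007fff00000000)
  libdb-4.8.so => not found
  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f0000000000)'

missing=$(printf '%s\n' "$ldd_output" | awk '/not found/ { print $1 }')
echo "$missing"
```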

So I checked to see if I had this anywhere.

[code lang="bash"]root@burns:/home/iain# locate libdb-4.8[/code]

No results.

[code lang="bash"]root@burns:/home/iain# locate libdb
...
/var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb
...[/code]

That was handy, so I installed it.

[code lang="bash"]root@burns:/home/iain# dpkg -i /var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb
Selecting previously unselected package libdb4.8:amd64.
(Reading database … 196079 files and directories currently installed.)
Unpacking libdb4.8:amd64 (from …/libdb4.8_4.8.30-11ubuntu1_amd64.deb) …
Setting up libdb4.8:amd64 (4.8.30-11ubuntu1) …
Processing triggers for libc-bin …
ldconfig deferred processing now taking place[/code]

And dbd started working.

[code lang="bash"]root@burns:/home/iain# dbd
Usage: dbd [-e|-t|-v|-x] -d [-i] | -s [-c|-n]| -r [-c|-f] | -u
...[/code]

After deleting .AppleDB for good measure and restarting Netatalk, all was well.

I have no idea why this was missing, or whether it is the correct fix, but it seems to work without side effects. If you don't have the .deb cached, I guess this would also work:

[code lang="bash"]apt-get install libdb4.8[/code]

May 29

First and foremost, I want to thank my friend Mat for helping with the design. I couldn’t have done it without you!

As you will recall from part IV, I had a second 4U case used only to house drives. This was the starting point for my NAS server 2, case #2. I would have one case for the host system (and some drives) and a second case just for drives.

If you look at the Backblaze design, you’ll see it has space for a motherboard, 2 PSUs and 46 drives (1 is for the host OS). That’s an awful lot to squeeze into a single case! It works for them as they are racked up in a data centre with deep racks. Something would have to give in my 545mm deep ‘value’ case.

Forty-five drives is a nice number, as it gives you 3 x 15-drive arrays. I think that's the most it's sensible to have in a RAID 6 configuration. With up to 11 drives fitting in the first case, I needed to cram at least 34 into case #2.

Having verified SATA port multipliers work perfectly well, I was happy to take the same approach as Backblaze with their backplanes. What I decided to think about a bit more carefully than they did was the host SATA ports. They put 20 drives on the PCI bus, but I knew from experience this would degrade performance.

Since I already had a 4-port PCIe card, it made sense to use that for 4 port-multipliers. This would give me 20 ports on a PCIe x4 bus; the drives won't all operate at maximum speed at the same time, but there's still more than enough throughput to saturate a gigabit LAN. This puts the SATA port count at 26 so far. Looking for 45 in total, I would need another 19. The only practical way to do this, based on the number of PCIe slots I had, was another 4-port card, with each port hosting a 5-port multiplier/backplane. So instead of cramming one standalone port-multiplier and associated drives in the main case, we decided to put them all in case #2.
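The port arithmetic works out like this (all numbers from the paragraphs above):

```shell
# SATA port tally for the two-case design described above.
onboard=6                       # motherboard ports (case #1 drives)
first_card=$((4 * 5))           # existing 4-port PCIe card, one 5-port PMP per port
subtotal=$((onboard + first_card))
second_card=$((4 * 5))          # the extra 4-port card, again with 5-port PMPs
total=$((subtotal + second_card))
echo "$subtotal $total"         # 26 so far; 46 covers the 45-drive target
```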

Spanning power between cases didn't seem like the best idea, and with 40 drives now needing to be catered for, 2 PSUs had to fit in the second case. Here's a bird's-eye and front-on view of the layout. Thanks again, Mat, for coming up with it.

The black bars going left-to-right are for reinforcement – 40 hard drives weigh a lot! The other thing to note is the orientation of the drives. Keeping them that way allows efficient air-flow, from front to back.

The original intention for my 1000W PSU was to power the whole system and 22 drives. Since I’m now trying to support up to 45 drives and keep the power in each case separate, the power arrangements needed to be rethought.

I allocated the OCZ PSU to case #2 and re-purposed an old PSU I had for the main system. I calculated that the amperage available for each voltage was sufficient to supply 5 of the 8 port-multipliers. Maybe it would have been 'neater' to split it four and four, but this way, when upgrading, I wouldn't need such a beefy supply.

Each backplane requires two molex feeds. Here’s a wiring diagram (credit once again to Mat).

Now we know pretty much what we’re building, in the next post I’ll talk about the parts and construction.

Jun 25

Most likely I found it on Engadget, but it did crop up in quite a few other places. I am of course talking about the Backblaze blog post Petabytes on a budget: How to build cheap cloud storage.

When I found this, I thought it was fantastic – 45 drives and a host in a single case. If anything was ever going to be perfect for me, this was it!

In the first post, Backblaze were kind enough to detail all the parts used and make available a 3D model of the enclosure. This was great for the community, and in a follow-up post they pointed readers at where they could order the enclosure directly: Protocase.

As their website directed, I emailed them straight off to get a quote. $872 – wow, way more than I was expecting. On top of this, it would have to be shipped from Canada, adding to the price substantially. Dismayed with that outcome, I thought it was the end of the matter, but Protocase emailed me a few days later asking for feedback.

As you do (or at least as I do), I replied with a 250-word rant about how expensive it was compared to products such as the Norco RPC-4020, which was a mere $280. It must have been somewhat interesting, as I got an even longer reply direct from the Chairman addressing each of my points.

Needless to say, I didn’t go forward and purchase one of these cases, instead using the concept to design my own.

In my next post I will talk about the design along with how and why it differs from the Backblaze storage pod.

May 28

Wow, sorry for neglecting this series for so long – I got distracted by being employed! Hopefully, it's not so long that I've forgotten my train of thought.

So at the end of my last post, I had four 1.5TB hard drives in RAID 6. That would have been around December 2008, but come January 2009, I was out of space and needed to add some more. This was easy; I had 11 bays in the case and six on-board SATA ports, so I just added two drives and connected them straight up. This gave me another 3TB of usable space without any hassle.
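For anyone keeping score, RAID 6 usable capacity is (n - 2) drives' worth, which is where the 3TB figures come from:

```shell
# RAID 6 usable capacity: (drives - 2) * drive size. Sizes in whole GB
# (1.5TB = 1500) to keep the shell's integer arithmetic exact.
raid6_usable_gb() { echo $(( ($1 - 2) * 1500 )); }

raid6_usable_gb 4   # the original four-drive array: 3000 (3TB)
raid6_usable_gb 6   # after adding two drives: 6000 (another 3TB)
```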

In June 2009 I needed to upgrade again, but this time, things were a little trickier. I had no on-board ports left so had to decide how to expand. The original intention was to use 8-port PCIe cards; with space for two of these, I’d have ended up with a 22 drive maximum.

Now, I’m not sure exactly what my thought process was at the time (don’t forget this was two years ago) but I probably decided 8-port cards were either too expensive, or just wouldn’t get me enough ports in total. I ended up getting a 4-port card and four hard-drives to go with it. Great, another 6TB in the array and I was happy until October.

So what did I do next? I'd used 10 of my 11 bays and had no more SATA ports left. [Probably] being desperate for space, I just ordered another of the 4U cases that hold 11 drives. Seeing as my 4-port PCIe card supported them, the cheapest way to get extra SATA ports was to use a SATA port-multiplier. I gave it a go, and £40 got me 5 ports, though obviously I had to sacrifice one from the PCIe card.

The PMP was very successful, although I did have to disable NCQ to get it stable. This isn’t necessary anymore, so I won’t go into any further detail. Just to keep track, at the end of October 2009 I had 12 x 1.5TB drives in my RAID 6 array.

In the next installment, I'll explain where the inspiration came from.

Feb 14

Having filled up my first RAID volume, I had to add a new mount and symlink the extra directories into the network share point.

This worked absolutely fine for AFP, but when accessing the share in Windows via SMB, the symlinked folders were inaccessible. Fortunately, there is a simple solution.

Just add this to the [global] section of your /etc/samba/smb.conf.

[code lang="bash"]follow symlinks = yes
wide links = yes
unix extensions = no[/code]

There are apparently security reasons why this is disabled by default, but nothing I (think I) need to worry about.
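For context, the situation Samba objects to is simply a link whose target resolves outside the share tree. A throwaway reproduction (paths here are examples, not my real layout):

```shell
# Reproduce the layout: a share directory containing a symlink whose
# target lives outside it. Throwaway paths for illustration only.
base=$(mktemp -d)
mkdir -p "$base/share" "$base/raid2/media"
ln -s "$base/raid2/media" "$base/share/media"

readlink "$base/share/media"   # resolves outside $base/share: a "wide" link
```

With the default `wide links = no`, Samba would refuse to follow that link for clients, which is exactly the inaccessible-folder symptom above.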

Nov 13

Having run completely out of space, I was all ready to start my next array with 2TB hard drives. Surprisingly, the same day, I read on Engadget that Western Digital had started shipping 3TB drives.

With the cost overhead of housing each drive, and the limit on the number of drives I can accommodate, using the highest capacity drive available for an array is the best option.

Now, I haven't yet explained how My NAS Server 2 evolved, but to give you a sneak peek: it uses SATA port-multipliers. This brings me to the point of this post…

The Engadget post regarding these drives seemed to indicate there could be some issues seeing all the space without special drivers. After doing a bit of research, I think there is only a problem when you are trying to boot from the drive. As my NAS boots off a CF card, it's a non-issue for me. More importantly, I read a conversation on the linux-raid mailing list which seems to indicate there shouldn't be a problem with Linux or the port-multipliers supporting these drives right out of the box.

This is good news as I will be able to use them to build my next array. Unfortunately, they are vastly more expensive than 2TB drives, and on top of that, they don't seem to be available in the UK quite yet. Hopefully I will find a way round my lack of space until I can get my hands on a couple.

May 21

As promised, here’s my initial hardware for this NAS. Later posts will discuss how I went further and what hardware needed to change to accommodate.

Motherboard

A tough choice I had to make right at the beginning of the build list was the motherboard. I really wanted something with at least 8 SATA ports, two PCIe x4 slots and integrated graphics. The two PCIe slots with 8-port SATA cards, plus the 8 on-board ports, would have given me 24 ports in total, enough for 2 x 12-disk arrays. Not bad.

It turns out that 8 SATA ports is very rare, and the sort of motherboard that has lots of PCIe slots doesn't have integrated graphics. It wasn't easy to make a short-list, since there's no site (that I could find) that compares all these features. Even manufacturers' websites don't list the features side-by-side. I ended up just ploughing through the sites and noting down which ones were possibilities. Of course, cost played a big factor in my choice.

This is the motherboard I ended up with: Gigabyte GA-MA78G-DS3H. Bear in mind this was 18 months ago so it is probably a lot easier to find an appropriate choice now.


  • Integrated graphics with VGA. VGA is important to me (as is PS/2) because it allows me to run it through a KVM switch with the rest of my servers. This motherboard also happens to have HDMI – talk about overkill for a headless server running a CLI.
  • Gigabit LAN. I expected all current models to have Gigabit LAN, so didn't worry about this when searching. Gigabit LAN is a must for a NAS, but dual ports don't help. It also happens to be on a PCI Express bus, so it won't clog up the internal bandwidth available like My NAS Server 1 did.
  • Three PCIe x1 slots, two x16-length slots (electrically, one x16 + one x4) and 2 PCI slots. I only intended to use the x16-length slots, and while the one that is actually x16 electrically is designed for a discrete graphics card, it seems to work fine with a SATA controller. That's something you should verify before buying, as some motherboards have booting problems if there's a non-graphics card in that slot.
  • Six SATA ports. Unfortunately, I had to trade off SATA ports against cost. There were a few boards with 8 SATA ports but no decent PCIe slots, and some with both of those but no integrated graphics. I probably could have got what I wanted for ~4x what I paid (~£70).
  • One IDE connector. I needed this for the boot disk using an IDE to Compact Flash adaptor.

CPU

I guess the reason I chose AMD over Intel was the cost. There were a couple of options for both, and I'm happy either way, so really, it just came down to the price. AMD systems (in my experience) tend to be cheaper. There's no need for a really fast processor in a dedicated NAS, as there will be other bottlenecks (the LAN connection). Quad-core doesn't help as Linux-RAID isn't multi-threaded, but dual-core has its uses because the second core can deal with network protocols and other overheads that go with filing.

That made it an easy choice – dual-core and the best value in terms of megahertz. I wasn't bothered by power consumption, so ended up with a 2.6GHz AMD Athlon 64 X2 5000+. 64-bit was essential due to the XFS volume size limit mentioned in My NAS Server 1, but pretty much all CPUs are 64-bit now, anyway.

RAM

RAM is pretty cheap, so I went for 4GB of bog-standard stuff. You don't need much RAM to run a (Linux) NAS, but having a bunch spare allows me to put some temporary filesystems in RAM, which reduces wear on my CF card.
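For illustration, the kind of entries I mean (mount points and sizes here are examples, not a prescription; anything like /var/log kept on tmpfs is lost at reboot):

```
# /etc/fstab - keep write-heavy paths in RAM instead of on the CF card
tmpfs  /tmp      tmpfs  defaults,noatime,size=512m  0  0
tmpfs  /var/log  tmpfs  defaults,noatime,size=128m  0  0
```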

Power Supply

A bit of a tricky choice for me. Do I cheap out and go for more than one again, or get something a bit more expensive? Well, based on the fact that a bunch of separate PSUs didn't work out too well for me before, space was limited and I wanted something rock-solid, I splashed out and got something substantial. At over £130, the OCZ 1000W EliteXstream was exactly that. I needed something with a huge amount of 12V current in order to power up loads of drives (around 22) without staggered spin-up.

Many PSUs have a high amount of 12V current available, but also come with a special ‘feature’, namely split-rails. In reality this is a huge con and doesn’t provide any stability as implied. All (well, the vast majority) of PSUs with split-rails aren’t split at all. They only have one transformer for 12V, and merely put a current-limiter on each output. So if you have a PSU that claims to have four 20A rails, it’s really just one 80A rail (although sometimes it’s even less, and they don’t let you use all four to their maximum rating concurrently – another con) with four current-limited outputs.
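The arithmetic behind the con, using the four-20A example from above:

```shell
# "Four 20A rails" versus the single 80A transformer behind them.
advertised_watts=$((4 * 20 * 12))   # 4 rails x 20A x 12V, as marketed
actual_watts=$((80 * 12))           # one 80A 12V transformer
echo "$advertised_watts $actual_watts"   # same 960W either way
```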

This was really annoying for me, as I wanted all the current to be available to the hard-drives. When it is split, you find most of the current goes to PCIe connectors intended for power-hungry graphics cards. That’s not what I wanted and would have required some quite dodgy creative wiring. I was left with very little choice, but the 80A single-rail OCZ had good reviews and I’ve been very happy with it.

Hard Drives

I started with 4 x 1.5TB drives. I believe that was the largest capacity available at the time, and when you factor in the cost-per-port on the host, it makes sense to go for the largest size available, even if the cost-per-gigabyte isn’t quite as good as lower capacity drives. You must also remember that by the time you expand the array to its full size, the cost of each drive will be significantly lower, and most likely excellent in terms of cost-per-gigabyte. In a way, I suppose that may be a disadvantage of this sort of RAID set-up.
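A quick illustration of the cost-per-port point, using made-up round numbers (a per-slot overhead of £30 for bay plus controller share, a 1.0TB drive at £60 and a 1.5TB drive at £100; none of these are real quotes):

```shell
# Effective cost per TB once the slot overhead is included.
# All prices hypothetical; integer division, so results are rounded down.
slot=30
small=$(( (slot + 60) * 10 / 10 ))    # 1.0TB: 90 per TB
large=$(( (slot + 100) * 10 / 15 ))   # 1.5TB: ~86 per TB, despite the
echo "$small $large"                  # worse drive-only cost per TB
```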

Having only 4 HDDs to start with, I didn’t need to worry about getting any SATA controllers for the moment. Quite glad I could put that off for a while, as it was my intention to get 8-port cards but they’re a bit pricey.

If I've forgotten anything, or you want some clarification on something, leave a comment and I'll do my best to answer it.

May 19

When I started this NAS, I thought I was going to be using something like the Norco RPC-4020. This is a server case with 20 hot-swappable SATA bays, so it's an ideal form factor for a NAS.

Having been made aware of it on a US forum, it was the dollar price I encountered first. Typically around $280 (~£190), I thought it was quite a good deal. I searched high and low for it in the UK, but no-one seemed to stock it. I did manage to find a site selling what seemed to be the same hardware, but re-branding it as their own. To my shock and horror, it was in excess of £350.

I went back to the US sources to see if it was plausible to import one. A site I found, along with a few eBay sellers would send it across the Atlantic, but that would cost as much as the case itself. By the time you’ve added VAT to the price on import, it ends up being more or less the same price as the UK website.

There were a few people at AVSForums trying to organise a bulk-buy to get the price down, but when that came to nothing, I thought I'd contact Norco myself. It turns out that the wholesale price was just $10-$20 lower than the typical retail price. I don't know why they had such a slim margin on it; maybe they used it to get customers to buy the PC hardware and hard drives at the same time. This pretty much made me give up hope on a decent rack-mount case with hot-swap bays.

Getting desperate for space, I went to the easy solution. I had found a cheap 4U case that holds up to 11 hard drives, plus I could get hold of it fairly quickly. It was even on offer at the time (~£5 off), although now it looks like the regular price is less than what I first paid anyway.

My next post in this series will discuss the hardware I put in the case.
