Feb 18

I have been maintaining my own SVN server for many years, but in order to share code and make use of managed services, it was time to migrate some of my repositories to Git.

There are many tutorials for SVN-to-Git migration; here’s a customised script which works for me: migrate-svn-to-git.sh. Note that it requires an authors.txt file – there are plenty of adequate resources out there for how to create one.
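
For reference, authors.txt maps each SVN username to a Git identity, one per line, in the form `svnuser = Full Name <email>`. A common way to get the list of usernames is to parse `svn log -q` output; here is a sketch of the text processing, with a sample log line piped in so it runs standalone (the username `alice` is made up):

```shell
# Extract the author field from `svn log -q` style output.
# In a real repository you would pipe `svn log -q` in instead of printf.
printf 'r42 | alice | 2017-01-01 12:00:00 +0000 (Sun, 01 Jan 2017)\n' \
  | awk -F '|' '/^r[0-9]/ { gsub(/^ +| +$/, "", $2); print $2 }' \
  | sort -u
```

Each resulting username then gets an authors.txt line like `alice = Alice Example <alice@example.com>`.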

There is nothing specific about this script for Google Source Repos, that is just my choice for private code so it can make use of Google Cloud Build and gcr.io.

#!/usr/bin/env bash

svnUri=$1
gitUri=$2

if [[ -z "${svnUri}" || -z "${gitUri}" ]]; then
  echo "[ERROR] Usage migrate-svn-to-git.sh <svn_uri> <git_uri>" 1>&2
  exit 1
fi

if [[ ! -f authors.txt ]]; then
  echo "[ERROR] authors.txt is missing" 1>&2
  exit 1
fi

echo "[INFO] Cloning from SVN: ${svnUri}"
git svn clone -s --prefix=svn/ --authors-file=authors.txt "${svnUri}" migration
cd migration || exit 1

echo "[INFO] Creating Git tags"
for t in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn/tags); do git tag $(echo ${t} | sed 's^svn/tags/^^') ${t} && git branch -D -r ${t}; done
echo "[INFO] Creating Git branches"
for b in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn | grep -v trunk); do git branch $(echo ${b} | sed 's^svn/^^') refs/remotes/${b} && git branch -D -r ${b}; done

echo "[INFO] Creating .gitignore file"
git svn show-ignore > .gitignore
git add .gitignore
git commit -m "Added .gitignore"

echo "[INFO] Pushing to Git: ${gitUri}"
git remote add google "${gitUri}"
git push --force google --tags
git push --force google --all

cd -

echo "[INFO] Removing migration directory"
rm -rf migration
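
One detail worth noting in the tag and branch loops: `sed` accepts any character as the substitution delimiter, so `s^svn/tags/^^` uses `^` as the delimiter, which avoids having to escape the slashes in the ref names. In isolation:

```shell
# Strip the svn/tags/ prefix from a ref name; `^` is the sed delimiter,
# so the slashes in the pattern need no escaping.
echo 'svn/tags/1.0' | sed 's^svn/tags/^^'
# → 1.0
```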

Once you have created the Git repo, you need to switch your working copy of a project from SVN to Git. Here’s a script: switch-svn-to-git.sh

#!/usr/bin/env bash

gitUri=$1

if [[ -z "${gitUri}" ]]; then
  echo "[ERROR] Usage switch-svn-to-git.sh <git_uri>" 1>&2
  exit 1
fi

if [[ ! -d .svn ]]; then
  echo "[ERROR] Not a SVN project" 1>&2
  exit 1
fi

echo "[INFO] Finding current branch"
svnBranch=$(svn info | grep '^URL:' | egrep -o '(tags|branches)/[^/]+|trunk' | egrep -o '[^/]+$')
echo "[DEBUG] ${svnBranch}"

echo "[INFO] Cleaning SVN directory"
rm -rf .svn

echo "[INFO] Initialising Git from: ${gitUri}"
git init
git remote add origin "${gitUri}"
git fetch

echo "[INFO] Saving working copy"
git show origin/master:.gitignore > .gitignore
git checkout -b tmp
git add .
git commit -m "local changes"

echo "[INFO] Checking out branch"
if [[ ${svnBranch} != "trunk" ]]; then
  git checkout ${svnBranch}
else
  git checkout master
fi

echo "[INFO] Merging working copy"
git merge --allow-unrelated-histories -X theirs --squash tmp
git branch --delete --force tmp

echo "[INFO] Deleting IntelliJ reference to SVN (if exists)"
test -f .idea/vcs.xml && rm .idea/vcs.xml

echo "[INFO] Done - you may want to set your name / email with: git config user.email <email>"
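
The branch-detection pipeline above can be tried on its own; given a typical `svn info` URL line it reduces to the last path component of the branch or tag (or `trunk`). The repository URL here is hypothetical:

```shell
# Simulate the `svn info` URL line the script parses.
echo 'URL: https://svn.example.com/repo/branches/feature-x' \
  | grep -Eo '(tags|branches)/[^/]+|trunk' \
  | grep -Eo '[^/]+$'
# → feature-x
```
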
Mar 25

Having upgraded to a UHD TV last year, it was time to get a new AV receiver to match. It’s been a long time since I was in the AVR market and a lot has changed.

After doing the research at the end of last year, I thought I’d share the take-aways in this how-to guide.

Since I got my first AVR, two sets of HD codecs have become commonly available. The first set, mainly used on Blu-ray, are:

  • DTS-HD Master Audio
  • Dolby TrueHD

These are very well supported now so you don’t need to worry whether an AVR will be able to decode them.

The more recent set, most likely to be found on 4K Blu-rays, consists of:

  • DTS:X
  • Dolby Atmos

Dolby Atmos is actually fairly commonly available, e.g. BT Sport, Sky Q (Sport and Cinema) and Netflix, so it’s important to ensure your AVR will support these latest codecs if you want any kind of future-proofing.

They are now fairly well established so it’s not too difficult to find support.

The last thing to mention about audio is eARC, the Enhanced Audio Return Channel finalised in the HDMI 2.1 spec. It allows the newer high definition codecs, such as Dolby Atmos and DTS:X, to be passed back from the TV to your AVR. If you really want to future-proof, consider this feature. Personally, I don’t find it necessary – all my equipment is attached to the TV through the AVR, so there is little reason to send the sound back. The obvious exception is when you have a smart TV that you use for streaming Netflix or other services. Since regular ARC supports the first generation high definition codecs, just not the new “3D” ones, I’m happy to live with that – especially since I don’t have a TV that supports eARC!

ARC is important, but common enough that you don’t need to look out for it.

Now for video – there are basically two things you need to look out for.

Firstly, HDMI 4k Ultra HD 60Hz with HDCP 2.2 compatibility. This is essential but not too difficult to find on recent models.

Secondly, and a little more obscure is HLG. Standing for Hybrid Log Gamma, this is the HDR format conceived by the BBC and NHK which will most likely be used by broadcasters in the UK (BBC / iPlayer, Sky Q). There is little content right now, but if you want to see HDR content in the future, you’d better make sure your AVR has HDR pass-through (HDR10, Dolby Vision and HLG).

I ended up choosing the Denon AVR-X1400H. This was a good combination of minimum requirements, price and number of HDMI inputs.

You can find my comparison / shortlist here, though the pricing and models will start to get out of date quite quickly.

Dec 03

While migrating one of my hobby projects from the PHP mysql extension to PDO, I came across this error:

PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active.  Consider using PDOStatement::fetchAll().  Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.'

A quick search on the web suggested this happens when you don’t fetch all rows from a query. I knew this wasn’t the case and didn’t want to just enable the buffered query attribute as I felt something else was wrong.

Turns out this problem came about as I was trying to migrate my MySQL connection properties, previously defined with:

[code lang="php"]define('UTC_OFFSET', date('P'));
mysql_query("SET time_zone='" . UTC_OFFSET . "';");
mysql_query("SET NAMES 'utf8' COLLATE 'utf8_general_ci';");[/code]

The natural change was to add these two statements to PDO::MYSQL_ATTR_INIT_COMMAND (separated by a semicolon). However, that’s where the problem lies: the init command runs as a single statement. The SET command allows both variables to be specified at once (comma-separated), hence the right way of doing it is:

[code lang="php"]PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8' COLLATE 'utf8_general_ci', time_zone = '" . UTC_OFFSET . "'"[/code]

Credit: Stack Overflow

Bonus: Setting the timezone with a UTC offset allows you to use a zone that PHP knows about, but the MySQL server doesn’t. That way it can be set with the ini setting date.timezone or date_default_timezone_set and doesn’t need to be modified in two places if it needs to be changed.

Oct 02

After upgrading to FreeBSD 9.3, my custom Postfix installation was overwritten with postfix-current-3.2.20160925,4. This caused the following entries in my maillog:

Oct 2 19:55:04 myhostname postfix/master[1481]: warning: process /usr/local/libexec/postfix/smtp pid 90855 exit status 1
Oct 2 19:55:04 myhostname postfix/master[1481]: warning: /usr/local/libexec/postfix/smtp: bad command startup -- throttling
Oct 2 19:56:04 myhostname postfix/smtp[90864]: warning: unsupported SASL client implementation: cyrus
Oct 2 19:56:04 myhostname postfix/smtp[90864]: fatal: SASL library initialization

Luckily it was a quick fix, as the packaging bug I ran into previously has since been fixed.

$ sudo pkg install postfix-current-sasl

This will uninstall postfix-current and install a version with SASL support, which can be verified as follows:

$ postconf -a
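
One note of caution here: per the postconf(1) man page, `-a` lists the available SASL *server* plug-in types, while `-A` lists the *client* plug-in types, and it was the smtp client that failed above, so checking both is safest:

```
$ postconf -a   # available SASL server plug-in types (e.g. cyrus, dovecot)
$ postconf -A   # available SASL client plug-in types (used by smtp(8))
```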

Jan 26

Ever tried sending mail from your own server and ended up on the Spamhaus Policy Block List (PBL)? I have, a couple of times.

From my FreeBSD server, I get daily emails: “daily run output” and “security run output”. I’m not particularly interested in these, but it’s important that I get SMART notifications.

What I noticed is that whenever my IP address changes (every couple of months) the emails start coming through again. If you look at the mail log (/var/log/maillog), you’ll see something like
Dec 24 03:14:24 myhostname postfix/smtp[6740]: 9B8A5CB111C: to=<user@example.com>, orig_to=<user>, relay=smtp.examplehost.net[x.x.x.x]:25, delay=43, delays=0/0.22/21/21, dsn=5.0.0, status=bounced (host smtp.examplehost.net[x.x.x.x] said: 550-"JunkMail rejected - me.exampleisp.com (myhostname.mydomain.net) 550-[x.x.x.x]:63909 is in an RBL, see 550 https://www.spamhaus.org/query/ip/x.x.x.x" (in reply to RCPT TO command))

So, helpfully, they’ve given a link to look up why you’re blocked. In my case it’s due to using an unauthenticated mail port (25). Since I had this problem before, I thought I had set up authentication, but the FAQ makes it very clear that you cannot get this error if you have authentication enabled.

So what else does it say in the mail log? Well, just before that line:
Dec 24 03:13:54 myhostname postfix/smtp[6753]: warning: smtp_sasl_auth_enable is true, but SASL support is not compiled in

This is a problem with the Postfix package for FreeBSD. I can’t find any other way around it other than to compile Postfix yourself from Ports. Here’s a link to the (unresolved) request in FreeBSD Bugzilla to add a package with it compiled in. So if you’re looking for a FreeBSD package with SMTP SASL authentication – give up (unless that bug has been resolved since the time of posting).

I’m not going to go into the details of how to set up SMTP + SASL on FreeBSD; there are already many guides that do that. I found these helpful:

One other small roadblock I ran into was trying to use port 465. My web host specifies that port for authenticated SSL, but I found the following error in the logs:
CLIENT wrappermode (port smtps/465) is unimplemented
instead, send to (port submission/587) with STARTTLS
status=deferred (lost connection with smtp.examplehost.net[x.x.x.x] while receiving the initial server greeting

My host didn’t give any indication that it supported SMTP on port 587, and if you search for this problem on the web you’ll find many people trying to solve it with stunnel. A quick telnet to my host on 587 showed the port was open, so I just gave it a try. Lo and behold, Postfix authenticated and sent my mail. So give port 587 a try even if your relayhost provider recommends 465.
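
For completeness, the relayhost side of this looks roughly like the following in main.cf. This is a minimal sketch: the path is the usual FreeBSD one, the hostname is the placeholder used above, and the sasl_passwd map still needs to be created and hashed with postmap:

```
# /usr/local/etc/postfix/main.cf (values illustrative)
relayhost = [smtp.examplehost.net]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/usr/local/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```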

Dec 25

It’s Christmas time, which is when I get a chance to upgrade my home servers. This year one of them needed a double-upgrade: 14.10 to 15.04 to 15.10.

After backing up (which is essentially a tar command; more on that in another post), I proceeded with the first upgrade.

Everything seemed to go smoothly but on boot, the system dropped into an emergency mode shell.

At least it showed a couple of errors:

  1. acpi pcc probe failed
  2. error getting authority error initializing authority

After a quick search, I found that the first is actually just a warning and can be ignored (source). If I had more time, I would consider fixing the root cause.

The second one was a little more tricky. The error doesn’t really indicate the real problem but a few people have found it to be caused by an invalid fstab entry.

In the recovery shell, run the following command to find which one (thanks):

[code lang="bash"]journalctl -xb[/code]

For most people, it’s due to having a UUID specified that no longer exists (either due to reformatting a drive or removing the disk all together). In my case it was because I had several tmpfs mounts which are no longer allowed:

tmpfs           /tmp           tmpfs   defaults,noatime           0 0
tmpfs           /var/lock      tmpfs   defaults,noatime           0 0
tmpfs           /var/run       tmpfs   defaults,noatime           0 0
tmpfs           /var/tmp       tmpfs   defaults,noatime           0 0


The error can be alleviated either by adding nofail to the options or just removing / commenting out the mount.
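
As a sketch, the nofail variant of one of the entries above looks like this; with it, boot continues even if the mount cannot be made:

```
tmpfs           /tmp           tmpfs   defaults,noatime,nofail    0 0
```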

The reason I used a tmpfs (RAM disk) for /tmp and others was when I originally set up this server (and some others) the boot disk was an 8GB CompactFlash card. Temp files would create undesirable wear on the device but I’ve since moved this system to an SSD so that’s not an issue anymore. Hence, I just deleted the entries.

Dec 30

Over the holidays, I rebuilt my CCTV server. Rather than trying to reuse the installation from the previous disk, I thought it’d be easier just to install everything fresh. Hence, I followed the instructions on the ZoneMinder Wiki.

Following on from this rebuild, I was tidying up my custom viewers which had hard-coded monitor IDs. Of course the correct way to do this would be via an API. Is there an API in ZoneMinder? Well according to the “latest” docs, it should be included since 1.27.

[code lang="bash"]curl http://wiggum/zm/api/monitors.json
…404 Not Found…[/code]

So I checked if it was actually there or not:

[code lang="bash"]iain@wiggum:~$ ls /usr/share/zoneminder/
ajax  cgi-bin  css  db  events  graphics  images  includes  index.php  js  lang  skins  sounds  temp  tools  views[/code]


I’m not sure why it’s not there (you can see it on GitHub), but here’s how I solved it until the package is updated properly. (I’m omitting the use of sudo where needed.)

[code lang="bash"]cd /usr/src
git clone https://github.com/ZoneMinder/ZoneMinder.git --branch release-1.28 zoneminder-1.28
mkdir zoneminder-1.28/web/api/app/tmp
chown www-data:www-data zoneminder-1.28/web/api/app/tmp[/code]

[code lang="bash"]vi zoneminder-1.28/web/api/.htaccess[/code]
Add: RewriteBase /zm/api

[code lang="bash"]vi zoneminder-1.28/web/api/app/webroot/.htaccess[/code]
Add: RewriteBase /zm/api

[code lang="bash"]cd zoneminder-1.28/web/api/app/Config/
cp core.php.default core.php
cp database.php.default database.php
vi database.php[/code]
Change the database settings (host, login, database, password) to match: /etc/zm/zm.conf

[code lang="bash"]vi /etc/apache2/conf-available/zoneminder-api.conf[/code]
Add the following:
[code]Alias /zm/api /usr/src/zoneminder-1.28/web/api
<Directory /usr/src/zoneminder-1.28/web/api>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>[/code]

Enable the conf, restart apache and you’re done:
[code lang="bash"]a2enconf zoneminder-api
apachectl graceful[/code]

Don’t forget to enable mod_rewrite if it isn’t already:
[code lang="bash"]a2enmod rewrite[/code]
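
With the alias enabled and the rewrites in place, the request that returned a 404 earlier should now succeed:

```
curl http://wiggum/zm/api/monitors.json
```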

Jan 23

We already know we’re using one of those eBuyer value cases as a starting point. The first thing to do was strip out all the internals – we just needed the bare case. It was pretty simple to dismantle as almost all of it unscrewed.

Next, the reinforcement I mentioned in the previous post. These were 3mm x 25mm steel strips, cut and epoxied to the bottom. Strategically placed, they raise the PMPs off the bottom slightly. Combined with 20mm standoffs, there is enough space for the connectors underneath. Those are just standard PC-modder parts wired together as specified in the diagram from the last post.

The drives are not screwed in, they just rest there – connectors down. To prevent them from falling over, there is a grid at the top. This was made by combining 1mm x 3mm steel strips with some dowels. Strangely, the dowel of this size was difficult to find at a reasonable price, so we ended up using 2mm diameter polyester fibreglass rods. For vibration dampening, the only thing is rubber washers securing the PMPs.

Holes in the front for the fans and the case is complete (almost). Unnecessary, but for the cool factor we wired up LEDs for each drive. The PMPs come with pin outs for the PMP status as well as each drive. Quite a lot of effort to connect so many LEDs (the right way round), but a good indicator when switching on the case.

Feb 05

Recently I upgraded one of my servers from Ubuntu Server 11.10 to 12.10 (via 12.04). Unfortunately, this broke AFP.

When connecting, I got the error “Something wrong with the volume’s CNID DB”.

I’m pretty sure I’ve had this error before, but the standard fix of deleting .AppleDB didn’t work.

After reading a few more up-to-date tutorials and verifying my configurations, I finally sussed it.

[code lang="bash"]root@burns:/home/iain# dbd
dbd: error while loading shared libraries: libdb-4.8.so: cannot open shared object file: No such file or directory[/code]
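
As an aside, a quick way to see exactly which shared libraries a binary wants, with missing ones flagged as “not found”, is ldd. The path to dbd is an assumption here:

```
ldd /usr/bin/dbd | grep 'not found'
```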

So I checked to see if I had this anywhere.

[code lang="bash"]root@burns:/home/iain# locate libdb-4.8[/code]

No results.

[code lang="bash"]root@burns:/home/iain# locate libdb
/var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb[/code]

That was handy, so I installed it.

[code lang="bash"]root@burns:/home/iain# dpkg -i /var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb
Selecting previously unselected package libdb4.8:amd64.
(Reading database ... 196079 files and directories currently installed.)
Unpacking libdb4.8:amd64 (from .../libdb4.8_4.8.30-11ubuntu1_amd64.deb) ...
Setting up libdb4.8:amd64 (4.8.30-11ubuntu1) ...
Processing triggers for libc-bin ...
ldconfig deferred processing now taking place[/code]

And dbd started working.

[code lang="bash"]root@burns:/home/iain# dbd
Usage: dbd [-e|-t|-v|-x] -d [-i] | -s [-c|-n]| -r [-c|-f] | -u[/code]

After deleting .AppleDB for good measure and restarting Netatalk, all was well.

I have no idea why this was missing, or whether this is the correct fix, but it seems to work without side effects. If you don’t have the .deb, I guess this would also work:

[code lang="bash"]apt-get install libdb4.8[/code]

May 29

First and foremost, I want to thank my friend Mat for helping with the design. I couldn’t have done it without you!

As you will recall from part IV, I had a second 4U case used only to house drives. This was the starting point for my NAS server 2, case #2. I would have one case for the host system (and some drives) and a second case just for drives.

If you look at the Backblaze design, you’ll see it has space for a motherboard, 2 PSUs and 46 drives (1 is for the host OS). That’s an awful lot to squeeze into a single case! It works for them as they are racked up in a data centre with deep racks. Something would have to give in my 545mm deep ‘value’ case.

Forty-five drives is a nice number, as it gives you 3 x 15-drive arrays. I think that’s the most it’s sensible to have in a RAID 6 configuration. Fitting up to 11 drives in the first case, I needed to cram at least 34 into case #2.
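
As a back-of-envelope check (the 4 TB drive size below is purely illustrative, not from the build): RAID 6 spends two drives per array on parity, so three 15-drive arrays leave 3 × 13 data drives:

```shell
drives_per_array=15; arrays=3; size_tb=4   # size_tb is a hypothetical drive size
data_drives=$(( arrays * (drives_per_array - 2) ))
echo "${data_drives} data drives, $(( data_drives * size_tb )) TB usable"
# → 39 data drives, 156 TB usable
```
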

Having verified SATA port multipliers work perfectly well, I was happy to take the same approach as Backblaze with their backplanes. What I decided to think about a bit more carefully than they did was the host SATA ports. They put 20 drives on the PCI bus, but I knew from experience this would degrade performance.

Since I already had a 4-port PCIe card, it made sense to use that for 4 port-multipliers. This would give me 20 ports on a PCIe x4 bus – the drives won’t all operate at maximum speed at the same time, but there’s still plenty of throughput to saturate a gigabit LAN. This puts the SATA port count at 26 so far. Looking for 45 in total, I would need another 19. The only practical way to do this, based on the number of PCIe slots I had, was another 4-port card, each port hosting a 5-port multiplier/backplane. So instead of cramming one standalone port-multiplier and associated drives into the main case, we decided to put them all in case #2.

Spanning power between cases didn’t seem like the best idea, and with 40 drives now needing to be catered for, 2 PSUs had to fit in the second case. Here’s a bird’s-eye and front on view of the layout. Thanks again, Mat, for coming up with it.

The black bars going left-to-right are for reinforcement – 40 hard drives weigh a lot! The other thing to note is the orientation of the drives. Keeping them that way allows efficient air-flow, from front to back.

The original intention for my 1000W PSU was to power the whole system and 22 drives. Since I’m now trying to support up to 45 drives and keep the power in each case separate, the power arrangements needed to be rethought.

I allocated the OCZ PSU to case #2 and re-purposed an old PSU I had for the main system. I calculated that the amperage available on each voltage rail was sufficient to supply 5 of the 8 port-multipliers. Maybe it would have been ‘neater’ to split it four and four, but this way, when upgrading, I wouldn’t need such a beefy supply.

Each backplane requires two molex feeds. Here’s a wiring diagram (credit once again to Mat).

Now we know pretty much what we’re building, in the next post I’ll talk about the parts and construction.
