Dec 29

Have you ever looked through your server logs and seen something like this?

[MY-013360] [Server] Plugin mysql_native_password reported: ''mysql_native_password' is deprecated and will be removed in a future release. Please use caching_sha2_password instead'

This is because, as you can see in the MySQL 8.1.0 release notes, the plugin is now deprecated.

Luckily, the warning also tells you the solution – switch to caching_sha2_password instead. However, it doesn’t tell you how to do that.

This needs to be done on a per-user basis, and you also need to know the password of the user. To find out which users to update, run the following SQL:

SELECT user, host, plugin FROM mysql.user;

(Note, you can enter a SQL console on the host machine with something like mysql -u root -p)

This will give you an output something like this:

mysql> SELECT user, host, plugin FROM mysql.user ORDER BY user;
+------------------+--------------+-----------------------+
| user             | host         | plugin                |
+------------------+--------------+-----------------------+
| debian-sys-maint | localhost    | mysql_native_password |
| example_user     | localhost    | mysql_native_password |
| mysql.infoschema | localhost    | caching_sha2_password |
| mysql.session    | localhost    | mysql_native_password |
| mysql.sys        | localhost    | caching_sha2_password |
| mysqld_exporter  | localhost    | mysql_native_password |
| root             | localhost    | mysql_native_password |
+------------------+--------------+-----------------------+

As you can see, some users are configured with the mysql_native_password plugin. All we need to do is alter the users and update them to caching_sha2_password.

Here’s an example:

ALTER USER 'mysqld_exporter'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'StrongPassword';

You might want to FLUSH PRIVILEGES; once you’re done.
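
To double-check nothing was missed, you can re-run the earlier query filtered to the old plugin – once every user has been altered, this should return an empty set (a quick sketch, using the same mysql.user table as above):

-- Any rows returned here still need switching to caching_sha2_password
SELECT user, host FROM mysql.user WHERE plugin = 'mysql_native_password';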

Feb 18

I have been maintaining my own SVN server for many years, but in order to share code and make use of managed services, it was time to migrate some of my repositories to Git.

There are many tutorials for svn-git migration; here’s a customised script which works for me: migrate-svn-to-git.sh. Note, it requires an authors.txt file – there are plenty of adequate resources out there for how to create one.
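
For the record, a rough way to generate authors.txt is to pull every committer name out of the SVN history and stub out an entry for each – a sketch only, where the SVN URL and example.com addresses are placeholders you would fix up by hand:

# Build a skeleton authors.txt from the repository history (replace the SVN URL and emails)
svn log -q https://svn.example.com/repos/myproject | awk -F '|' '/^r/ { sub(/^ /, "", $2); sub(/ $/, "", $2); print $2" = "$2" <"$2"@example.com>" }' | sort -u > authors.txt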

There is nothing specific to Google Source Repos in this script; that is just my choice for private code so it can make use of Google Cloud Build and gcr.io.

#!/usr/bin/env bash

svnUri="$1"
gitUri="$2"

if [[ -z "${svnUri}" || -z "${gitUri}" ]]; then
    echo "[ERROR] Usage: migrate-svn-to-git.sh <svn_uri> <git_uri>" 1>&2
    exit 1
fi

if [[ ! -f authors.txt ]]; then
    echo "[ERROR] authors.txt is missing" 1>&2
    exit 1
fi

echo "[INFO] Cloning from SVN: ${svnUri}"
git svn clone -s --prefix=svn/ --authors-file=authors.txt "${svnUri}" migration
cd migration || exit 1

echo "[INFO] Creating Git tags"
# Turn each svn/tags/* remote ref into a proper Git tag, then drop the remote branch
for t in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn/tags); do
    git tag "$(echo "${t}" | sed 's^svn/tags/^^')" "${t}" && git branch -D -r "${t}"
done

echo "[INFO] Creating Git branches"
# Turn each remaining svn/* remote ref (except trunk) into a local branch
for b in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn | grep -v trunk); do
    git branch "$(echo "${b}" | sed 's^svn/^^')" "refs/remotes/${b}" && git branch -D -r "${b}"
done

echo "[INFO] Creating .gitignore file"
git svn show-ignore > .gitignore
git add .gitignore
git commit -m "Added .gitignore"

echo "[INFO] Pushing to Git: ${gitUri}"
git remote add google "${gitUri}"
git push --force google --tags
git push --force google --all

cd -

echo "[INFO] Removing migration directory"
rm -rf migration
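
Usage is just the two URIs, for example (both URIs below are made up – substitute your own SVN repository and Git remote):

./migrate-svn-to-git.sh https://svn.example.com/repos/myproject https://source.developers.google.com/p/my-project/r/myproject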

Once you have created the Git repo, you need to switch your working copy of a project from SVN to Git. Here’s a script: switch-svn-to-git.sh

#!/usr/bin/env bash

gitUri="$1"

if [[ -z "${gitUri}" ]]; then
    echo "[ERROR] Usage: switch-svn-to-git.sh <git_uri>" 1>&2
    exit 1
fi

if [[ ! -d .svn ]]; then
    echo "[ERROR] Not a SVN project" 1>&2
    exit 1
fi

echo "[INFO] Finding current branch"
# Extract the tag/branch name (or trunk) from the working copy's SVN URL
svnBranch=$(svn info | grep '^URL:' | egrep -o '(tags|branches)/[^/]+|trunk' | egrep -o '[^/]+$')
echo "[DEBUG] ${svnBranch}"

echo "[INFO] Cleaning SVN directory"
rm -rf .svn

echo "[INFO] Initialising Git from: ${gitUri}"
git init
git remote add origin "${gitUri}"
git fetch

echo "[INFO] Saving working copy"
git show origin/master:.gitignore > .gitignore
git checkout -b tmp
git add .
git commit -m "local changes"

echo "[INFO] Checking out branch"
gitBranch="master"
if [[ "${svnBranch}" != "trunk" ]]; then
    gitBranch="${svnBranch}"
fi
git checkout "${gitBranch}"

echo "[INFO] Merging working copy"
git merge --allow-unrelated-histories -X theirs --squash tmp
git branch --delete --force tmp

echo "[INFO] Deleting IntelliJ reference to SVN (if exists)"
test -f .idea/vcs.xml && rm .idea/vcs.xml

echo "[INFO] Done - you may want to set your name / email with: git config user.email <email>"
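
As before, it takes a single argument and is run from the root of the SVN working copy (the URI is again just an example, and this assumes the script is on your PATH):

cd ~/projects/myproject
switch-svn-to-git.sh https://source.developers.google.com/p/my-project/r/myproject
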
Dec 25

It’s Christmas time, which is when I get a chance to upgrade my home servers. This year one of them needed a double-upgrade: 14.10 to 15.04 to 15.10.

After backing-up (which is essentially a tar command, more on that in another post), I proceeded with the first upgrade.

Everything seemed to go smoothly, but on boot the system dropped into an emergency-mode shell.

At least it showed a couple of errors:

  1. acpi pcc probe failed
  2. error getting authority error initializing authority

After a quick search, I found that the first is actually just a warning and can be ignored (source). If I had more time, I would consider fixing the root cause.

The second one was a little more tricky. The error doesn’t really indicate the real problem but a few people have found it to be caused by an invalid fstab entry.

In the recovery shell, run the following command to find which one (thanks):

[code lang="bash"]journalctl -xb[/code]

For most people, it’s due to having a UUID specified that no longer exists (either due to reformatting a drive or removing the disk altogether). In my case it was because I had several tmpfs mounts which are no longer allowed:

tmpfs           /tmp           tmpfs   defaults,noatime           0 0
tmpfs           /var/lock      tmpfs   defaults,noatime           0 0
tmpfs           /var/run       tmpfs   defaults,noatime           0 0
tmpfs           /var/tmp       tmpfs   defaults,noatime           0 0

The error can be alleviated either by adding nofail to the options or just removing / commenting out the mount.
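
For example, keeping one of the mounts but marking it as non-fatal would look something like this (a sketch – the same applies to UUID-based entries):

tmpfs           /tmp           tmpfs   defaults,noatime,nofail   0 0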

The reason I used a tmpfs (RAM disk) for /tmp and the others is that when I originally set up this server (and some others), the boot disk was an 8GB CompactFlash card. Temp files would create undesirable wear on the device, but I’ve since moved this system to an SSD, so that’s not an issue anymore. Hence, I just deleted the entries.

Dec 30

Over the holidays, I rebuilt my CCTV server. Rather than trying to reuse the installation from the previous disk, I thought it’d be easier just to install everything fresh. Hence, I followed the instructions on the ZoneMinder Wiki.

Following on from this rebuild, I was tidying up my custom viewers, which had hard-coded monitor IDs. Of course, the correct way to do this would be via an API. Is there an API in ZoneMinder? Well, according to the “latest” docs, it should be included since 1.27.

[code lang="bash"]curl http://wiggum/zm/api/monitors.json
…404 Not Found…[/code]

So I checked if it was actually there or not:

[code lang="bash"]iain@wiggum:~$ ls /usr/share/zoneminder/
ajax  cgi-bin  css  db  events  graphics  images  includes  index.php  js  lang  skins  sounds  temp  tools  views[/code]

Nope.

I’m not sure why it’s not there (you can see it on GitHub), but here’s how I solved it until the package is updated properly. (I’m omitting the use of sudo where needed.)

[code lang="bash"]cd /usr/src
git clone https://github.com/ZoneMinder/ZoneMinder.git --branch release-1.28 zoneminder-1.28
mkdir zoneminder-1.28/web/api/app/tmp
chown www-data:www-data zoneminder-1.28/web/api/app/tmp[/code]

[code lang="bash"]vi zoneminder-1.28/web/api/.htaccess[/code]
Add: RewriteBase /zm/api

[code lang="bash"]vi zoneminder-1.28/web/api/app/webroot/.htaccess[/code]
Add: RewriteBase /zm/api

[code lang="bash"]cd zoneminder-1.28/web/api/app/Config/
cp core.php.default core.php
cp database.php.default database.php
vi database.php[/code]
Change the database settings (host, login, database, password) to match: /etc/zm/zm.conf
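If you can’t remember what they are, something like this should pull the relevant values out (assuming the standard ZM_DB_* settings are present in zm.conf):
[code lang="bash"]grep -E 'ZM_DB_(HOST|NAME|USER|PASS)' /etc/zm/zm.conf[/code]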

[code lang="bash"]vi /etc/apache2/conf-available/zoneminder-api.conf[/code]
Add the following:
[code]Alias /zm/api /usr/src/zoneminder-1.28/web/api
<Directory /usr/src/zoneminder-1.28/web/api>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>[/code]

Enable the conf, restart apache and you’re done:
[code lang="bash"]a2enconf zoneminder-api
apachectl graceful[/code]

Don’t forget to enable mod_rewrite if it isn’t already:
[code lang="bash"]a2enmod rewrite[/code]
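
At this point, the request that returned a 404 earlier should now come back with JSON (the hostname is obviously specific to my setup):
[code lang="bash"]curl http://wiggum/zm/api/monitors.json[/code]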

Feb 05

Recently I upgraded one of my servers from Ubuntu Server 11.10 to 12.10 (via 12.04). Unfortunately, this broke AFP.

When connecting, I got the error “Something wrong with the volume’s CNID DB”.

I’m pretty sure I’ve had this error before, but the standard fix of deleting .AppleDB didn’t work.

After reading a few more up-to-date tutorials and verifying my configurations, I finally sussed it.

[code lang="bash"]root@burns:/home/iain# dbd
dbd: error while loading shared libraries: libdb-4.8.so: cannot open shared object file: No such file or directory[/code]

So I checked to see if I had this anywhere.

[code lang="bash"]root@burns:/home/iain# locate libdb-4.8[/code]

No results.

[code lang="bash"]root@burns:/home/iain# locate libdb

/var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb
…[/code]

That was handy, so I installed it.

[code lang="bash"]root@burns:/home/iain# dpkg -i /var/cache/apt/archives/libdb4.8_4.8.30-11ubuntu1_amd64.deb
Selecting previously unselected package libdb4.8:amd64.
(Reading database … 196079 files and directories currently installed.)
Unpacking libdb4.8:amd64 (from …/libdb4.8_4.8.30-11ubuntu1_amd64.deb) …
Setting up libdb4.8:amd64 (4.8.30-11ubuntu1) …
Processing triggers for libc-bin …
ldconfig deferred processing now taking place[/code]

And dbd started working.

[code lang="bash"]root@burns:/home/iain# dbd
Usage: dbd [-e|-t|-v|-x] -d [-i] | -s [-c|-n]| -r [-c|-f] | -u

…[/code]

After deleting .AppleDB for good measure and restarting Netatalk, all was well.

I have no idea why this was missing, or whether it is the correct fix, but it seems to work without side effects. If you don’t have the .deb, I guess this would also work:

[code lang="bash"]apt-get install libdb4.8[/code]

Feb 14

Having filled up my first RAID volume, I had to add a new mount and symlink the extra directories into the network share point.

This worked absolutely fine for AFP, but when accessing the share in Windows via SMB, the symlinked folders were inaccessible. Fortunately, there is a simple solution.

Just add this to the [global] section of your /etc/samba/smb.conf.

[code lang="bash"]follow symlinks = yes
wide links = yes
unix extensions = no[/code]
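
The change won’t take effect until Samba re-reads its configuration – on Ubuntu, something like this should do it:
[code lang="bash"]service smbd restart[/code]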

There are apparently security reasons why this is disabled in the first place, but nothing I (think I) need to worry about.

Nov 13

Having run completely out of space, I was all ready to start my next array with 2TB hard drives. Surprisingly, the same day, I read on Engadget that Western Digital had started shipping 3TB drives.

With the cost overhead of housing each drive, and the limit on the number of drives I can accommodate, using the highest-capacity drive available for an array is the best option.

Now, I haven’t yet explained how My NAS Server 2 evolved, but to give you a sneak peek – it uses SATA port-multipliers. This brings me to the point of this post…

The Engadget post regarding these drives seemed to indicate there could be some issues seeing all the space without special drivers. After doing a bit of research, I think there is only a problem when you are trying to boot from the drive. As my NAS boots off a CF card, it’s a non-issue for me. More importantly, I read a conversation on the linux-raid mailing list which seems to indicate there shouldn’t be a problem with Linux, or the port-multipliers supporting these drives right out the box.

This is good news as I will be able to use them to build my next array. Unfortunately, they are vastly more expensive than 2TB drives, and on top of that, they don’t seem to be available in the UK quite yet. Hopefully I will find a way around my lack of space until I can get my hands on a couple.

May 16

After starting the series of posts My NAS Server 2, I thought I should take a few moments to write about the experiences of my first NAS server. It should help justify the decisions I made with the second NAS that you may otherwise have thought I’d overlooked. I felt it would be sensible to go down the route of a NAS after I filled up my first 1TB external hard-drive very quickly.

Please bear in mind I wrote this as a collection of thoughts and memories from the last three years – it may not all be coherent.

Type of set-up

Originally, I thought there were two routes I could go down:

  1. Pre-built NAS device.
  2. Build-your-own NAS.

I pretty much instantly ruled out option 1 due to cost. The price of these devices (e.g. Drobo) was extortionate compared to using a low-spec PC. Pre-built devices do have their advantages, e.g. being quiet and stylish, but I am fortunate enough to be able to hide mine away in the loft, so those don’t matter to me. Once I had decided to build a NAS, one of the early decisions was…

Hardware or software RAID

Some of you may be thinking “Hold on, why are you jumping into RAID at all?”. Well I’m afraid I don’t really have a very good answer to that – It just ‘made sense’ to me. If you really want a reason, how about it gives you a single volume?

Before I explain my reasoning, let me just get ‘fake-RAID’ out of the way. Some people think they have free hardware RAID built into their motherboard – this isn’t wholly correct. While RAID 0 and 1 (and maybe even 0+1) can be handled in the motherboard hardware, RAID 5 or 6 never is. It actually uses a software layer running in Windows to handle the parity etc, which eats CPU cycles. That is why it is fake-RAID, and I’m ruling it out.

So now I’ve got two real choices, which is the best? For me, software RAID. Why? Cost. Hardware RAID controllers are very expensive and are limited in the number of drives they support. As far as I’m concerned, hardware RAID is for enterprise where price is a non-issue and performance is everything. This server was for cheap, mass storage.

Another problem with hardware RAID is that if the controller dies, your array goes with it. It may be possible to replace the controller with one of the exact same model and firmware to get the array back up, but there’s no guarantees, and again, it’s very expensive. Plus, by the time the controller breaks, you probably won’t be able to get another one the same, because they’ll have been replaced with newer models by then. Ideally, you’ll buy two from the offset and leave one in the box on a shelf – very expensive.

Software RAID allows for any hardware, and even a complete change of hardware while keeping the array intact. This makes it very flexible and a good choice for me. Now on to the variants…

unRAID

unRAID is a commercial OS that you put on your PC to turn it into a NAS. It’s designed to run off a USB stick, so that saves a SATA port, but it has its limitations. When I built my first NAS, I believe there was a maximum of 10 (or so) drives allowed in an array. Now it looks like they have a ‘Basic’ version (free) that supports 3 drives, a ‘Plus’ version ($69) with up to 6 drives, and a ‘Pro’ version ($119) that supports 20 drives. The main feature of this OS is that it provides redundancy but allows different-sized drives. Quite nice for some people, but not what I want or need. Add to that the price and I’m not interested.

FreeNAS

This was actually my chosen option to begin with, but I didn’t stick with it for long. Again, this is a pre-configured OS designed to go on a memory stick. It has the advantage of being free and easy to configure. Unfortunately, I had very little success when using it to create a RAID – it was too unstable and hence unusable. On top of this, I don’t think it even supported RAID 6.

Linux-RAID

Configured with mdadm, this is the option I turned to when FreeNAS failed me. You can use it on any flavour of Linux – I went for Ubuntu Server as it was a popular choice at the time and was well supported. I decided not to use the desktop version not only because, well, it’s a server, but there was no need for the overhead of a GUI that I would never see. It’s also true that most of the configuration isn’t possible through a GUI, so I’d have been firing up the CLI either way.

Linux-RAID is very advanced, stable and flexible. It’s easy to add drives and it supports RAID 5, 6 and more. That makes it the perfect choice for me. While it’s designed to be run off a normal hard-drive, it can be run off a USB stick. I chose to use a compact-flash to IDE adaptor as this kept the SATA ports free, and solid-state memory tends to be more reliable than mechanical hard drives. I should point out that even if the OS did become un-bootable, it doesn’t matter to the array in any way – it can be re-assembled by any other variant of Linux, even a live-CD.
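
To illustrate that last point, re-assembling the array from a rescue environment is roughly this – a sketch with example device names; mdadm can usually identify the members from their superblocks:

[code lang="bash"]# Let mdadm scan for array members and assemble whatever it finds
mdadm --assemble --scan
# Or name the members explicitly (device names are examples)
mdadm --assemble /dev/md0 /dev/sd[b-i]1[/code]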

Windows / Home / Server

No thanks. OK, I’ll elaborate – I don’t trust it, I don’t like it. What, you need a better answer? Fine…

Windows has limited RAID functionality so that kills it right there. It does have a ‘duplication’ utility where it will make a second copy of selected data, but that doubles the space required for where you want redundancy. It’s quite good in that it will manage your data across multiple drives, but I’ve already decided to take a proper RAID approach. Also, you start running into problems if you have large filesystems. You need to ensure you set appropriate cluster sizes etc. Not sure if you can add capacity on-the-fly, either. Think I’ll give that a miss.

ZFS

This wasn’t really an option back when I constructed my first NAS, but I’ll talk about it here for the sake of completeness.

ZFS is a filesystem designed by Sun that includes the option of redundancy. It’s free to use and runs on OpenSolaris and a few others. Running only on those OSs, the hardware support becomes limited, which is not great when you’re trying to use (unbranded) consumer hardware to keep costs down. But that’s not the only problem with it. ZFS’s redundancy is called ‘RAID-Z’, which to all (our) intents and purposes is the same as RAID 5. There’s also RAID-Z2, which is equivalent to RAID 6. The problem with this is that you can’t expand the capacity once a pool has been created. I would probably recommend ZFS if your hardware happens to be compatible with OpenSolaris and you know for sure that you don’t want to change the capacity of your NAS. Something tells me that’s not a common starting point, which is a real shame. Note: expanding the capacity of a RAID-Z pool is a feature expected to be added to ZFS in the future, so it won’t be useless forever.

Hardware

As the point of the server was to be cheap, I ordered two basic PCs off eBay for <£100 each, rather than speccing out a new PC. On top of that, I got a 128MB USB key (as I intended to use FreeNAS) and a Gigabit LAN card. After little success with that hardware, I ended up buying a used motherboard + CPU (3GHz P4) + RAM from a friend for £90. It had built-in Gb LAN, 5 PCI card slots and 4 on-board SATA ports. Using 5 x 4-port SATA cards, I gained a total of 24 SATA ports.

This wasn’t a particularly bad set-up. It gave reasonable speeds of 40MB/s up & down over the network. Bearing in mind the Ethernet port was on the same bus as the SATA PCI cards (133MB/s theoretical maximum), it’s pretty much as high as you could expect.

As for casing and powering 24 drives – that was a little tricky. Casing turned out to be pretty hard – there aren’t really any hard drive racks with a reasonable price-tag. In the end I decided to build my own. All it consisted of was two sheets of aluminium with the drives screwed between them and a few fans attached. Here’s a picture:

Hard Drive Rack

DIY hard-drive rack.

For powering the drives, I could have tried one humungous supply, but firstly they’re very expensive and secondly, I’m not sure you can even get one that will do 24 drives. Powering on 24 hard drives at the same time can use up to 70A of 12-volt current and a huge amount of sustained 5V. Ideally you’d use staggered spin-up, but this is only a feature of high-end SATA cards and doesn’t work with all drives anyway. I went for powering 4 hard drives off each PSU, which meant I needed 6 in total (plus one for the system). At only ~£5 each, it was still a really cheap option. Check this post for how to power on a PSU without connecting it to a full system.

The full list with prices can be found in this Google Spreadsheet.

Filesystem

Seeing as I was using Linux, the obvious (and my) choice of filesystem was ext3. I didn’t really think too hard about it. Ext3 is a mature and stable filesystem which works very well under Linux. One of the most annoying things about that filesystem is that deleting a large number of files takes a (very) long time. It also has a maximum volume size of 16TB, not that that was a problem for this NAS. Another issue I found out later is that it reserves 5% of the space for system usage. When it’s not the boot volume, that’s completely unnecessary, but there is a way to disable it when you create the filesystem (so long as you remember to).
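
For the record, the reserved space can be set to zero either when the filesystem is created or afterwards – a sketch, with /dev/md0 standing in for the array device:

[code lang="bash"]# Create the filesystem with no reserved blocks
mkfs.ext3 -m 0 /dev/md0
# Or remove the reservation from an existing filesystem
tune2fs -m 0 /dev/md0[/code]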

When I switched from RAID 5 to RAID 6 (see timeline below), I took the opportunity to change filesystems. I looked at a few recommendations for Linux, including JFS, which is fast and stable, but ultimately went with XFS. XFS has the reputation of being fast for large files, which is appropriate for this NAS, as well as being very stable. The maximum volume size is 16 exabytes (on a 64-bit system) and you can grow the filesystem when you need to (although you can’t shrink it – not that I can see why you’d want to). It certainly sped up file deletion and I’ve not had any problems with it. A good choice for me.
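
Growing XFS after extending the array is a one-liner – a sketch, assuming the volume is mounted at /mnt/nas:

[code lang="bash"]xfs_growfs /mnt/nas[/code]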

A new option on the table is ext4. This wasn’t available when I built this NAS, as it has only recently been added to the kernel and considered stable. I have since used it in another server and it seems fine (it has the same space reservation issue as ext3), but I’m still not sure that it’s stable enough to trust with my data.

Around the corner is Btrfs. This is what could be described as Linux’s answer to ZFS. While they are both free to use, ZFS has licensing issues that make it impractical to run on Linux (it has to run in user-space). Btrfs is a while away from being suitable for production use, but it’s one to keep your eye on.

Timeline

I set up the array with 8 x 500GB hard drives in RAID 5 – this gave me 3.5TB of usable space. As I wanted to switch to RAID 6 (due to the number of drives in a single array) I had to add another 8 drives at the same time when I first wanted to expand capacity. I created the RAID 6, copied the data from the RAID 5, dismantled the RAID 5 and added the drives to the new array.

Expanding the array wasn’t as easy as I had first imagined. Growing a RAID 6 array was quite a new feature at the time, and was not available in any kernel pre-compiled and packaged for Ubuntu Server. This meant compiling my own kernel, which was an interesting endeavour. It worked out flawlessly in the end, but that’s getting a little off-topic. Anyone trying to grow a RAID 6 now shouldn’t have that problem as it’s been a standard feature of Linux kernels for a couple of years. On top of that, it’s also now possible to convert a RAID 5 to RAID 6. This is a recently added feature so may be a little risky to try on important data. It also uses a non-standard RAID 6 layout, with the second set of parity all being on one (the new) disk.
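
For anyone doing the same today, the grow itself looks roughly like this – a sketch with example device names and counts, and note that the reshape takes a long time:

[code lang="bash"]# Add the new disks as spares, then reshape to the new total number of devices
mdadm --add /dev/md0 /dev/sdx1 /dev/sdy1 /dev/sdz1 /dev/sdw1
mdadm --grow /dev/md0 --raid-devices=20
# Once the reshape completes, grow the filesystem on top (XFS in my case)
xfs_growfs /mnt/nas[/code]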

Once that capacity was used up, I added two sets of 4 disks and grew the array each time. This took me to a total of 24 x 500GB disks in RAID 6 – 11TB of usable space. All was well – here’s a picture of what it looked like:

NAS Server 1

My first NAS in its final state.

This is where I started to run into problems. Disks started to fail, which in its own right isn’t a problem – even two can fail, so long as you can get them replaced (and re-built) before any others fail. But eventually I was in a situation where the array was degraded and could not assemble itself. I did manage to force the array back online after dd-ing one of the failing disks to a new drive, but only long enough to get as much data off as I could fit on my new NAS. The disk I used as a temporary fix was actually out of the new NAS, and when I replaced it with a 500GB one again, the rebuild failed. Instead of putting the new drive back, I decided to force the array back up by creating a new array with the components from the old array. I’ve probably lost some people – you’ll need some extensive experience with Linux-RAID to know what I’m on about. This is a tricky and risky procedure, and it didn’t work. The data was gone and I learned from my mistakes. The biggest problem was that I had no head-room in terms of connecting spare disks – all 24 SATA ports were used up with the single array. If I had put more time, effort and money in, I could have saved it. The hardware didn’t fail beyond all repair; I just didn’t choose the safest way to recover. 24 drives in one array is risky and will most likely give you headaches, but it is certainly attainable.

That marks the end of my first NAS, which brings us to My NAS Server 2.

Mar 23

Here’s how to install AFP (Netatalk) on Ubuntu Linux with SSL for encrypted logins.

[code lang="bash"]apt-get install cracklib2-dev libssl-dev devscripts
apt-get source netatalk
apt-get build-dep netatalk
cd netatalk-*
DEB_BUILD_OPTIONS=ssl sudo dpkg-buildpackage -us -uc
dpkg -i ../netatalk_*.deb[/code]

Now you should prevent it from being upgraded with either:

[code lang="bash"]echo "netatalk hold" | sudo dpkg --set-selections[/code]

or:

[code lang="bash"]apt-get install wajig
wajig hold netatalk[/code]

Don’t forget to edit your shared directories:

[code lang="bash"]nano /etc/netatalk/AppleVolumes.default[/code]

You might also want to advertise the server with Bonjour. I followed these instructions.
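
The gist of those instructions, as far as I remember, is an Avahi service definition along these lines – treat it as a sketch (it assumes avahi-daemon is installed) rather than the exact file I used:

[code lang="bash"]cat <<'EOF' > /etc/avahi/services/afpd.service
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_afpovertcp._tcp</type>
    <port>548</port>
  </service>
</service-group>
EOF
service avahi-daemon restart[/code]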
