Jan 03

As mentioned in my previous post, I recently got an M1 Mac mini with an external NVMe disk for additional storage. In that post, I said Thunderbolt (as opposed to USB 3.1) enclosures are prohibitively expensive. While they are certainly more expensive, there are some out there at a reasonable price.

I got my hands on one (with a JHL6340 chip) and did a speed test to compare it with USB.

Blackmagic – ASM2362 USB 3.1

Write: 825.8MB/s, Read: 819.9MB/s

Blackmagic – JHL6340 Thunderbolt

Write: 851.6MB/s, Read: 1479.5MB/s

Amorphous – ASM2362 USB 3.1

Write: 850.64MB/s, Read: 896.05MB/s

Amorphous – JHL6340 Thunderbolt

Write: 1362.88MB/s, Read: 1575.47MB/s

A note on USB with Mac mini

As described by Apple in the Mac mini specs page, the port you use is important (sorry!).

Two Thunderbolt / USB 4 ports with support for:

  • DisplayPort
  • Thunderbolt 3 (up to 40Gb/s)
  • USB 3.1 Gen 2 (up to 10Gb/s)
  • Thunderbolt 2, HDMI, DVI and VGA supported using adapters (sold separately)

Two USB-A ports (up to 5Gb/s)
HDMI 2.0 port
Gigabit Ethernet port
3.5mm headphone jack

The USB-A ports only support up to 5Gb/s, which means that whether you are using a USB 3.1 enclosure or a Thunderbolt one, you need to connect it to one of the “Thunderbolt / USB 4” ports to achieve 10-40Gb/s. I verified this with a speed test: you can only achieve circa 500MB/s through a USB-A port (and in fact macOS tells you the connection speed in System Information).
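If you want to sanity-check a connection yourself, here is a minimal sketch. The system_profiler lines are macOS-specific (the same data System Information shows), and the dd timing is only a crude sequential test next to Blackmagic or AmorphousDiskMark; the test-file path is an assumption, so point it at the external disk for a real measurement.

```shell
# macOS: print negotiated link speeds for attached devices (labels vary by OS version)
system_profiler SPUSBDataType SPThunderboltDataType 2>/dev/null | grep -i speed || true

# Crude sequential write/read timing; dd reports bytes/sec when it finishes.
# TESTFILE is a placeholder - use a path on the external disk for a real test.
TESTFILE="${TESTFILE:-./dd-test.bin}"
dd if=/dev/zero of="$TESTFILE" bs=1048576 count=128   # write 128 MiB
dd if="$TESTFILE" of=/dev/null bs=1048576             # read it back
rm -f "$TESTFILE"
```

Note the OS cache can inflate the read figure, which is one reason the dedicated benchmark apps use much larger files.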

Dec 23

I recently purchased a new Mac mini to replace my home computer – a 2009 iMac. It was well overdue as Apple listed it as obsolete 4 years ago.

Unfortunately, it only comes with 256GB SSD storage as standard and Apple charges £200 extra for 512GB and £400 extra for 1TB. Very pricey! So in an attempt to save some money, I thought I would use an external drive for additional space.

It wasn’t long before I settled on the idea of getting an NVMe SSD with an enclosure. This was recommended by The Verge and seemed to offer the best price point and flexibility.

For the SSD itself, I went through several sites such as Tom’s Hardware looking for a recommendation and eventually settled on the ADATA XPG SX8200 Pro. This offered a good price/performance ratio (at least in the UK).

Now came the trickier part – which enclosure to use. Originally, I was expecting to get a Thunderbolt enclosure, but it turns out they are prohibitively expensive – especially for my use-case. It didn’t make sense to spend more on the caddy than the SSD!

After trawling Amazon for a while I picked up a FIDECO M.2 NVME External SSD Enclosure. It looked very promising – 4.3 out of 5 stars with 782 ratings.

However, as soon as it arrived – the fun started. While I did manage to connect and initialise the disk, immediately after starting a file transfer it detached itself from the OS. I tried disconnecting and reconnecting it, but it showed up uninitialised as a 97.86TB disk.

I thought I’d try formatting it again, but I ended up with the following error:

Unable to write to the last block of the device. : (-69760)

This makes sense, given how big it thought the disk was (over 90TB) and how big it actually is (1TB).

At this point, I started researching enclosures and looking around to see if others had similar problems. It turns out NVMe enclosures are notoriously flaky and there are many forum threads discussing the topic.

What I found is that, behind all the off-brand enclosures, there are three actual chips powering them:

  1. JMS583
  2. RTL9210(B)
  3. ASM2362

Without knowing it ahead of time, it seems the enclosure I first bought contained an RTL9210B. This chip doesn’t attract many complaints, but it certainly didn’t work for me. I even tried upgrading the firmware, to no avail.

So with that in mind, I scoured Amazon for a new enclosure with a different chipset – specifically an ASM2362, as the JMicron chip attracts lots of negative comments.

A few days later (no, I don’t have Prime) my Kafuty USB3.1 to M.2 NVME External Hard Drive Enclosure arrived and I swapped over the SSD.

It immediately showed up in Finder but I thought I’d reinitialise it anyway. Some file transfers and speed-tests later, all was still good.

TL;DR: On my M1 Mac mini, the RTL9210B was a complete failure and the ASM2362 is stable.

Jan 05

To briefly re-cap my last post, I recently upgraded my home cinema speakers to the new MK Sound M-Series:

  • 3 x M&K Sound M70 – Left/Centre/Right
  • 1 x M&K Sound M40T – Surround Pair
  • 1 x M&K Sound V10 – Subwoofer

So as promised, here is my “review” 1 of the system.

Unboxing

M&K speakers are packaged very well in polystyrene. There is nothing much more to say about the unboxing, other than what’s inside the boxes.

M&K Speakers Boxes

The last thing I expected to find when opening a speaker box was a pair of gloves. That’s right, M&K speakers all come with a pair of white butler’s gloves! This is presumably to encourage handling them with care and to ensure you do not get any fingerprints on the perfect finish. I would say to take extra care if you do use the gloves, as the speakers are very slippery in your hand with them on.

Aesthetics

These speakers are absolutely brilliantly made. They are completely solid with no seams or joins in the cabinet, complete with rounded edges and corners. It feels akin to the unibody MacBook Pro.

When I say they are completely smooth – they are. There aren’t even mounting holes for the speaker grille, but guess what? You don’t need any! Bring the grille into position and it snaps into place using magnets. Very Apple-esque.

Note, this is only true of the M-Series, so it could be a new feature. The V10 sub still has mounting holes for the grille.

Mounting

The M-Series is designed to be wall-mounted and, as can be seen from the manufacturer’s pictures, there are keyhole fixings on the rear for vertical or horizontal positioning. Each came with a set of screws and rawlplugs, however I chose to use my own, heavier-duty ones to ensure a safe fixing in plasterboard.

Mounting the surrounds on the ceiling was a little more tricky as, again, they’re designed to be wall-mounted. In the end, I found an appropriate set of little speaker mounts for a great price.

Sanus tilt and swivel universal speaker mount white (available at Screwfix and Richer Sounds). Note, they are sold as a pair – you don’t need to buy two!

I had to drill holes in them to align with the holes in the M40T, which was only possible on the diagonal. Still, it was very straightforward, and I then just popped in some longer M4 screws.

Ceiling Speaker Bracket

Note, I only left the keyhole fixing in to act as a spacer because the screws I had lying around with the correct thread were too long. I don’t mind – it’s a good way not to lose them in case I want to mount them on a wall one day!

Setup

The M-Series doesn’t have anything to set up – there are no controls on them. The V10 subwoofer, on the other hand, has a 16 page manual (available as a download from the M&K Sound website).

To summarise: set the crossover to bypass, phase to zero and volume to 12 o’clock. One other note on the V10 is that it didn’t come with a UK power cord (just US and EU). This didn’t bother me as I had already pre-wired an extension for the sub location, but who doesn’t have a spare IEC lead lying around, anyway?

Then it was a simple case of (re-)auto-calibrating my AV receiver. This is a straightforward process of measuring the sound with a provided microphone from the listening position (and several around it).

The other adjustments I made were:

  • Set the subwoofer mode to LFE + main
    The low range signal of all channels is added to the LFE signal output from the subwoofer
  • Increase and unify the crossover to 110Hz
    (was calibrated at 90-100Hz)
  • Increase the LPF to 120Hz
    Set LFE signal playback range. Changes the playback frequency (low pass filter point) of the subwoofer.
  • Turn the sub volume control up slightly so the bass sounds more in balance.

Sound

It didn’t take long to be blown away by the sound quality. As soon as you plug in these speakers, they just work. No faffing.

The speakers are incredibly clear and immersive. With my previous setup, I was using options like “Dolby Surround Upmix”, “Center Spread” and increased “Dialog Level”. None of these are needed with (nor improve) the M&K loudspeakers.

Across the front, the sound is incredibly “complete”. There are no missing areas and there is a very smooth transition between locations. You can hear every detail without increasing the volume which (perhaps negatively) shows up the imperfections in recordings and bad mixes.

Speaking of smooth transitions, the subwoofer blends seamlessly into the rest of the sound. There is no “this is where the satellites end and this is where the subwoofer starts”. With my previous system, it was very obvious when the sub was turned off, not only because the satellites were weaker, but because the bass was very isolated. It’s the complete opposite with the M&K package. The bass is powerful when called for (you can really feel it) and supports the satellites for great depth the rest of the time.

As for the rears, the tripole M40Ts fit into a 5.1 system very nicely. They make you feel as though you’re “inside” the sound, without drawing attention to where the speakers are. When there are direct sound effects at the back – you notice, other than that – they just enhance immersion.

Overall

From the unboxing and installation, through to the listening experience and looking at them every day, I am completely satisfied with my M-Series 5.1 package choice. I was looking for an accurate sound that fills the room for a respectable budget. All of my expectations have been met and some surpassed.

For what I got, and what I paid, I couldn’t ask for more. Having said that, after using the system for a month I am perhaps more aware of the limitations of a 5.1 system. While the front is very “full” and the surrounds create immersion, the transition between them and the elevated position of the rears highlight a place for additional speakers (think 7.1, Atmos, etc).

Given this set-up is in my open-plan living space, and I have a distinct lack of lossless and multi-channel (> 6) sources to start with, I shall refrain from going further down that path for the moment.

However, one thing I must do is upgrade my AVR to do the M&K speakers justice. Not only for cleaner amplification and power, but I’m sure I would benefit from advanced room correction. One for later in the year when I have the time and budget…

1 “review” – I use quotation marks because I am not really qualified to review speakers. I’m not an expert, I do not have the correct room environment / AVR / measuring equipment to be scientific, and I have not listened to anything comparable recently. The speakers were bought entirely on recommendation and I most likely have a form of post-purchase confirmation bias.

Dec 21

After moving into my new home, I found the 5.1 speaker package I’ve had since uni wasn’t up to the task of filling the larger living room. This came as no surprise, and I’d been looking for an excuse to upgrade for quite a while anyway.

The hunt was now on to find something appropriate for my living space which also didn’t cost the earth. Given the size of my TV (65″) and TV unit, I had a bit of a restriction as to what would fit. You can see what I mean in the picture below.

Living Room TV

Floor-standers were out of the question, and crucially the height for the centre-speaker was also limited (to around ~13cm). Left and right channels had ~17cm of space which is fair enough.

Not being an avid follower of the home cinema scene, I headed over to AVForums for advice. They have a dedicated sub-forum “What Speakers Should I Buy?” where I duly posted my room restrictions and budget.

The shortlist came down to these, with a strong recommendation for the M&K sound bar.

  • Kef T-Series
  • M&K Sound Bar
  • Monitor Audio Apex

However, while browsing the M&K Sound website, I noticed some on-wall speakers that also seemed appropriate (similar to the Kef T-Series). After enquiring, it turned out they were not recommended, as they were discontinued and being replaced by new models which weren’t available yet.

I did eventually get details and prices on the new M-Series range, and given the sizes worked out perfectly, and they were just about in-budget – I picked out the following package:

  • 3 x M&K Sound M70 – Left/Centre/Right (black)
  • 1 x M&K Sound M40T – Surround Pair (white)
  • 1 x M&K Sound V10 – Subwoofer (black)

Miller & Kreisel are well known for professional, accurate speakers and have in fact become a reference standard in top music and film studios. I was told they have stunning sound for the price point and, given they’re not cheap, my expectations were well up there.

I went for white surrounds as I had been researching best speaker placement and decided to ceiling-mount them. This was not only a “better” position, but also greatly aided tidy wire-management as I could run them inside the ceiling rather than along the floor. Here is an excerpt from the M&K Sound Satellite Operation Manual:

The surround speakers should be located relatively close to the ceiling. Placement above the listeners’ heads is important, preferably with the cabinet’s bottom at least two feet (60 cm) above a seated listener’s head.

Being a new model, it took a good three weeks to get my hands on them after making the decision and I can thank Rich at SeriouslyCinema for getting them to me as soon as possible.

Read my review of these speakers in my next post.

Feb 18

I have been maintaining my own SVN server for many years, but in order to share code and make use of managed services, it was time to migrate some of my repositories to Git.

There are many tutorials for SVN-to-Git migration; here’s a customised script which works for me: migrate-svn-to-git.sh. Note, it requires an authors.txt file – there are plenty of adequate resources out there for how to create one.
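For reference, authors.txt maps each SVN username to a Git identity, one mapping per line. A minimal sketch (the usernames and addresses here are made up):

```shell
# Each line maps: svn-username = Full Name <email>
cat > authors.txt <<'EOF'
jdoe = John Doe <jdoe@example.com>
asmith = Alice Smith <asmith@example.com>
EOF
```

Any SVN committer missing from this file will make git svn abort, so it needs to be complete.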

There is nothing specific about this script for Google Source Repos, that is just my choice for private code so it can make use of Google Cloud Build and gcr.io.

#!/usr/bin/env bash

svnUri="$1"
gitUri="$2"

if [[ -z "${svnUri}" || -z "${gitUri}" ]]; then
  echo "[ERROR] Usage migrate-svn-to-git.sh <svn_uri> <git_uri>" 1>&2
  exit 1
fi

if [[ ! -f authors.txt ]]; then
  echo "[ERROR] authors.txt is missing" 1>&2
  exit 1
fi

echo "[INFO] Cloning from SVN: ${svnUri}"
git svn --authors-file=authors.txt clone -s "${svnUri}" --prefix svn/ migration
cd migration || exit 1

echo "[INFO] Creating Git tags"
for t in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn/tags); do
  git tag "$(echo "${t}" | sed 's^svn/tags/^^')" "${t}" && git branch -D -r "${t}"
done
echo "[INFO] Creating Git branches"
for b in $(git for-each-ref --format='%(refname:short)' refs/remotes/svn | grep -v trunk); do
  git branch "$(echo "${b}" | sed 's^svn/^^')" "refs/remotes/${b}" && git branch -D -r "${b}"
done

echo "[INFO] Creating .gitignore file"
git svn show-ignore > .gitignore
git add .gitignore
git commit -m "Added .gitignore"

echo "[INFO] Pushing to Git: ${gitUri}"
git remote add google "${gitUri}"
git push --force google --tags
git push --force google --all

cd -

echo "[INFO] Removing migration directory"
rm -rf migration

Once you have created the Git repo, you need to switch your working copy of a project from SVN to Git. Here’s a script: switch-svn-to-git.sh

#!/usr/bin/env bash

gitUri="$1"

if [[ -z "${gitUri}" ]]; then
  echo "[ERROR] Usage switch-svn-to-git.sh <git_uri>" 1>&2
  exit 1
fi

if [[ ! -d .svn ]]; then
  echo "[ERROR] Not an SVN project" 1>&2
  exit 1
fi

echo "[INFO] Finding current branch"
svnBranch=$(svn info | grep '^URL:' | egrep -o '(tags|branches)/[^/]+|trunk' | egrep -o '[^/]+$')
echo "[DEBUG] ${svnBranch}"

echo "[INFO] Cleaning SVN directory"
rm -rf .svn

echo "[INFO] Initialising Git from: ${gitUri}"
git init
git remote add origin "${gitUri}"
git fetch

echo "[INFO] Saving working copy"
git show origin/master:.gitignore > .gitignore
git checkout -b tmp
git add .
git commit -m "local changes"

echo "[INFO] Checking out branch"
gitBranch="master"
if [[ "${svnBranch}" != "trunk" ]]; then
  gitBranch="${svnBranch}"
fi
git checkout "${gitBranch}"

echo "[INFO] Merging working copy"
git merge --allow-unrelated-histories -X theirs --squash tmp
git branch --delete --force tmp

echo "[INFO] Deleting IntelliJ reference to SVN (if exists)"
test -f .idea/vcs.xml && rm .idea/vcs.xml

echo "[INFO] Done - you may want to set your name / email with: git config user.email <email>"
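Following that final hint, setting your identity in the new repository looks like this (shown in a throwaway demo repo; the name and email are placeholders):

```shell
# Demo in a scratch repo - in practice run the two config commands inside the migrated project
git init -q demo-repo
cd demo-repo
git config user.name "Your Name"
git config user.email "you@example.com"
git config user.email   # prints: you@example.com
```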
Mar 25

Having upgraded to a UHD TV last year, it was time to get a new AV receiver to match. It’s been a long time since I was in the AVR market and a lot has changed.

After doing the research at the end of last year, I thought I’d share the takeaways in this how-to guide.

Since I got my first AVR, two sets of HD codecs have become commonly available. The first set, mainly used on Blu-ray, are:

  • DTS-HD Master Audio
  • Dolby TrueHD

These are very well supported now so you don’t need to worry whether an AVR will be able to decode them.

The more recent set, most likely to be found on 4K Blu-rays consist of:

  • DTS:X
  • Dolby Atmos

Dolby Atmos is actually fairly commonly available, e.g. BT Sport, Sky Q (Sport and Cinema) and Netflix, so it’s important to ensure your AVR will support these latest codecs if you want any kind of future-proofing.

They are now fairly well established so it’s not too difficult to find support.

The last thing to mention about audio is eARC, the Enhanced Audio Return Channel finalised in the HDMI 2.1 spec. It allows the newer high definition codecs, such as Dolby Atmos and DTS:X, to be passed back from the TV to your AVR. If you really want to future-proof, consider this feature. Personally, I don’t find it necessary – all my equipment is attached to the TV through the AVR so there is little reason to send the sound back. The obvious exception is when you have a smart TV that you use for streaming Netflix or other services. Since regular ARC supports the first generation high definition codecs, just not the new “3D” ones, I’m happy to live with that – especially since I don’t have a TV that supports eARC anyway!

ARC is important, but common enough that you don’t need to look out for it.

Now for video – there are basically two things you need to look out for.

Firstly, HDMI 4K Ultra HD 60Hz with HDCP 2.2 compatibility. This is essential, but not too difficult to find on recent models.

Secondly, and a little more obscure is HLG. Standing for Hybrid Log Gamma, this is the HDR format conceived by the BBC and NHK which will most likely be used by broadcasters in the UK (BBC / iPlayer, Sky Q). There is little content right now, but if you want to see HDR content in the future, you’d better make sure your AVR has HDR pass-through (HDR10, Dolby Vision and HLG).

I ended up choosing the Denon AVR-X1400H. This was a good combination of minimum requirements, price and number of HDMI inputs.

You can find my comparison / shortlist here, though the pricing and models will start to get out of date quite quickly.

Dec 03

While migrating one of my hobby projects from the PHP mysql extension to PDO, I came across this error:

PHP Fatal error:  Uncaught exception 'PDOException' with message 'SQLSTATE[HY000]: General error: 2014 Cannot execute queries while other unbuffered queries are active.  Consider using PDOStatement::fetchAll().  Alternatively, if your code is only ever going to run against mysql, you may enable query buffering by setting the PDO::MYSQL_ATTR_USE_BUFFERED_QUERY attribute.'

A quick search on the web suggested this happens when you don’t fetch all rows from a query. I knew this wasn’t the case and didn’t want to just enable the buffered query attribute as I felt something else was wrong.

Turns out this problem came about as I was trying to migrate my MySQL connection properties, previously defined with:

[code lang="php"]define('UTC_OFFSET', date('P'));
mysql_query("SET time_zone='" . UTC_OFFSET . "';");
mysql_query("SET NAMES 'utf8' COLLATE 'utf8_general_ci';");[/code]

The natural change was to add these two statements to PDO::MYSQL_ATTR_INIT_COMMAND (separated by a semicolon). However, that’s where the problem is. The SET command allows both to be specified at once, hence the right way of doing it is:

[code lang="php"]PDO::MYSQL_ATTR_INIT_COMMAND => "SET NAMES 'utf8' COLLATE 'utf8_general_ci', time_zone = '" . UTC_OFFSET . "'"[/code]

Credit: Stack Overflow

Bonus: Setting the timezone with a UTC offset allows you to use a zone that PHP knows about, but the MySQL server doesn’t. That way it can be set with the ini setting date.timezone or date_default_timezone_set and doesn’t need to be modified in two places if it needs to be changed.

Oct 02

After upgrading to FreeBSD 9.3, my custom postfix installation was overwritten with postfix-current-3.2.20160925,4. This caused the following entry in my maillog:


Oct 2 19:55:04 myhostname postfix/master[1481]: warning: process /usr/local/libexec/postfix/smtp pid 90855 exit status 1
Oct 2 19:55:04 myhostname postfix/master[1481]: warning: /usr/local/libexec/postfix/smtp: bad command startup -- throttling
Oct 2 19:56:04 myhostname postfix/smtp[90864]: warning: unsupported SASL client implementation: cyrus
Oct 2 19:56:04 myhostname postfix/smtp[90864]: fatal: SASL library initialization

Luckily it was a quick fix, as the bug I ran into previously (no package built with SASL support) has since been addressed.

$ sudo pkg install postfix-current-sasl

This will uninstall postfix-current and install a version with SASL support, which can be verified as follows:

$ postconf -a
cyrus
dovecot

Jan 26

Ever tried sending mail from your own server and ended up on the Spamhaus Policy Block List (PBL)? I have, a couple of times.

From my FreeBSD server, I get daily emails; “daily run output” and “security run output”. I’m not particularly interested in these, but it’s important that I get SMART notifications.

What I noticed is that whenever my IP address changes (every couple of months) the emails start coming through again. If you look at the mail log (/var/log/maillog), you’ll see something like:

Dec 24 03:14:24 myhostname postfix/smtp[6740]: 9B8A5CB111C: to=<user@example.com>, orig_to=<user>, relay=smtp.examplehost.net[x.x.x.x]:25, delay=43, delays=0/0.22/21/21, dsn=5.0.0, status=bounced (host smtp.examplehost.net[x.x.x.x] said: 550-"JunkMail rejected - me.exampleisp.com (myhostname.mydomain.net) 550-[x.x.x.x]:63909 is in an RBL, see 550 https://www.spamhaus.org/query/ip/x.x.x.x" (in reply to RCPT TO command))

So, helpfully, they’ve given a link to look up why you’re blocked. In my case it’s due to using an unauthenticated mail port (25). Since I had this problem before, I thought I had set up authentication, but the FAQ is very clear that you cannot get this error if you have authentication enabled.

So what else does it say in the mail log? Well, just before that line:
Dec 24 03:13:54 myhostname postfix/smtp[6753]: warning: smtp_sasl_auth_enable is true, but SASL support is not compiled in

This is a problem with the Postfix package for FreeBSD. I can’t find any other way around it other than to compile Postfix yourself from Ports. Here’s a link to the (unresolved) request in FreeBSD Bugzilla to add a package with it compiled in. So if you’re looking for a FreeBSD package with SMTP SASL authentication – give up (unless that bug has been resolved since the time of posting).

I’m not going to go into details of how to set up the SMTP + SASL on FreeBSD, there are already many guides that do that. I found these helpful:

One other small roadblock I ran into was trying to use port 465. My web host specifies that port for authenticated SSL, but I found the following error in the logs:
CLIENT wrappermode (port smtps/465) is unimplemented
instead, send to (port submission/587) with STARTTLS
status=deferred (lost connection with smtp.examplehost.net[x.x.x.x] while receiving the initial server greeting

My host didn’t give any indication that it supported SMTP on port 587, and if you search for this problem on the web you’ll find many people trying to solve it with stunnel. A quick telnet to my host on 587 showed it was open, so I just gave it a try. Lo and behold, Postfix authenticated and sent my mail. So give port 587 a try, even if your relayhost provider recommends 465.

Dec 25

It’s Christmas time, which is when I get a chance to upgrade my home servers. This year one of them needed a double-upgrade: 14.10 to 15.04 to 15.10.

After backing-up (which is essentially a tar command, more on that in another post), I proceeded with the first upgrade.

Everything seemed to go smoothly but on boot, the system dropped into an emergency mode shell.

At least it showed a couple of errors:

  1. acpi pcc probe failed
  2. error getting authority error initializing authority

After a quick search, I found that the first is actually just a warning and can be ignored (source). If I had more time, I would consider fixing the root cause.

The second one was a little more tricky. The error doesn’t really indicate the real problem but a few people have found it to be caused by an invalid fstab entry.

In the recovery shell, run the following command to find which one (thanks):

[code lang="bash"]journalctl -xb[/code]

For most people, it’s due to having a UUID specified that no longer exists (either due to reformatting a drive or removing the disk altogether). In my case it was because I had several tmpfs mounts which are no longer allowed:

tmpfs           /tmp           tmpfs   defaults,noatime           0 0
tmpfs           /var/lock      tmpfs   defaults,noatime           0 0
tmpfs           /var/run       tmpfs   defaults,noatime           0 0
tmpfs           /var/tmp       tmpfs   defaults,noatime           0 0

The error can be alleviated either by adding nofail to the options or just removing / commenting out the mount.
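For example, the first of the entries above with nofail added, so boot continues even if the mount fails:

```
tmpfs           /tmp           tmpfs   defaults,noatime,nofail    0 0
```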

The reason I used a tmpfs (RAM disk) for /tmp and the others was that, when I originally set up this server (and some others), the boot disk was an 8GB CompactFlash card. Temp files would create undesirable wear on the device, but I’ve since moved this system to an SSD so that’s not an issue anymore. Hence, I just deleted the entries.
