Author Archives: Tim

One thing that’s worse in Windows 10 Fall Creators Update: uncontrollable application auto-start

One thing I’ve noticed in Windows 10 recently is that Outlook seems to auto-start, which it never did before. In fact, this caused an error on a new desktop PC that I’m setting up, as follows:

1. Outlook has an archive PST open, which is on a drive that is connected over iSCSI

2. On reboot, Outlook auto-started and threw an error because it could not find the drive

3. In the background, the iSCSI drive reconnected, which means Outlook could have found the drive if it had waited
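The failure is a classic startup race: the application checks for the drive once, instead of retrying while the iSCSI session re-establishes. A minimal sketch of the wait-and-retry any such app could use — the path and timings here are invented for illustration:

```python
import os
import time

def wait_for_path(path, timeout=30.0, interval=1.0):
    """Poll until path exists or timeout elapses; return whether it exists."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return os.path.exists(path)

# Demo: the current directory already exists, so this returns immediately.
# A mail client could do the same for a PST path on an iSCSI-backed drive.
print(wait_for_path(os.getcwd(), timeout=5))  # True
```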

All very annoying. Of course I looked for the reason why Outlook was auto-starting. In Windows 10, you can control startup applications in Task Manager, but Outlook was not listed there. Nor could I find any setting or reason why it was auto-starting.

Eventually I tracked it down. It is not really Outlook auto-starting. It is a new feature in Windows 10 Fall Creators Update that automatically restarts applications that were running when Windows was last shut down. Since Outlook is pretty much always running for me, the end result is that Outlook auto-starts, with the bad result described above.

I presumed that this was a setting somewhere, but if it is, I cannot find it. This thread confirms the bad news (quote is from Jason, a Microsoft support engineer):

This is actually a change in the core functionality of Windows in this development cycle.

Old behavior:
- When you shut down your PC, all apps are closed

- After reboot/restart, you have to re-open any app you’d like to use

New behavior:

- When shutting down your PC, any open apps are “bookmarked” (for lack of a better word)

- After reboot/restart, these apps will re-open automatically

If you want to start with no apps open (other than those set to auto-start via Task Manager/Start), you’ll need to ensure all apps are closed before shutting down or restarting the PC.

Why?

The desire is to create a seamless experience wherein, if you have to reboot a PC, you can pick back up quickly from where you left off and resume being productive.  This has far-ranging impacts across the OS (in a good way).

Not everyone agrees that these “far-ranging impacts” are a good thing. The biggest gripe is that there is no setting to disable this behaviour if it causes problems, as in my case. Various entries in the official Windows Feedback Hub have been quick to attract support.

Workarounds? There are various suggestions. One is to manually close all running applications before you restart, which takes effort. Another is to use a shortcut to shut down or restart, instead of the Start menu option. If you run:

shutdown /f /s /t 0

you get a clean shutdown (/f forces running applications to close, /s shuts down, and /t 0 means no delay); or

shutdown /f /r /t 0

for a restart (/r in place of /s).

As for why this behaviour was introduced without any means of controlling it, that is a mystery.

A quick look at Surface Book 2: powerful but heavy

Microsoft’s Surface range is now extensive. There is the Surface Pro (tablet with keyboard cover), the Surface Laptop (laptop with thin keyboard), and the Surface Book (a laptop with a detachable screen). And the Surface Studio, an all-in-one desktop. Just announced, and on display here at Microsoft’s Future Decoded event in London, is the Surface Book 2.

image

The device feels very solid and the one I saw has an impressive spec: an 8th Gen Intel Core i7 with 16GB RAM and NVIDIA GeForce GTX 1050 discrete GPU. And up to 17 hours battery life.

All good stuff; but I have a couple of reservations. One is the weight: “from 3.38 lbs (1.534 kg)”, according to the spec. By contrast, the Surface Laptop starts at 1.69 lbs (0.767 kg).

That makes the Book 2 heavy in today’s terms. I am used to ultrabook-style laptops now.

Of course you can lighten your load by just using the tablet. Will you though? I rarely see Windows convertible or detachable devices used as anything other than laptops, with the keyboard attached. The Surface Pro is more likely to be used as a tablet, since you can simply fold the keyboard cover back; with the Book you either leave the keyboard at home and put up with shorter battery life, or carry it at least in your bag.

Nokia 8: a phone from the new Nokia brand that you might actually want

This morning I attended Nokia’s press breakfast here in Berlin, where the main product on show is the Nokia 8 smartphone. It is not quite a new launch – there was an event in London a couple of weeks ago – but it was my first look at HMD’s first flagship device.

image

HMD Global Oy was founded in May 2016 as a new company to exploit the Nokia smartphone brand. The company is “owned by Smart Connect LP, a private equity fund managed by Jean-Francois Baril, a former Nokia executive, as well as by HMD management,” according to the press release at the time. Based in Finland, the new company acquired the right to use the Nokia trademark on smartphones as well as “design rights relating to Microsoft’s Feature Phone Business” (what feature phone business, you may ask).

HMD made the decision to market a pure Google form of Android. I find it intriguing that a Nokia-branded smartphone was once powered by Symbian, then became a Windows device, and now has Google deeply embedded. The two companies are now “joined at the hip,” according to an HMD spokesperson this morning. Though it is a rather unequal relationship, with HMD having fewer than 500 employees and relying on outsourcing for much of its business.

A UK release of the Nokia 8, together with operator deals, will be announced on September 6th, I was told. The unsubsidised price might be around £600 (or Euros, the currencies being of nearly equal value in these Brexit days).

So why might you want one? Well, it is a decent phone, based on an 8-core Qualcomm Snapdragon 835 chipset, 2560 x 1440 display, 4GB RAM, 64GB storage, up to 256GB MicroSD, fingerprint reader and so on.

There are a couple of special features. The most obvious is that both front and rear 13MP cameras can be used simultaneously, enabling what Nokia inevitably calls “bothies”.

image

Is this a feature worth having? It is problematic, partly because taking good selfies is difficult without a selfie stick, which most of the time you do not have with you, and partly because the view behind you is typically less interesting than the view you are trying to photograph.

I am not sure whether this matters though. It is a distinctive feature, and in a crowded market this is important.

I am more interested in another feature, called OZO audio. OZO is a professional cinema camera made by Nokia and the system in the phone is based on OZO surround sound algorithms. The phone has three microphones, and using OZO you can apparently capture a simulated surround effect even though the output is two-channel.

Although it seems counter-intuitive, I do believe in the possibilities of simulated surround sound; after all, we only have two ears. OZO works in conjunction with the phone’s video camera so you can capture more atmospheric audio. The demo was impressive but this is something I will need to try for myself before forming a judgement.

The other aspect of the Nokia 8 which is attractive is the company’s attitude towards Android modifications and bundled apps. Essentially, you get Android as designed by Google, plus Google apps and not much else. Operators will not be able to bundle additional apps, I was told (though I am not sure I believe it).

While I do not like the way Google constantly gathers data from users of its software, I do think that if you are going to run Android, you might as well run it as designed, rather than with additional and often substandard “enhancements”.

I hope to do a full review and will look carefully at the audio performance then.

F-Secure Sense: a success and a failure (and why you should not rely on your anti-virus software)

I am in the process of reviewing F-Secure Sense, a hardware firewall which works by inspecting internet traffic rather than scanning files on your PC or mobile device. This way, it can protect all devices, not only the ones on which an anti-malware application is installed.

I get tons of spam and malware by email, so I plucked out a couple to test. The first was an email claiming to be an NPower invoice. I don’t have an account with NPower, so I was confident that it was malware. Even if I did have an account with NPower, I’d be sure it was malware since it arrived as a link to a website on my.sharepoint.com, where someone’s personal site has presumably been hacked.

I clicked the link hoping that Sense would intercept it. It did not. Here is what I saw in Safari on my iPad:

image

(Wi-Drive is a storage app that I have installed and forgotten about). I clicked More and saved the suspect file to Apple’s iCloud Drive.

Then I went to a Windows PC, and clicking very carefully, downloaded the file from iCloud Drive. The PC is also connected to the Sense network.

Finally, I uploaded the file for analysis by VirusTotal:

image

Well, it is certainly a virus, but only 4 of 58 scanning engines used by VirusTotal detect it. You will not be surprised to know that F-Secure was one of the engines which passed it as clean.

image

Note that I did not try to extract or otherwise open the files in the ZIP so there is a possibility that it might have been picked up then. Still, disappointing, and an illustration of why you should NOT rely on your antivirus software to catch all malware.
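As an aside, this check can be scripted rather than done by hand: VirusTotal indexes reports by file hash, so computing a SHA-256 locally lets you look up an existing report without re-submitting the file. A sketch — the API endpoint reflects my understanding of VirusTotal’s public v3 API, not anything from this post, so verify it against their documentation:

```python
import hashlib

def sha256_file(path):
    """Hash a suspect file in chunks, so large attachments need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# With the digest you can query an existing report instead of uploading, e.g.
#   GET https://www.virustotal.com/api/v3/files/<digest>
# with an "x-apikey" header holding your own API key.
```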

Now the good news. I had another email which looked like a phishing attempt. I clicked the link on the iPad. It came up immediately with “Harmful web site blocked.”

image

While that is a good thing, a 50% success rate over two attempts is not good – it only takes one successful infection to cause a world of pain.

My view so far is that while Sense is a useful addition to your security defence, it is not to be trusted on its own.

In this I am at odds with F-Secure, which says in its FAQ that “With F-Secure SENSE no traditional security software is needed”, though the advice adds that you should also install the SENSE security app.

image

F-Secure Sense Firewall first look: a matter of trust

Last week I journeyed to Helsinki, Finland, to learn about F-Secure’s new home security device (the first hardware product from a company best known for anti-virus software), called Sense.

I also interviewed F-Secure’s Chief Research Officer Mikko Hypponen and wrote it up for The Register here. Hypponen explained that a firewall is the only way to protect the “connected home”: smart devices such as alarms, cameras, switches, washing machines, or anything else that connects to the internet. In fact, he believes that every appliance we buy will be online in a few years’ time, because it costs little to add this feature and gives vendors great value in terms of analytics.

Sense is a well made, good looking firewall and wireless router. The idea is that you connect it to your existing router (usually supplied by your broadband provider), and then ensure that all other computers and devices on your networks connect to Sense, using either a wired or wireless connection. Sense has 3 LAN Ethernet ports as well as wireless capability.

This is not a full review, but a report on my first look.

image

Currently you can only set up Sense using a device running iOS or Android. You install the Sense app, then follow several steps to create the Sense network. You can rename the Sense wifi identifier and change the password. The device you use to set up Sense becomes the sole admin device, so choose carefully: if you lose it, you have to reset the Sense and start again.

My initial effort used the Android app. I ran into a problem though. The Sense setup said it required permission to use location:

image

I am not sure why this is necessary but I was happy to agree. I clicked continue and verified that Location was on:

image

Then I returned to the Sense app but it still did not think Location was available and I could not continue.

Next I tried the iOS Sense app on an iPad. This worked better, though I did hit a glitch where the setup did not think I had connected to the wifi point even though I had. Quitting and restarting the app fixed this. I am sure these glitches in the app will be fixed soon.

I was impressed by the 16 character password generated by default. Yes I have changed it!

image

I was up and running, and started connecting devices to the Sense network. Each device you connect shows up as a protected device in the Sense app.

There are very limited settings available (and no, you cannot use a web browser instead, only the app). You can set a few network things: IP address, DHCP range. You can configure port forwarding. You can set the brightness of the display, which normally just shows the time of day. You can view an event log which shows things like devices added and threats detected; it is not a firewall log. You can block a device from the internet. You can send feedback to the Sense team. And that is about it, apart from the following protection settings:

image

The above is the default setting. What exactly do Tracking protection and Identify device type do? I cannot find this documented anywhere, but I recall that in our briefing there was discussion of blocking tracking by advertisers, and of identifying IoT devices in order to build up a knowledge base of their security flaws so that protection can be applied automatically. But I may be wrong and do not have any detail on this. I enabled all the options on my Sense.

As it happens, I have a device which I know to be insecure, a China-made IP camera which I wrote about here. I plugged it into the Sense and waited to see what would happen.

Nothing happened. Sense said everything was fine.

image

Is everything OK? I confess that I did not attach Sense directly to my router. I attached it to my network which is behind another firewall. I used this second firewall to inspect the traffic to and from the Sense. I also disconnected all the devices other than the IP Camera.

I noticed a couple of things. One is that the Sense makes frequent connections to servers running on AWS (Amazon Web Services). No doubt this is where the F-Secure Security Cloud is hosted. The Security Cloud is the intelligence piece in the Sense setup. Not all traffic is sent to the Security Cloud for checking, but some is. In fact, I was surprised at the frequency of calls to AWS, and hope that F-Secure has got its scaling right, since clearly this could impact performance.

The other thing I noticed is that, as expected, the IP Camera was making outbound calls to a couple of servers, one in China and one in Singapore, according to the whois tools I used. Both seem to be related to Alibaba in China. Alibaba is not only a large retailer and wholesaler, but also operates a cloud hosting service, so this does not tell me much about who is using these servers. However my guess is that this is some kind of registration on a peer to peer network used for access to these cameras over the internet. I don’t like this, but there is no way I can see in the camera settings to disable it.

Should Sense have picked this up as a threat? Well, I would have liked it if it had, but appreciate that merely making outbound calls to servers in China is not necessarily a threat. Perhaps if someone tried to hack into my camera the intrusion attempt would be picked up as a threat; it is not easy to test.

On the plus side, Sense makes it very easy to block the camera from internet access, but to do that I have to be aware that it might be a threat, as well as finding other ways to access it remotely if that is something I require.

Sense did work perfectly when I tried to access a dummy threat site from a web browser.

image

If you disagree with Sense, there is no way to proceed to the dangerous site, other than disabling browser protection completely. Perhaps a good thing, perhaps not.

It all comes down to trust. If you trust F-Secure’s Security Cloud and technology to detect and prevent any dangerous traffic, Sense is a great device and well worth the cost – currently £169.00 and then a subscription of £8.50 per month after the first year. If you think it may make mistakes and cause you hassle, or fail to detect attacks or malware downloads, then it is not a good deal. At this point it is hard for me to tell how good a job the device is doing. Unfortunately I am not set up to click on lots of dangerous sites for a more extensive test.
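For anyone weighing up that price, the subscription dominates over time; a quick sum (assuming, as the pricing implies, that the first year’s service is bundled with the hardware):

```python
DEVICE_GBP = 169.00
MONTHLY_GBP = 8.50

def total_cost_gbp(years):
    # First year's service is included in the purchase price;
    # thereafter it is a monthly subscription.
    return DEVICE_GBP + MONTHLY_GBP * 12 * max(0, years - 1)

print(total_cost_gbp(1))  # 169.0
print(total_cost_gbp(3))  # 373.0 -- two years of subscription costs more than the device
```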

I do think the product will improve substantially in the first few months, as it builds up data on security risks in common devices and on the web.

Unfortunately more technical users will find the limited options frustrating, though I understand that F-Secure wants to limit access to the device for security reasons as well as making it simpler to use. The documentation needs improving and no doubt that will come soon.

More information on Sense is here.


The threat from insecure “security” cameras and how it goes unnoticed by most users

Ars Technica published a piece today about insecure network cameras which reminded me of my intention to post about my own experience.

I wanted to experiment with IP cameras and Synology’s Surveillance Station so I bought a cheap one from Amazon to see if I could get it to work. The brand is Knewmart.

image

Most people buying this do not use it with a Synology. The idea is that you connect it to your home network (most will use wifi), install an app on your smartphone, and enjoy the ability to check on how well your child is sleeping, for example, without the trouble of going up to her room. It also works when you are out and about. Users are happy:

So far, so good for this cheap solution for a baby monitor. It was easy to set up, works with various apps (we generally use onvif for android) and means that both my wife and I can monitor our babies while they’re sleeping on our phones. Power lead could be longer but so far very impressed with everything. The quality of both the nightvision and the normal mode is excellent and clear. The audio isn’t great, especially from user to camera, but that’s not what we bought it for so can’t complain. I spent quite a long time looking for an IP cam as a baby monitor, and am glad we chose this route. I’d highly recommend.

My needs are a bit different, especially as it did not work out of the box with Surveillance Station and I had to poke around a bit. First I discovered that the Chinese-made camera was apparently identical to a model from a slightly better known manufacturer called Wanscam, which enabled me to find a bit more documentation, but not much. I also played around with a handy utility called Onvif Device Manager (ONVIF being an XML standard for communicating with IP cameras), and used the device’s browser-based management utility.

This gave me access to various settings and the good news is that I did get the camera working to some extent with Surveillance Station. However I also discovered a number of security issues, starting of course with the use of default passwords (I forget what the admin password was but it was something like ‘password’).

The vendor wants to make it easy for users to view the camera’s video over the internet, for which it uses port forwarding. If you have UPnP enabled on your router, it will set this up automatically; this is on by default. In addition, there is something strange. There is a setting for UPnP, but you will not find it in the browser-based management, not even under Network Settings:

image

Yet, if you happen to navigate to [camera ip no]/web/upnp.html there it is:

image

Why is this setting hidden, even from those users dedicated enough to use the browser settings, which are not even mentioned in the skimpy leaflet that comes with the camera? I don’t like UPnP and I do not recommend port forwarding to a device like this, which will never be patched and whose firmware has a thrown-together look. Perhaps the setting is hidden because even disabling UPnP port forwarding will not secure the device. Following a tip from another user (of a similar camera), I checked the activity of the device in my router logs. It makes regular outbound connections to a variety of servers, with the one I checked being in Beijing. See here for a piece on this with regard to Foscam cameras (also similar to mine).

I am not suggesting that there is anything sinister in this, and it is probably all about registering the device on a server in order to make the app work through a peer-to-peer network over the internet. But it is impolite to make these connections without informing the user and with no way that I have found to disable them.
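The check I did by eye in the router logs is easy to script if you can export the log: flag any outbound connection from the camera to an address outside your own networks. A minimal sketch — the camera IP, the log format, and the example addresses are all invented for illustration:

```python
import ipaddress

# Assumption: you trust only the RFC 1918 private ranges
TRUSTED_NETS = [ipaddress.ip_network(n) for n in
                ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def unexpected_destinations(connections, device_ip):
    """Return destinations the given device contacted outside trusted networks."""
    flagged = set()
    for src, dst in connections:
        if src == device_ip:
            addr = ipaddress.ip_address(dst)
            if not any(addr in net for net in TRUSTED_NETS):
                flagged.add(dst)
    return sorted(flagged)

# Hypothetical log entries: the camera at 192.168.1.50 phoning home
log = [("192.168.1.50", "192.168.1.1"),
       ("192.168.1.50", "47.88.0.10"),   # invented public address
       ("192.168.1.20", "192.168.1.1")]
print(unexpected_destinations(log, "192.168.1.50"))  # ['47.88.0.10']
```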

Worse still, this peer-to-peer network is not secure. I found this analysis which goes into detail and note this remark:

an attacker can reach a camera only by knowing a serial number. The UDP tunnel between the attacker and the camera is established even if the attacker doesn’t know the credentials. It’s useful to note the tunnel bypasses NAT and firewall, allowing the attacker to reach internal cameras (if they are connected to the Internet) and to bruteforce credentials. Then, the attacker can just try to bruteforce credentials of the camera

I am not sure that this is the exact system used by my camera, but I think it is. I have no intention of installing the P2PIPC Android app which I am meant to use with it.

The result of course is that your “security” camera makes you vulnerable in all sorts of ways, from having strangers peer into your bedroom, to having an intrusion into your home or even business network with unpredictable consequences.

The solution, if you want to use these cameras reasonably safely, is to block all outbound traffic from their IP address and use a different, trusted application to access the video feed. As well as, of course, avoiding port forwarding and not using an app like P2PIPC.

There is a coda to this story. I wrote a review on Amazon’s UK site; it wasn’t entirely negative, but included warnings about security and how to use the camera reasonably safely. The way these reviews work on Amazon is that those with the most “helpful votes” float to the top and are seen by more potential purchasers. Over the course of a month or so, my review received half a dozen such votes and was automatically highlighted on the page. Mysteriously, a batch of negative votes suddenly appeared, sinking the review out of sight to all but the most dedicated purchasers. I cannot know the source of these negative votes (now approximately equal to the positives) but observe that Amazon’s system makes it easy for a vendor to make undesirable reviews disappear.

What I find depressing is that despite considerable publicity these cameras remain not only on sale but highly popular, with most purchasers having no idea of the possible harm from installing and using what seems like a cool gadget.

We need, I guess, some kind of kitemark for security along with regulations similar to those for electrical safety. Mothers would not dream of installing an unsafe electrical device next to their sleeping child. Insecure IoT devices are also dangerous, and somehow that needs to be communicated beyond those with technical know-how.

Fixing Logitech Media Server for Microsoft Edge – and playing DSD

I run Logitech Media Server (LMS) on a Synology NAS. It works very well, better than when I used a Windows VM.

There is an annoyance though. Synology has been slow to keep its LMS package up to date and the official release is still 7.7.6. There are a few issues with this release, but I lived with it, until I discovered that LMS 9.x can play DSD files, using a DSDPlayer plugin that adds DoP (DSD over PCM) support. This means you can output native DSD provided you have a DSD DAC (and some DSD files to play). DSD is the format used by SACD, and some audiophiles swear it sounds better than PCM.

I then discovered that Synology is showing signs of updating LMS and has a beta release of LMS 9.0. You enable beta versions in the Package Center and it will offer to update.

image

I installed, then added DSDPlayer and, hmm, I could see the DSD files but they did not play.

I found a fix for the DSD issue. A user has updated the plugin, and if you add the following plugin repository:

http://server.pinkdot.nl/dsdplayer/repo.xml

you can update DSDPlayer and it works. 

image

Now I can play native DSF files through Squeezebox Touch (you also need the EDO modification) and a Teac DSD DAC. Great.

However, I then discovered that the LMS 9.0 UI does not work in Microsoft Edge, if you have the Creators Update. The links are not clickable.

There is a fix described here. I found the commit on GitHub here. However this does not update the Synology package. I logged into the Synology over SSH and made the change manually in @appstore/SqueezeCenter/HTML/Default/slimserver.css.

It works. I’m glad because I have LMS on the Edge favourites bar, and the alternative (opening LMS in IE or another browser) is less convenient.

And yes, I use Edge, in part to keep in touch with what it is like, in part because I’m resistant to a Google Chrome monoculture, and in part because it’s pretty good now (the initial Edge release was hardly usable).

There is still a problem though. The LMS Settings page does not work in Edge. I can live with that (open in Internet Explorer) but would like to find a fix.

Update: I fixed the settings issue by installing the latest LMS 9.0 with this patch. Many thanks to LMS user pinkdot on the LMS forums. However I still needed the manual fix for slimserver.css.

Email hassles with migration to Windows 10 – if you use Windows Live Mail

Scenario: you are using Windows 7 and, for email, Windows Live Mail, Microsoft’s free email application. Your PC is getting old though, so you buy a new PC running Windows 10, and want to transfer your email account, contacts and old messages to the new PC.

Operating systems generally come with a built-in mail client, and Windows Live Mail is in effect the official free email client for Windows 7. It was first released in 2007, replacing Windows Mail, which was released with Vista in 2006. This in turn replaced Outlook Express, which evolved from Microsoft Internet Mail and News, bundled with Internet Explorer 3 in 1996. Although the underlying code has changed over the years, the user interface of all these products has a family resemblance. It is not perfect, but quite usable.

Windows 8 introduced a new built-in email client called Mail. Unlike Windows Live Mail, this is a “Modern” app with a chunky touch-friendly user interface. Microsoft declared it the successor to Windows Live Mail. However it lacks any import or export facility.

The Mail app in Windows 10 is (by the looks of it) evolved from the Windows 8 app. It is more intuitive for new users because it no longer relies on a “Charms bar” to modify accounts or other settings. It still has no import or export feature.

The Mail app is also not very good. I use it regularly now myself, because there is an account I use which works in Mail but not in Outlook. I don’t like it. It is hard to articulate exactly what is wrong with it, but it is not a pleasure to use. One of the annoyances, for example, is that the folders I want to see are always buried under a More button. More fundamentally, it is a UWP (Universal Windows Platform) app and does not quite integrate with the Windows desktop as it should. For example, pasting text from the clipboard is hilariously slow and flashes up a “Pasting” message in an attempt to disguise the fact. Sometimes it behaves oddly; an open message closes unexpectedly. It is like the UWP Calculator app, another pet hate of mine: I press the Calculator key on my keyboard, up comes Calculator, then I type a number and nothing happens; I have to click on it with the mouse before it accepts input. Just not quite right.

I am getting a little off-topic. Back to my scenario: how are you meant to transition from Windows Live Mail, the official mail client for Windows 7, to the Mail app in Windows 10, if there is no import feature?

In one way I can explain this. First, Microsoft does not really care about the Mail app. Everyone at Microsoft uses Outlook for email, which is a desktop application. This is important, because it means there is no internal pressure to make the Mail app better.

Second, Microsoft figures that most people now have a cloud-centric approach to email. Your email archive is in the cloud, so why worry about old emails in your Mail client?

This isn’t always the case though. A contact of mine has just been through this exact scenario. He has happily used Windows Live Mail (and before that Outlook Express) for many years. He has an archive of old messages which are valuable to him, and they are only in Windows Live Mail.

Unfortunately Microsoft does not currently have any solution for this. The answer used to be that Windows Live Mail actually works fine on Windows 10, so you can just install it. However Microsoft has declared Windows Live Essentials, of which Live Mail is a component, out of support and it is no longer available for download.

image

Incidentally I am writing this post in Windows Live Writer, another component of Essentials, but which fortunately has been published as open source.

If you can find the Windows Live installation files though, it still runs fine on Windows 10. You do need the full setup, called wlsetup-all.exe, rather than the web version which downloads components on demand. Here it is, installed and connected on Windows 10:

image

This application is no longer being maintained though, and there are some compatibility issues with some email services. This will get worse. The better answer then is to migrate to full Outlook. However, Microsoft makes Outlook expensive for home users, presumably to protect its business sales. Office Home and Student does not include Outlook, and to buy it separately costs more, currently £109 in the UK. Another option is to subscribe to Office 365 and pay a monthly fee.

Even if you intend to migrate to Outlook eventually, it may make sense to use Live Mail for a while on Windows 10. There is an export option to “Exchange” format which means you can migrate messages from Live Mail to Outlook.

This is all more work than it should be, for what must be a common scenario. You would think that migrating from the official mail client for Windows 7, to the official mail client for Windows 10, would not be so difficult.

More on MQA and Tidal: a few observations

I have signed up for a trial of the Tidal subscription service and have been listening to a few of the MQA-encoded albums that are available. You can find a list here. Most of the albums are from Warner, which is in the process of MQA-encoding all of its catalogue.

From my point of view, having familiar material available to test is a huge advantage. Previous MQA samples have all sounded good, but with no point of reference it is hard to draw conclusions about the value of the technology.

I have used both the software decoding available in the Tidal desktop app (running on Windows), and the external Meridian Explorer 2 DAC which is an affordable solution if you want something approaching the full MQA experience.

image

Note that on Windows you have to set Exclusive mode for MQA to work correctly. When using an MQA-capable DAC, you should also set Passthrough MQA. The Explorer 2 has a blue light which shows when MQA is on and working.

image

For these tests, I used the Talking Heads album Remain in Light, which I know well.

The Tidal master is different from any of my CDs. Here is the song Born under Punches in Adobe Audition (after analogue capture):

image

Here is my remastered CD:

image

This is pretty ugly; it’s compressed for extra loudness at the expense of dynamic range.

Here is my older CD:

image

This is nicely done in terms of dynamic range, which is why some seek out older masterings, despite perhaps using inferior source tapes or ADC.

This image shows three variants of the track streamed by Tidal and captured via ADC into a digital recorder at 24-bit/96 kHz.

[Image: waveforms of the three captured Tidal variants]

The first is the track with full MQA enabled and decoded by the Explorer 2. The second is the “Hi-Fi” version as delivered by Tidal, essentially CD quality. The third is the “Master” version, in other words the same source as the first, but with Exclusive mode turned off in Tidal, which prevents MQA from working.

You can see at a glance that MQA is doing what it says it does and extending the frequency response. The CD-quality output has a maximum frequency response of 22 kHz, whereas the MQA output extends this to 48 kHz, at least as captured by my 24-bit/96 kHz recorder (the theoretical maximum frequency response is half the sampling rate).
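The bracketed remark is the Nyquist theorem, and the numbers above follow from it directly. A minimal sketch of the arithmetic, using the sample rates mentioned in this piece:

```python
# Nyquist theorem: the highest frequency a PCM stream can represent
# is half its sampling rate.
def nyquist_khz(sample_rate_hz: int) -> float:
    """Return the Nyquist frequency in kHz for a given sample rate in Hz."""
    return sample_rate_hz / 2 / 1000

print(nyquist_khz(44_100))  # CD audio: 22.05 kHz ceiling
print(nyquist_khz(96_000))  # my 96 kHz capture chain: 48 kHz ceiling
```

This is why the captured MQA output cannot show content above 48 kHz regardless of what the decoder produced: the 96 kHz recorder imposes its own ceiling.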

Do they sound different though, bearing in mind that we cannot hear much above 20 kHz at best, and less than that as we age? I have been round this hi-res loop many times and concluded that for most of us there is not much benefit to hi-res as a delivery format. See here for some tests, for example.

MQA is not just extended frequency response though; it also claims to fix timing issues. However, my captured samples are not really MQA; they are the output from MQA after a further ADC step. Of course this is not optimal, but the alternative is to capture the digital output, which I am not set up to do.

An interesting question is whether the captured MQA output, after a second ADC/DAC conversion, can easily be distinguished from the direct MQA output. My subjective impression is, maybe. The first 30 seconds of Born Under Punches is a sort of collage of sounds including some vocal whoops, before David Byrne starts singing. What I notice listening to the Tidal stream with MQA enabled is that the different instruments sound more distinct from each other making the music more three-dimensional and dramatic. The vocals sound more natural. It is the best I have heard this track.

That said, I have not yet been able to set up any sort of blind test between the true MQA stream and my copy, which would be interesting, since what I have captured is plain old PCM.

There is a key point to note though, which is that the mastering offered by Tidal is better than any of the CD versions I have heard; the old Eighties mastering is more dynamic but sounds harsher to my ears.

With or without MQA, you might want to subscribe to Tidal just to get these superior digital transfers.

Update: it seems that the Tidal stream for Remain in Light (both MQA and Hi-Fi) is a different mix, possibly a fold-down from the 5.1 release. So it is not surprising that it sounds different from the CD. The question of whether the MQA decoded version sounds different still applies though.


The MQA enigma: audio breakthrough or another false dawn?

The big news in the audio world currently, announced at CES in Las Vegas, is that music streaming service Tidal has signed up to use MQA (Master Quality Authenticated), under the brand name Tidal Masters. MQA is a technology developed by Bob Stuart of Meridian Audio, based in Cambridge in the UK, though MQA seems to have its own identity despite sharing the same address as Meridian.


What is MQA? The question is easy but the answer is not. Here is the official short description:

Conventional audio formats discard parts of the sound to keep file size down, but part of this lost detail is the subtle timing information that allows us to build a realistic 3D soundscape in our minds. … With MQA, we go all the way back to the original master recording and capture the missing timing detail. We then use advanced digital processing to deliver it in a form that’s small enough to download or stream.

At first sight it looks like another format for lossless audio, and the description on MQA’s site confuses matters by making a comparison with MP3:

MP3 brings you just 10% of what was recorded in the studio. Everything else is lost to fit the music into a conveniently small file. MQA brings you the missing 90%.

There are two problems with this statement. One is that MP3 (or its successor AAC) actually sounds very close to the original, such that in tests most listeners cannot tell the difference; the other is that audiophiles tend not to use MP3 anyway, preferring lossless formats like FLAC or ALAC (Apple’s version).

There is more to it than that though. There are three core aspects to MQA:

1. “Audio origami”: MQA achieves higher resolution than CD (16-bit/44.1 kHz) by folding extra information into parts of the audio file that are otherwise wasted: the bits below the noise floor (ie normally inaudible). There is a bit of double-think here, as removing unnecessary parts of audio files is the sort of thing that MP3 and AAC do, which the MQA folk have told us is bad because we are not getting 100%.

This is also similar in concept to HDCD (High Definition Compatible Digital), a technology developed by Pacific Microsonics in the Nineties and later acquired by Microsoft. Of course MQA says its technology is quite different!

Note that you need an MQA decoder to benefit from this extra resolution, and there is a nagging worry that without it the music will actually sound worse (HDCD has the same issue).
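To make the “below the noise floor” idea concrete, here is a toy illustration of the general principle, and emphatically not MQA’s actual, proprietary encoding: the least significant bit of a 16-bit sample sits around -96 dBFS, effectively below audibility, so it can carry side-channel data instead of noise.

```python
# Toy sketch: hiding extra data below the noise floor of 16-bit PCM.
# NOT MQA's algorithm; just an illustration of the principle that the
# LSB of each sample is inaudible and can carry payload bits.

def embed(samples, bits):
    """Overwrite the LSB of each 16-bit sample with one payload bit."""
    return [(s & ~1) | b for s, b in zip(samples, bits)]

def extract(samples):
    """Recover the payload bits from the LSBs."""
    return [s & 1 for s in samples]

pcm = [1000, -2000, 3000, -4000, 5000, -6000, 7000, -8000]
payload = [1, 0, 1, 1, 0, 0, 1, 0]

stego = embed(pcm, payload)
assert extract(stego) == payload
# Each sample changes by at most one LSB (roughly -96 dBFS):
assert all(abs(a - b) <= 1 for a, b in zip(pcm, stego))
```

A decoder that knows the scheme recovers the payload; a player that does not simply reproduces what sounds like ordinary low-level noise, which is why an MQA file still plays on non-MQA equipment.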

2. Authentication. MQA verifies that the digital stream has not been tampered with, for example by audio features that convert or enhance the sound with digital processing. This can be an issue particularly with PCs or Macs, where the built-in audio processing will do this by default unless configured otherwise.

3. Audio “de-blurring”. According to MQA’s team:

There’s a problem with digital – it’s called blurring. Unlike analogue transmission, digital is non-degrading. So we don’t have pops and crackles, but we do have another problem – pre- and post-ringing. When a sound is processed back and forth through a digital converter the time resolution is impaired – causing ‘ringing’ before and after the event. This blurs the sound so we can’t tell exactly where it is in 3D space. MQA reduces this ringing by over 10 times compared to a 24/192 recording.

If this is an issue, it is not a well-known one, at least not outside the niche of audiophiles and hi-fi vendors, who historically have come up with all sorts of theories about improving audio that do not always stand up to scientific scrutiny.
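The pre-ringing the quote refers to is at least a real property of conventional digital filters, whatever one makes of its audibility. A linear-phase brick-wall lowpass is approximated by a symmetric sinc FIR filter, so its impulse response has ripples before the main peak as well as after; a small sketch (filter length and cutoff chosen arbitrarily for illustration):

```python
import math

# A linear-phase lowpass FIR is a (windowed) sinc. Because the taps are
# symmetric about the centre, an impulse produces ripples BEFORE the
# main peak as well as after it: the "pre-ringing" MQA claims to reduce.
def sinc_taps(n=21, cutoff=0.25):
    """Ideal-lowpass FIR taps; cutoff is a fraction of the sample rate."""
    mid = n // 2
    taps = []
    for i in range(n):
        x = i - mid
        taps.append(2 * cutoff if x == 0 else
                    math.sin(2 * math.pi * cutoff * x) / (math.pi * x))
    return taps

taps = sinc_taps()
# Filtering an impulse just returns the taps: there is energy before
# the centre tap, i.e. the filter rings ahead of the event in time.
assert any(abs(t) > 0.01 for t in taps[:10])  # pre-ring exists
assert max(taps) == taps[10]                  # main peak at the centre
```

Whether reducing this ringing is audible is exactly the contested question; the sketch only shows that the phenomenon itself is not invented.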

So is MQA solving a non-problem? That’s certainly possible; but I do find it interesting that MQA has received a generally warm reception from listeners.

Here’s one audiophile’s reaction:

Have never really “done” digital before. 16/44 has always sounded ghastly to my ears right from the start and still now. MQA did indeed “fix” the various forms of distortion that I could hear present in everything where the sampling rate was taken down to just 44. … My findings – those of an improved sense of solidity in the stereo image and the lack of that horrendous crystalline glassy edge to things, especially on the fade, seem to be being mirrored in what people are hearing. It doesn’t have that thing I describe as a “choppy sense of truncation” which I suspect others mean by “transients”.
Basically, per the post above, it’s a bit like “good analogue”. Digital can finally hold its head up high against an analog from master-to vinyl performance. And not only that, hopefully, walk all over it and give us something genuinely new.

If the history of audio has shown us anything, it is that subjective judgements about what makes something sound better (and whether it is better) are desperately unreliable. Further, it is often hard to make true comparisons, because to do so requires so much careful preparation: identical source material, exactly matched volume, and the ability to switch between sources without knowing which is which, to prevent our clever brains from intervening and telling us we are hearing differences that our ears alone cannot detect.

We should be sceptical then; and even possibly depressed at the prospect of a proprietary format spoiling the freedom we have enjoyed since the removal of DRM from most downloadable audio files.

Still … is it possible that MQA has come up with a technology that really does make digital audio better? Of course we should allow for that possibility too.

I have signed up for Tidal’s trial and will report back shortly.