Saturday 24 June 2017

MEASUREMENTS: AudioQuest Dragonfly Black 1.5 - PART 2 (On "MQA Rendering")

AudioQuest Dragonfly Black - "rendering" MQA playback. Dragonfly logo turns purplish...

For those keeping track, the story of MQA has slowly evolved over the last few years. Initially, it was supposed to be just a technology incorporated into DACs that would take an encoded file (typically 24/44 or 24/48) and almost "magically" transform this data into the "original" high resolution (equivalent to 24/192+) sound while maintaining standard PCM playback compatibility. This was the case with the first products like the Meridian Explorer 2. Over the years, it appears that concessions have been made. Software decoding was rather suddenly introduced in early 2017 with TIDAL's support for MQA, branded as "MQA Core". At around the same time, DACs were classified as either "MQA Decoders" for components that can handle these 24/44 or 24/48 files start to finish (Meridian, Mytek DACs), or "MQA Renderers", which require the computer software to perform the initial "unfolding" to high-res samplerates (88 or 96kHz), followed by some kind of final processing performed by the DAC, presumably to make the sound more "accurate" to the studio production, or "authentic" (whatever that means).



With the recent firmware upgrade to version 1.06, the Dragonfly Black DAC now belongs to this latter category of "Renderer". As far as I am aware, it is the first of these kinds of devices to be released without "Decoding" capability.

If we take a step back and look at the big picture, there are implications to this two-step technique:

1. The Dragonfly DAC does not have enough horsepower to decode the "MQA Core" algorithm itself. I wrote about this back in January 2017, suggesting that the Dragonfly 1.5's Microchip PIC32MX microcontroller wasn't a particularly powerful device. Therefore, it has to offload that processing step to the computer. In the image above, I have Audirvana+ 3.0.6 running on a recent USB-C MacBook, which is able to perform the software decoding. At this time, the only other general computer software that performs MQA Core decoding is the aforementioned TIDAL app.

2. The 24/88 or 24/96 "origami unfolded" output then must contain within the bitstream instructions for the MQA Renderer (whether it be the Dragonfly or another render-compatible DAC like the Mytek Brooklyn) to perform the "proper" final processing.

Other than for those who have signed NDAs with MQA, how exactly all this works is a bit of a mystery. Technically, the presumption is that the MQA Core step is the more complicated one since it takes the encoded standard samplerate data and expands it to "fill in" what's needed to create the higher samplerate output up to 96kHz. The "Render" step, however, would have to be simpler for low-power devices like the Dragonfly. It would involve the DAC examining the bits for the MQA fingerprint, decoding the embedded instructions, and presumably using whatever it finds to output the intended analogue signal.
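Just to make the presumed mechanics concrete, here's a toy Python sketch of what a renderer's job might look like. To be absolutely clear: the sync pattern and bit layout below are invented purely for illustration since the real fingerprint and command format are proprietary (as Måns notes in the comments, the renderer metadata travels as a 1-bit stream embedded in the PCM LSBs):

    import numpy as np

    SYNC_WORD = 0xB5  # hypothetical 8-bit sync pattern, NOT the real MQA one

    def bits_to_byte(bits):
        """Pack 8 bits (MSB first) into one integer."""
        return int("".join(str(b) for b in bits), 2)

    def scan_for_render_command(pcm24):
        """Extract each sample's LSB, look for a sync word, then read a
        hypothetical command byte selecting the filter and noise shaping."""
        lsb = np.asarray(pcm24, dtype=np.int64) & 1
        for i in range(len(lsb) - 16):
            if bits_to_byte(lsb[i:i + 8]) == SYNC_WORD:
                cmd = bits_to_byte(lsb[i + 8:i + 16])
                return {"filter_index": cmd & 0x1F,         # up to 32 filters
                        "noise_shaping": (cmd >> 5) & 0x3}  # 4 settings
        return None  # no MQA signalling found; behave as an ordinary DAC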

Based on the comments from MQA over the years, especially their insistence that digital filters make a huge difference to the sound (see their PR video for example), it is virtually a given that whatever the "Renderer" does involves the use of "special" filter parameters. And given their apparent dislike of linear phase filtering (perhaps fear of the "acausal" pre-ringing bogeyman?), it's also pretty much a given that whatever MQA is doing with the sound, minimum phase filtering will be the technique of choice.

In preparing for this post, I reached out to Måns Rullgård on Computer Audiophile as he has been digging around the MQA firmware since earlier this year. In his research, he has been able to discover enough about the decoded stream such that an arbitrary WAV file can be encoded with the MQA parameters to explore how a device like the Dragonfly Black performs when responding to these instructions!

To be clear, while this "reverse engineering" can shed light on the device, it's of course not necessarily complete. Also, although certain features may be uncovered, it doesn't mean these features are actually being used in real MQA-encoded music out there.

So... With Måns' gracious assistance, let's have a look at some of the main parameters that an "MQA Renderer" like the Dragonfly Black appears to respond to...


1. DITHER BIT DEPTH

This is relatively straightforward. The DAC can be instructed to dither the 24-bit MQA Core unfolded data down to a certain bit-depth. The options appear to be 15, 16, 18, and 20 bits.
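For those curious, here's what generic TPDF dithering down to a target bit-depth looks like in code. This is just the textbook technique; MQA's actual dither generator is of course unknown:

    import numpy as np

    def tpdf_requantize(samples_24bit, target_bits):
        # Size of the new LSB expressed in 24-bit integer units:
        step = 2 ** (24 - target_bits)
        rng = np.random.default_rng()
        # Sum of two independent uniforms -> triangular PDF over +/-1 new LSB:
        dither = (rng.uniform(-0.5, 0.5, len(samples_24bit)) +
                  rng.uniform(-0.5, 0.5, len(samples_24bit))) * step
        # Add the dither, then quantize to the coarser grid:
        x = np.asarray(samples_24bit) + dither
        return (np.round(x / step) * step).astype(np.int64)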

Here's what the noise floor looks like on the Dragonfly Black with a "digital silence" track played back and recorded with my Focusrite Forte ADC (note that the recording was done to demonstrate relative differences, not measure the absolute noise floor):


The 18-bit and 20-bit options didn't seem to make a difference in my testing. As usual, ignore the 37kHz noise, which originates from the ADC rather than the Dragonfly; the ADC's modulation noise is also obvious from 55kHz onward.

2. DITHER NOISE SHAPING

In the graph above, I noted "noise shaping off". Well, the MQA stream can specify whether noise shaping should be turned on and, if so, of what variety.


As you can see, there are 4 options, numbered 0-3. In the Dragonfly, settings 0 and 2 appear to be the same. Setting 1 is quite different. When I showed Måns this result, he made an interesting observation: the Dragonfly appears to apply the dithering at 96kHz, before upsampling, with the first peak around 35kHz. In his examination of another device, the noise-shaped dithering seems to be performed after upsampling to 192kHz, with the 35kHz peak pushed up to ~70kHz. Interesting how different devices can implement the same technique differently.

I have also asked my friend with the Mytek Brooklyn to examine this phenomenon. From preliminary results with the recent Brooklyn 2.34 firmware, the device doesn't seem to respond to the noise shaping parameter. Another interesting observation is that Meridian's Explorer 2 DAC (latest 1717 firmware) doesn't seem to recognize the "MQA Render" embedded data at all. I suspect this is a result of the evolution of MQA: the Explorer 2 was one of the earliest MQA-decode DACs and its firmware was not programmed to acknowledge these "Render" instructions. Basically, it looks like different MQA devices could behave differently!
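For reference, the classic way to implement noise-shaped dither is an error-feedback loop like the sketch below. MQA's actual shaper curves for settings 0-3 are unknown, so treat this purely as an illustration of the concept:

    import numpy as np

    def noise_shaped_requantize(samples_24bit, target_bits):
        # First-order error feedback: each sample's quantization error is
        # subtracted from the next, pushing the noise floor up in frequency.
        step = 2 ** (24 - target_bits)
        out = np.empty(len(samples_24bit), dtype=np.int64)
        err = 0.0
        for n, x in enumerate(samples_24bit):
            v = x - err                      # feed back the previous error
            q = int(round(v / step)) * step  # coarse quantization
            err = q - v                      # error to shape into next sample
            out[n] = q
        return out

Higher-order feedback filters would move the shaped-noise peak around in frequency, which is presumably the sort of difference we're seeing between the settings.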


3. RESAMPLING FILTER SELECTION

Embedded within the MQA stream is another parameter that tells the Renderer DAC the "original" samplerate of the encoded music. This can range from 44/48kHz all the way to samplerates no audio device currently supports :-).

With the Dragonfly, by default, if we just send a non-MQA 24/96 signal to it, it uses a filter with the following impulse response:


As you can see, this is the normal sharp minimum phase response similar to what we saw last week when looking at the non-MQA measurements.

However, if the device is told that this is "originally" a 24/96 signal and the MQA DAC should "render" it as such, the filter changes and the impulse response looks like this:


A very short minimum phase filter is used. Hmmm. OK, no surprise given the MQA PR material.

Where it gets interesting, however, is when the Renderer is told that the "original" resolution requires further upsampling. For example, if the "original" samplerate is specified as 192kHz, the Dragonfly will then pick from up to 32 pre-specified filters selected within the data stream. I don't know whether all 32 potential settings are defined, and it really doesn't matter for the sake of this article. For illustration, I'll just show you the impulse responses from settings 0-8 alongside the standard filter shown above:


I have no idea how or why MQA selected these filter variants. I'm not even sure if there's some kind of systematic order to them! At least we can say filters 0-8 are all minimum phase, of very short duration with minimal post-ringing. Of these first 9 filters, clearly "MQA 2" looks most different. According to Måns, analysis of MQA files "in the wild" suggests that filters 4, 6, and 8 appear to be the common ones, with 4 being somewhat of a default.
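To get a feel for the trade-off these short filters represent, here's a little scipy sketch comparing a long "orthodox" linear phase low-pass against a very short minimum phase variant. Tap counts and cutoff here are my guesses for illustration, not MQA's actual coefficient tables:

    import numpy as np
    from scipy.signal import firwin, minimum_phase, freqz

    long_linear = firwin(255, 0.5)                   # steep linear phase low-pass
    short_minphase = minimum_phase(firwin(31, 0.5))  # short, no pre-ringing

    for name, h in [("long linear phase", long_linear),
                    ("short minimum phase", short_minphase)]:
        w, H = freqz(h, worN=8192)
        stopband = w > 0.55 * np.pi              # just past the transition band
        worst = 20 * np.log10(np.abs(H[stopband]).max())
        print(f"{name}: {len(h)} taps, worst stopband leakage {worst:.1f} dB")

The short minimum phase filter avoids pre-ringing but gives up a large amount of stopband rejection, and that leakage is exactly the imaging we'll see in the spectra below.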

My assumption is that if MQA somehow is "authentic", then all MQA certified DACs should utilize the same set of filters... It will certainly be interesting to see if other DACs produce the same table of filter variants!

Thoughts...

Måns has found a few other interesting "features" in MQA which I'll leave for him to describe as he sees fit in the future :-). But for the purpose of our exploration of the Dragonfly, I think the above captures most of what a "Renderer" does. As you can see, it's not really that complicated. Other than upsampling with the various filter settings, there is no evidence that the Dragonfly is able to "unfold" any more data from the MQA Core's 88/96kHz output into the 192kHz or 384kHz realm for example. The obvious question is therefore: just how much difference can this make in the sound!?

To answer this question somewhat, we have to go back to why we want digital filters in the first place. For more, I refer you to the article from last year, Digital Interpolation Filters and Ringing, among others I'm sure you'll find online. The bottom line is that a reconstruction filter allows us to remove imaging distortions during playback. Presumably, if we are interested in "high fidelity" and what is "heard" in the studio, we want to reproduce the frequencies intended to be included in the audio file within the limits of the samplerate (ie. below Nyquist) and do not want distortion components seeping through.

Remember that an impulse response is the waveform produced when an isolated 100% amplitude sample is passed through the filter. It shows us the coefficients used for the FIR "taps". With a larger number of taps (a longer filter), one can achieve better stopband attenuation with good alias suppression. But a longer filter length doesn't implicitly mean more "temporal smearing" of audible frequencies, nor does ringing automatically appear in the audio output just because it's present in the impulse response! If you look at a well recorded piece of music, low-pass filtering will already have been done in the studio and there will not be any distortions in the signal to trigger "ringing" when upsampled. The only time you will see (and perhaps hear) ringing is with very poor quality, typically extremely loud, compressed music with clipping and square wave anomalies.
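Don't just take my word for it; here's a quick sketch (generic scipy code, nothing MQA-specific) passing an isolated impulse and a band-limited tone burst through a long, steep filter and measuring how much the output deviates from the time-aligned input:

    import numpy as np
    from scipy.signal import firwin, fftconvolve

    h = firwin(511, 0.5)       # long, steep linear phase low-pass (cutoff fs/4)
    delay = len(h) // 2        # group delay of a symmetric FIR

    def ringing_energy(x):
        # Relative energy of (filtered minus time-aligned original)
        y = fftconvolve(x, h)[delay:delay + len(x)]
        return np.sum((y - x) ** 2) / np.sum(x ** 2)

    n = np.arange(8192)
    impulse = np.zeros(8192); impulse[4096] = 1.0            # energy at ALL frequencies
    burst = np.sin(2 * np.pi * 0.05 * n) * np.hanning(8192)  # band-limited "music"

    print(ringing_energy(impulse))  # large: half the impulse's band must be removed
    print(ringing_energy(burst))    # essentially zero: nothing to "smear"

The "ringing" only appears when the input contains energy at and above the cutoff, which well-produced, band-limited music does not.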

As expressed in my "Digital Interpolation Filters" post, it's not really good enough to just look at the impulse response... We have to consider the Fourier transform pair and understand what's going on in the frequency domain! Notice how MQA keeps harping on the idea of "smearing" in digital audio and emphasizing "time domain" performance, as if this were some kind of idol everyone should bow before simply because of these short impulse responses, while neglecting the importance of what's happening in the frequency spectrum with these filters.

Well folks, let me show you... First look at the normal Dragonfly filter (presumably a standard ESS filter setting for the DAC chip) as it plays 24/96 data. It does a pretty good job filtering out audio >48kHz with fairly steep attenuation, such that harmonics like the cluster around 60kHz are down by about 60dB compared to the primary signal at 20kHz:


Here's what happens if we use the 96kHz MQA filter:

Still not bad when it comes to the 19 & 20kHz tones. But it is a weaker filter, and the main difference, as you can see, is that the steep filtering around Nyquist (48kHz) is gone and imaging from the wideband noise signal is allowed to creep into the 50-60kHz frequencies.

And suppose now we tell the Dragonfly that the "original" resolution was 192kHz and it should pick MQA Filter 2 for further upsampling:

Hmmm. Clearly we're now seeing frequencies extending all the way to 96kHz (captured with a 192kHz recording), even with the -4dB wideband noise signal. We're now also seeing images of the 19 & 20kHz tones up at 76 & 77kHz!
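The arithmetic behind those image peaks is simple; with a leaky filter, 2x upsampling of the 96kHz MQA Core output lets through images mirrored about the original 48kHz Nyquist frequency:

    # Quick sanity check of where the image peaks should land:
    fs_core = 96000
    for f in (19000, 20000):
        print(f"{f / 1000:g} kHz tone -> first image at {(fs_core - f) / 1000:g} kHz")
    # 19 kHz -> 77 kHz, 20 kHz -> 76 kHz, matching the peaks in the spectrum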

Here's a look at Filter 4 which as I mentioned above is a very common "default" choice in the music released thus far in MQA:

Again, as with Filter 2, the wideband noise is allowed to leak all the way to 96kHz. Also, there's the 76 & 77kHz "image" again. If you look a little closer, you will also notice that the Filter 2 and 4 graphs have slightly lower 19 & 20kHz sine peaks. This is because my usual -6dBFS peaks caused severe overloading with these MQA filters, whereas this was not a problem with the standard non-MQA filter or even the MQA 96kHz filter. I had to lower the 19 & 20kHz sine amplitudes to -9dBFS to avoid clipping.

Whether you "like" the look of the digital filter impulse responses or think that there's any substantial improvement in "time domain" performance, these filters used in the Dragonfly Black when "rendering" the MQA Core decoded audio are just plain poor when it comes to antialiasing performance! I might as well tell my TEAC UD-501 DAC to turn off the digital filtering altogether and let TIDAL software decode to 24/96 and I would have essentially achieved the kind of effect we're seeing.

Using RightMark, we can also appreciate that the filters do measure somewhat differently:


What we see here is a comparison done at 24/96 using test signals tagged with MQA and with different filters selected. We have the standard non-MQA filter, the MQA filter for 96kHz decode, then a series of 192kHz filters - the somewhat atypical-appearing Filter 2, and the mutually similar and commonly used 4, 6, 8 trio found in most of TIDAL's albums out there. Remember that the numerical summary only analyzes the data from 20Hz-20kHz, so the values look very similar. However, the graphs show us where the differences lie... As you can see, the filters do change the frequency response somewhat, although none dips more than 1dB at 20kHz. We can also see that they create varying levels of distortion as shown in the IMD+N sweep. Not surprisingly, the stronger, more "orthodox" standard 24/96 filter results in the flattest frequency response, lowest noise, and potentially least distortion.

I appreciate that with real music and real ears, the differences will not be as noticeable as the diagrams above suggest, since naturally we can't hear all that noise above ~20kHz and recordings typically don't contain much ultrasonic high frequency content. But is there any rationale to think that this technique improves "temporal smearing" when we see all these frequency anomalies!? Remember that depending on the equipment itself, the imaging distortions above Nyquist can fold back into audible frequencies with varying severity and have unintended effects on what is heard (check out this YouTube video for a demonstration of what "aliasing"/imaging sounds like). I think it's more likely that all those who hear "huge differences" between MQA and non-MQA Studio Masters are hearing the effect of the distortion rather than some kind of time domain improvement. Presumably these people like what the ultrasonic imaging is doing in their systems!

If you're wondering, yes, these anomalies do show up with actual MQA-encoded music. For example, here's a spectral plot at the same point in TrondheimSolistene's excellent recording of Britten's "Simple Symphony" which can be freely downloaded from 2L:


If we take the 24/192 download played back through the SMSL iDEA DAC as "reference" (in GREEN; remember, the Dragonfly Black cannot play 192kHz), we see that the Dragonfly Black playing the official 24/96 download (in RED) is very similar in comparison. Likewise, without any special "rendering", the SMSL iDEA's playback of the MQA file using Audirvana+'s MQA Core decoding into 24/88 (in YELLOW) appears quite capable of reconstituting the source material up to ~44kHz (notice that the cluster of frequencies above 35kHz is generally retained).

But look at what happens with an actual "rendering" using Audirvana+ 3 and the Dragonfly Black with MQA activated (in BLUE; ie. the Dragonfly LED is now that MQA purple color). Clearly, some details are being drowned out in noise, like that cluster from 35-45kHz. Also, we can see all those frequencies above 60kHz which were not present in the original recording! This is the result of the weak filter and the subsequent imaging distortions.

I had a listen to some of the MQA-encoded 2L tracks ("Simple Symphony", "Arnesen Magnificat") as well as some off TIDAL (Led Zeppelin, Madonna, Beyoncé, The B-52's, Eric Bibb) and compared the sound with "Studio Master" versions (typically from HDtracks). Whether I thought the MQA version was better or not is really hard to say subjectively. Sure, MQA-decoded playback sounds similar to the same "Studio Master" download. But I can't in all honesty claim that MQA adds a more "relaxing" property, nor did I hear anything wonderful like an expansion of the soundstage, added bass, or other such superlatives some have claimed. I've done comparisons before with software and hardware decoding, and I'm neither hearing nor "seeing" anything here with the Dragonfly Black that would change those opinions substantially. If anything, I'm actually more concerned about this MQA "Rendering" process of the Dragonfly Black deteriorating sound quality.

Conclusions...

Over the last few weeks, as I listen to the Dragonfly Black and try to wrap my head around what I'm hearing and seeing with MQA, I continue to ask myself the simple question... Do I want this?

As you know, over the last few years I've put up a number of blog posts exploring MQA: how it's supposed to do what it claims, what we saw empirically with the first MQA DACs, the sonic effect of TIDAL software decoding, and more recently the use of hardware for decoding... And today, an exploration of what's "under the hood" with the "MQA Rendering" function, thanks to Måns and his work over the months.

The bottom line is that I am not surprised by what I'm seeing. As I suspected back in January, rendering is basically the DAC watching for an MQA "command" in the data stream, then setting its digital filter and associated low-level processing (eg. dithering and/or noise shaping) accordingly. As expected (and speculated on recently), the MQA folks like minimum phase and short FIR filters.

So, do I want this? Let me be blunt: while the blue frequency spectrum above with "Simple Symphony" might sound different, and some people might like it (the extra noise, like with DSD64 playback, can be pleasing to some), I personally think this is not a good effect and is antithetical to high fidelity playback. In fact, playing the MQA track decoded by Audirvana+ without "MQA Rendering" on the SMSL iDEA results in better low-level detail and is objectively closer to the original "master". Like I said not long ago, the term "authenticated" (MQA) appears completely meaningless, at least when it comes to sound quality.

No. I personally do not want this. So far, I have seen no evidence that what this is doing actually improves quality. "Time domain" improvements, if any, appear to be contingent on believing that these short filters can achieve the goal and are thus desirable. I do not see where the company has so far provided evidence for their claims. If there are improvements that can be made by some kind of proprietary time-alignment DSP, then just offer it as such instead of wrapping it into this unnecessarily complex encoding scheme. As for the data compression being beneficial... Well, since the announcement of MQA in late 2014, my Internet download speed has already gone up from 25Mbps to 150Mbps for around the same price. 4K HDR video streaming with multichannel sound is a reality. And Qobuz is now offering Sublime+ lossless FLAC audio streaming up to 24/192. So why bother streaming partially lossy 24/48 when we know full resolution, unmolested 24/96 or 24/192 can technically already be done and will only get easier and cheaper in the years ahead?

As I end off, I want to bring up again the obvious dichotomy between what MQA is doing here with their weak filters and what Chord is doing with DACs like the DAVE. They're polar opposites! On one side, Chord uses a proprietary FPGA-based, very steep "brick wall" filter with essentially uncompromised antialiasing (a reported 164,000-tap linear phase filter); on the other, we have Meridian/MQA with their insistence on these minimum phase, "porous", weak filters, desiring that DAC manufacturers go this route (for a licensing fee, of course).

So which approach is right? I guess I can offer you the usual recommendation... "Go listen for yourself." :-)

As for myself, I believe Chord is "more right" than Meridian/MQA. My sense is that the MQA filters above are much too weak and sonically dirty (I believe some impressionable members of the audiophile press like dirty sound, if they indeed hear what they claim to hear!). A filter as steep as the DAVE's might not be necessary, but I typically listen with strong upsampling filters like that and enjoy the sound quality with no issues, audible or intellectual, around time-domain performance.

Again, I would like to thank Måns Rullgård for all his work and stimulating ideas! He is certainly the brains behind allowing us all to have a peek under the MQA hood to see a substantial portion of how it ticks. Also, a big shout out to my buddy John for lending me the Dragonfly Black for the last few weeks of testing!

----------------------------

Recently, I came across this interesting YouTube post entitled "Tidal Masters MQA - Why the Frequency Spectrum Gap?":


Well, in light of what we're discussing here, you can probably guess that this "gap" is caused by the MQA filter. Let's reproduce what the YouTuber showed...

Take Beyoncé's song "Love Drought" (off Lemonade) as used in the video (16/44 is fine); here's a convenient spot 40 seconds into the track:

We see that the 44kHz track rolls off starting at 20kHz.

Now, let's upsample with a filter setting that is modeled after MQA... Here's an iZotope RX 5 setting that approximates the 24/96 MQA filter from the Dragonfly:


Now let's have a look at what happened to the 44kHz track upsampled to 96kHz (it could of course just as well have been 88kHz, doesn't matter). Have a peek at the same spot as above, 40 seconds into the song:


Et voilà! There's your "gap". What you're seeing is a "mirror image" beyond 22.05kHz (Nyquist for 44.1kHz samplerate) due to imaging from the use of the MQA-like digital filter that has poor anti-imaging properties. These are of course false frequencies that were not part of the original song. Because this track has that significant roll-off of frequencies around 20kHz, it's easy to see this U-shaped anomaly.
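If you'd like to reproduce this without iZotope, here's a sketch using generic scipy resampling with a deliberately short minimum phase filter standing in for the MQA-like setting modeled above (the exact tap count is a guess; the leaky behavior is the point):

    import numpy as np
    from scipy.signal import firwin, minimum_phase, resample_poly

    fs = 44100
    x = np.random.randn(10 * fs)                               # stand-in "music"
    x = np.convolve(x, firwin(401, 20000 / (fs / 2)), "same")  # ~20kHz roll-off

    leaky = minimum_phase(firwin(23, 0.5))    # very short anti-imaging filter
    y = resample_poly(x, 2, 1, window=leaky)  # 44.1kHz -> 88.2kHz, images leak

    # A spectrum of y (e.g. np.fft.rfft) now shows the rolled-off top octave
    # mirrored above 22.05kHz: the U-shaped "gap" seen in the video.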

Remember back in the day when HDtracks sold some fake high resolution tracks that were just upsampled 44/48kHz material as 88/96kHz? Well, this is an example of what happens when 44/48kHz audio is "unfolded" into 88/96kHz by the MQA Core decoder in TIDAL using their "leaky" filter of choice allowing the ultrasonic distortions to seep through. It's an example of "fake hi-res" MQA unfolding.

We have to really question what it means to be "Master Quality Authenticated" when this kind of stuff happens. Clearly that aliased image is NOT part of what Beyoncé and the engineers created, "heard" or likely intended to be present in the recording! I find it rather amusing that Lemonade in MQA should even be promoted as something worthy of "Master Quality".

Because of this kind of thing happening, I applaud HiresAudio for taking a stand against selling MQA files, given the inability to verify the content as true high resolution. The fake high frequency material is easy to see here, but if it were not for the 20kHz roll-off creating that ugly "gap", it would certainly be much more difficult to realize that there is something very wrong.

There is no magic in digital audio. And as far as I can see, there is no "gnostic" wizardry in MQA either when we look a little deeper. To all the "evangelists" out there... Beware of worshipping at the altar of the god of "time-domain" with salvation offered in the form of MQA and its weak filters. You'll likely find your faith challenged and ultimately recognize that it was misplaced despite a genuine desire for higher fidelity.

Remember, you can find significant levels of satisfaction in frequency & time-domain accuracy with room correction measurements and DSP. There is no free lunch, and it will likely take time and effort to get it just right for your DAC, speakers, and sound room. For reference, here's the zoomed-in 24/96 correction impulse response used for my DAC (TEAC UD-501), room, and speakers (Paradigm Reference Signature S8), created from measurements using Acourate and meant to be used with a convolution engine like JRiver or BruteFIR. Notice that in real life, when customizing the impulse response for the system, the coefficients are more complex than those MQA filters and also vary for each stereo channel depending on arrangement and slight differences between speakers:




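For those wondering, a "convolution engine" boils down to something like this minimal offline Python stand-in (the file names are hypothetical placeholders; JRiver and BruteFIR do the equivalent in real time):

    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    # Convolve each channel with its own correction impulse response:
    audio, fs = sf.read("music_2496.wav")    # stereo 24/96 source
    corr_l, _ = sf.read("correction_L.wav")  # per-channel Acourate-style filters
    corr_r, _ = sf.read("correction_R.wav")

    out = np.column_stack([
        fftconvolve(audio[:, 0], corr_l)[:len(audio)],
        fftconvolve(audio[:, 1], corr_r)[:len(audio)],
    ])
    sf.write("music_corrected.wav", np.clip(out, -1.0, 1.0), fs)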
----------------------------

Finally we're seeing some great weather here in Vancouver. It's summer time and as I mentioned weeks ago, I've got some R&R coming plus exciting projects on the go at work. Hope the weather's good wherever you are...

I'll likely be "gone fishing" for a little bit :-).

Enjoy the music!

Addendum December 2017:
Check out Måns Rullgård's AudioQuest Dragonfly Black Teardown. Great stuff!

43 comments:

  1. Do you have a means of separating the IMD from the noise?

    1. Hi Arve,
      Good point about separating what is IMD vs. the noise component for those measurements comparing the different MQA filters. IMD should not be from the filters themselves, and I suspect it's the aliasing that's causing the anomalies observed in the total IMD+N plot.

  2. As the innards of MQA get opened up it looks more and more like HDCD. Even the selectable filters. Is there a mechanism for switching filters on the fly to react to different content?

    The inventors of HDCD see no reason for it any more with high sample rate recording becoming quite available. Microsoft is the only injured party (they own the IP for HDCD), but the patents may have expired, so there's no value to them.

    What is interesting is to look at the system response as the impulse is stretched from one sample to 2, 3 and 4. Especially since it's very difficult to get a one-sample-wide acoustic source. Even spark discharges are not that sharp. It may shed light on the "time smear" claim.

    1. These experiments are looking only at the "render" portion of the complete MQA decoding process. The preceding "core" decoder does a lot more.

      As for filter switching, yes, this is possible, but I have not seen it done.

    2. Måns,

      Glad to finally know your name. It's always so frustrating on CA using these aliases!

      Here's the big problem with everything above in this one statement:

      "These experiments are looking only at the "render" portion of the complete MQA decoding process. The preceding "core" decoder does a lot more."

      The thing is you can't really look at just the renderer portion, as this is an entire system. You have the encode, the core, and the renderer. How can you test one part without the rest? When I compare a Tidal Master file against something I have locally on the same device, I have to take into account sample & bit rates, right? So how can I draw conclusions about what the renderer does without the Core and the Encode portions?

      Really the only way to test any of this stuff would be to have MQA deliver test files of various types. That way you could run these in Audirvana or Tidal or whatever and get some real results.

      Thanks,
      Gordon

    3. Hi Gordon, just to add my opinion to this...

      Remember that this post is intended to be a look at what the "Renderer" does; hence Part 2 of the posts on the Dragonfly. The new firmware obviously opened up the capability to tell the device "here's an MQA data stream", and here's what it can now do with dithering and filter selection!

      We certainly are aware that we don't know the intricacies of the Encoder or Core Decoder. But I can certainly see that the high-res reconstitution to 88/96kHz looks quite good as previously posted.

      I think the ability now to demonstrate the types of filters MQA is using, to put "two and two" together such as with the spectral comparison of 2L's "Simple Symphony", or to model the filter and explain the YouTube question about the frequency "gap" in Beyonce's song, allows us to understand and evaluate the technique for ourselves: an opportunity for the consumer to adjudicate quality, fitness for purpose, and whether the technique is "as described".

      Sure, we might not know everything, but we can still come to some understanding about quality and apparent limitations!

    4. Gordon, perhaps you don't realise just how simple the render stage actually is (I have no idea what MQA delivers to you). The format between the core decoder and the renderer is plain PCM with a 1-bit metadata stream embedded in the LSB. The setup information for the renderer is only a few bytes in size, repeated continuously so as to be unaffected by seeking. Since we've figured out the format of this metadata stream, we are able to add it to arbitrary PCM signals using parameters of our choosing. Furthermore, we know that the rendering process consists of upsampling and shaped dither. Using suitable test signals, we can thus probe the characteristics of each setting. That's what this post is all about.

      As an alternative view, suppose the core decoder provides a perfectly reconstructed signal at 96 kHz. Our tests then aim to find out what the renderer does with this signal. There is obviously much complexity in the encoder and core decoder, but we can still gather useful information from looking at the renderer in isolation.

  3. Hi Archimago, Hi Måns

    And impressive work. The best that I have seen yet in public. Great.

    Cheers, Juergen

    1. Appreciate the note Juergen!

      Thanks again for the discussion over the years and your ideas around the digital filter graphs ;-).

  4. Dear Archimago: You make too much sense! And have put into words and measurements what I have been expressing for years. Congratulations. You brought to light something which I myself had not realized: that my Acourate-corrected playback system takes into account the filters in my DACs! I routinely take measurements at 48 kHz sampling, which Uli then up and downsamples to create the filters for each rate. Intellectually, however, if we realize that the DAC may use different filtering at each rate, it would seem the most accurate correction would be to measure the loudspeakers at each sample rate! However, I know my DAC designer uses an ASRC with a fixed output rate of a little higher than 192 kHz (to reduce beating). So either this idea stays simple or gets way complicated! I don't have the patience to measure my 5.1 system with stereo subs six times! Please write me directly at bobkatz[at sign]digido.com so we can shmooze directly about these consternations!

    1. Thanks Bob for dropping by! I've certainly enjoyed reading your columns on Inner Fidelity, the posts on the Dynaudio Evidence M5P last year... Not to mention your books and mastering output over the years!

      I certainly concur regarding the quality of Acourate and Uli's work. At some point I'll drag out the measuring mic for the surround system :-).

      Will drop you a note over the summer...

  5. Archimago,

    Really I would like to applaud you for the work you have done here. I would agree with Juergen that you did a great job.

    The AD converter could have been a little better but otherwise really well written.

    May I ask where you got the MQA test files to generate the results? Would it be possible to get them, as I don't even have anything to test with.

    I will say that I compared a number of my HD Track files to Tidal Master files and wondered about the differential in sound. It was later pointed out to me that they came from different sources.

    I will add a few more comments on statements above.

    Thanks,
    Gordon

    1. Thanks Gordon for your comments. Great work with getting asynchronous USB into the hands of audiophiles over the years.

      Yes, the ADC can certainly be better. But since I'm doing this as a hobbyist, I figured this was enough to get me the results I wanted to learn about. ;-)

      The MQA test files were created with custom software using the parameters discovered by Måns, to have the stream played back by the Dragonfly with the selected bit depth / noise shaping / filter. I'm amazed that you don't have the equivalent test files! Honestly, I was hoping you could help verify what we've posted here just in case I made some mistake in the impulse waveform chart!

      I gotta admit, it's a strange situation with MQA. On the one hand, I know the company and Bob Stuart have given many interviews and attended the trade shows, and many articles have been written. But on the technical side, I think the consumer is really left in the dark about the mysterious workings and claims the company makes (hence bits of "investigative journalism" like this article), in light of the maturity of digital audio over the decades.

      The test files are rather simple and I'll chat with Mans about putting them up.

      BTW, I've wondered about the TIDAL vs. HDtracks mastering as well. Many do appear to be the same mastering and "null" out quite well, whereas a few others like Joni Mitchell's Blue appear to have very slight alterations made. Apart from the MQA "origami" compression process, can you tell us if MQA actually applies a DSP process to the audio to specifically "remaster" the sound?

      If so, as consumers, could we have access to perhaps a sample non-MQA-encoded 24/192 before and after this DSP process (whatever it does, presumably the "temporal deblur")? This would be a great way for so many to assess the sonic impact!

      I know this might not be something you can do, but perhaps a suggestion to pass along to the MQA folks...

  6. Arch, and Måns, interesting measurements!

    Thank you for pointing out, “Notice that in real life, when customizing the impulse response for the system, the coefficients are more complex than those MQA filters…” Folks need to know that using DSP software like Acourate, Audiolense, open source DRC, etc., one is designing a more powerful filter than MQA's, one that includes the entire playback chain: even software music players like JRiver, then your DAC, amps, speakers and room. Even within speakers/room, one can create digital XOs, driver linearization and time alignment, and in-room, construct your own preferred target frequency and timing response arriving at your ears, with a high degree of accuracy and precision.

    If a custom digital filter is to be designed and constructed, why not include all components of your sound reproduction chain in your own environment? Now that’s an end to end solution.

    Keep up the good writings!

    Mitch

    1. Thanks for the note Mitch! Hope you're enjoying the sunshine out on the Sunshine Coast :-).

      Yes, an important reminder of what "end-to-end" should mean if it is to imply an ability to completely account for the anomalies in the listening chain! To not be able to address especially the speakers and room is really missing out on the crucial "quality determining" components.

    2. Arch, indeed, really enjoying the sunshine on the Sunshine Coast, and Whistler, as I am sure you are in Vancouver!

      I wonder about the audibility of MQA's render digital filters when a typical loudspeaker's timing distortion is orders of magnitude greater than any of the filter shapes you have presented… Even the post-ringing of the MQA 2 filter is quite likely inaudible, as you explain in your article on Digital Interpolation Filters and Ringing

      Something like 95% of speakers are not time coherent, so what is the value of MQA with its potential 31 different filters when they are completely time-distorted by the loudspeaker? Folks need to know that MQA does not correct the timing distortion in multi-way, non-time-aligned loudspeakers, so it is not the end-to-end solution that MQA is touting. Even with time-aligned loudspeakers, it is unlikely that any of these filter settings would make an audible difference. For folks wondering about the value and audibility of time coherent loudspeakers, try Rod Elliot's article on Phase, Time and Distortion in Loudspeakers

      How would I know? For over seven years I have played with custom digital filters that include the entire audio playback chain, plus loudspeakers and room. I have experimented with custom digital filters that have significantly more ringing than shown here, and when switching back and forth on a set of time coherent speakers, only with gross amounts of ringing (pre or post) am I able to discern an audible difference with music as the dynamic test signal. Folks can read about this in my book on Accurate Sound Reproduction Using DSP

      I really don't get the consumer audio industry's fixation on small-signal digital filtering, instead of the business end that actually transmits the music to one's ears, i.e. the loudspeakers. If one is to apply digital filtering, it should be to correct the (gross) frequency and timing response of the speakers, which already encompasses the DAC's digital filtering, or the software music player's digital filtering, regardless of what type of digital filter is being used. This is where the real audible differences are. That's the irony. It's like the consumer audio industry is looking through the wrong end of the telescope. Bizarre.

  7. Hi guys. Regarding the use of "end to end" solutions such as Acourate: I'd appreciate your analysis of whether the first stage of the MQA decode process is a help or a hindrance. My reproduction system includes linear phase digital crossovers, and at the least I would hesitate to buy into a whole new set of (hopefully) unnecessary DACs.

  8. Beyonce MQA Gap

    This is an error within the Tidal app. The Tidal desktop app does oversample this to 88k2 (with the very soft = leaky MQA filtering), but Audirvana detects the 44k1 24-bit MQA file as a 1FS source and plays it back as 44k1 24-bit Core Decoded PCM. Likewise, the Mytek Brooklyn detects this stream as an MQA stream and plays it back at 44k1 24-bit when passed through via the Tidal desktop app or Audirvana. So it is the detection within the Tidal desktop app that upsamples this stream to 88k2, and this is wrong.

    Juergen

    1. That's not necessarily a bug. The MQA core decoder has a setting controlling whether to upsample 1x (44/48k) content to 2x (88/96k). Enabling this means the caller always receives 2x rate back from the decoder, which might simplify the setup a bit. The renderer will still upsample the output to 4x or 8x with the same leaky filters, so it doesn't really matter much in the end.

    2. Hi Mans

      Thank you for your feedback. I was unsure which assumption was right and which was wrong, so I made some measurements comparing a song from Beyonce's Lemonade MQA Master stream.

      Bit-streaming Beyonce's Lemonade via Roon to the Meridian Explorer 2 (v1717) and to the Mytek Brooklyn (v2.35), both recognize this stream as MQA (blue) and play it back as 44k1 24-bit MQA. The same happens when I set both DACs in Audirvana to "MQA DACs", so with 1:1 bit streaming to these DACs they play back as 44k1 24-bit MQA (blue).

      And when I then set these DACs in Audirvana to "Non MQA DACs", the Audirvana / Tidal combination does Core Decoding to 88k2 24-bit and both DACs then play it back as 88k2 24-bit PCM DACs. So far, so good, you may think, or at least I was thinking.

      But when I then look at the spectrum of the analog output playing as a 44k1 MQA DAC and compare that to the output of the Core Decoded 88k2 PCM playback, both are very similar. Both have the 21kHz to 23kHz "gap" and both have signal leaked out above 23kHz to about 40kHz.

      So I was wrong with my above-mentioned assumption about the 44k1 MQA DAC playback compared to the Core Decoded 88k2 PCM playback. The leakage of the filter used with 44k1 MQA is very similar to that of the filter used to Core Decode to 88k2.

      Juergen

    3. The sample rate indicated by software players as well as MQA DACs is whatever the MQA metadata reports as the original sample rate, so you can't reliably tell from this what is actually being sent to the DAC. For the Beyonce album I believe the original rate is 44.1 kHz. Since the purpose of MQA is apparently to impose its leaky filters, there will always be some upsampling using said filters before the data hits the DAC chip.

  9. Hi Archimago, I am positively surprised by your pseudo-MQA impulse responses; especially MQA-4 seems almost perfect. You are correct to mention that 'although certain features may be uncovered, it doesn't mean these features are actually being used in real MQA-encoded music out there'. MQA claims that for conventional 24/192 recordings, time-smear is around 100µs and that 3µs is achievable for newly recorded MQA material in a complete end-to-end system, by knowing all the digital filtering processes involved in the chain. If this works, the overall impulse response should be made almost perfect. In order to achieve this, MQA claims that the total impulse-response duration is reduced from around 500µs for a standard 24/192 to about 50µs, and transients come down to just 4µs.
    What intrigues me most is how MQA establishes a time smear reduction to 10µs and how this can be verified by measurements. This 10µs of time smear is roughly equivalent to that of sound waves passing through 10 meters of air. So are there microphones and measuring devices available which can detect such performance by measuring in air?

    1. No, sound travels 3.4mm in 10uS (340 m/s)

    2. The speed of sound is irrelevant since it is very close to constant for audio frequencies.

      The attenuation of sound in air increases with frequency. Perhaps this low-pass effect is what Stuart has in mind. I haven't seen anything beyond vague references to "blurring" so it's hard to tell.

    3. Thanks for the note Pedro,

      As Mans noted, it's really hard to know what MQA is referring to with the claims about "(de)blurring". I think it would be interesting if they provided more actual information about what they're specifically addressing with these time domain claims.

      Unless we know the metrics they're targeting, it's hard to understand what issue is being addressed and of course no real way to verify if the engineering goals are achieved.

    4. Thanks Mans and Archimago for your replies. Besides the reverse engineering efforts, I suppose it is important to search for methods showing how 'blurring' of sound occurs due to A/D errors, as well as how this can be 'de-blurred'. It would be nice if MQA would provide some exaggerated 'blurred' sound examples which are clearly audible. If the algorithm is capable of correcting even heavily damaged material, it might be easier to understand the importance of time-smear reduction. Is measuring end-to-end and listening to improved timing a paradigm shift, and are we indeed too focused on the frequency domain or not?

  10. Is it me (as in, I am a big dummy) or are all these MQA and fancy high bit depth features irrelevant, as streaming (quality) really depends on the type/speed of one's internet/wi-fi connection?

    1. Ricochet,
      So long as the Internet speed is capable of keeping up with the audio data stream, we could theoretically send/receive at whatever lossless bit-depth and samplerate. 32/768 over a fibre data connection? No problem...

      The issue is of course whether we can hear the difference. As I have expressed in the past:
      http://archimago.blogspot.ca/2014/03/musings-high-resolution-audio.html

      Hi-res audio really doesn't sound significantly better (otherwise it would have taken the world by storm). And for MQA, we have lossy reconstruction plus meddling with the actual bit depth since it's not an "unmolested" full 24-bit signal any more.

      As time goes on and more information is revealed about MQA, it's getting hard to figure out what specific use MQA is good for as a consumer...

      Want best sound quality with high-end hardware and beautiful sound room? Get the Studio Master.

      Want good compression for streaming? Stay with lossless 16/44.

      Pushing for what, as best I can tell, is quite a compromised system for "hi-res-like" audio just doesn't make sense for consumers. Ultimately it might just be about the record labels being afraid of releasing true "Studio Masters", preferring to sell the consumer this encoding system so they can hold on to the actual master file and maintain the *mirage* of having something special to sell.

      Another way to buy the same album...

    2. Yes...the Studio Master should be the best with today's technology. But why do top-rated labels like 2L and Unamas, with very experienced sound engineers who record in DSD or even in DXD 24/384 PCM, still claim the MQA version sounds best..? Is this for marketing and sales reasons or is it true? Unfortunately I cannot compare the DXD or DSD files with my Unamas and 2L albums... But MQA sounds damned impressive even at 24/192 only..

  11. Some additional thoughts and graphs: https://www.computeraudiophile.com/forums/topic/30572-mqa-technical-analysis/?page=20

    1. Nice Mans! Fantastic further look at the filters and effects...

      Will get in touch soon with more :-). Currently out of town and away from my main workstation.

  12. "Notice how MQA keeps harping on the idea of "smearing" in digital audio, and emphasizing "time domain" (While you harp on frequency response?)

    Yes I do. Like you, I expect more inaudible distortion when using minimal filtering. Why don't you measure ( if you can) the difference in transient response (precision). If the resolving of " timing information down to at least 8µs" is capable by humans, which corresponds to a frequency of 125kHz, couldn't this be a major contributor to perceived quality? " Time domain" is not fiction.

    1. Hi larssc,
      Do you know of any research that suggests a need for a 125kHz/250kHz sample rate? Is there any evidence that whatever time domain improvements are achieved by MQA overcome the gross frequency domain distortions we're seeing, even with an actual Decode and Render as in the "Simple Symphony" sample above, which seems to bury actual detail that's present in the non-MQA 24/192 playback?

      I don't think I'm "harping" on frequency response in the absence of recognition that time domain is important (especially since I've been advocating for DSP time domain improvements in speaker/room performance for a few years). Given that we know the frequency response is messy with these MQA filters, where's the evidence that MQA improves anything such that this is a fair price to pay?

    2. Hi Archimago and larssc,
      What tools are available to verify whether MQA's claim that the time-smear is reduced by a factor of 10 holds or not..? With today's DSP software tools like DIRAC, Trinnov, DEQX and others, both impulse response and frequency response are being manipulated in the digital domain to a far larger extent than what MQA seems to do. Isn't it very relevant to open up a discussion and perform real-life measurements in the analogue domain, doing A/B comparisons between normal FLAC and MQA encoded-and-decoded FLAC..? If impulse response is one of the key factors of the alleged improvement, it is measurable, so let's check and measure...!

  13. Why is MQA so secretive about its process anyway? If it's as great as they say, then they should tell the public exactly how it's done by publishing every detail. After all, it's patented, right?

    1. Hey Larry,
      You'd think so!

      I'm rather surprised that MQA even seems to limit significantly what they tell manufacturers that implement their tech...

  14. Yes, I'm surprised with the limited information Gordon has been given. I imagine he signed an NDA? I wonder why they don't prohibit manufacturers from making any comments? We can just assume they don't want anyone to know the truth.

    1. Who knows Larry.

      In any event, I think MQA should not underestimate the consumer's curiosity, nor try to pull the wool over the eyes of audiophiles. If we read around the forums, despite glowing reviews from the audiophile press, there is clearly no strong tendency for listeners to prefer the sound of the TIDAL "Master" playback. Given the findings, I think this is to be expected...

      Ultimately the consumer has the final say anyways.

  15. Joe, great work here. And it's so nice to see comments from interested and intelligent individuals.

    At the recent LA Audio Show I got chatting with a former Meridian employee who told me about the MLP evolution/process and how Meridian kept the streaming rights when MLP became Dolby TrueHD for Blu-ray discs. There is some original MLP DNA in the new MQA (sorry for the acronym soup!).

    As the maker of native high-resolution recordings, I see no benefit from using MLP. As for recasting the standard resolution catalogs of the major labels in MQA? That's a non starter. If you want a "high-resolution" transfer of a standard resolution Joni Mitchell recording, spend your money at HDtracks.

    MQA seems much more like an investor-funded "audiophile tweak" initiative to create recurring revenue streams through licensing. The vast majority (95%) of music tracks will not benefit from this process or from being delivered at 192kHz/24 bits because the sources don't have ultrasonics or reasonable dynamic range.

    I've tried for years to get MQA'd versions of some of my native 96/24 PCM tracks. Robert Stuart offered to convert some and provide me with an Explorer II to decode them. Despite repeated requests (I sent them the files 3 years ago) and promises, they've never provided the examples.

    The audiophile world is full of "new and improved" gear, processes, and accessories. At this point, the problem with recording fidelity is not with the equipment but with the production process and norms of loudness etc. that are required by the major labels.

    1. Greetings Mark! Nice seeing you here as usual...

      Thanks for the note from the former Meridian employee. Interesting. Who knows, maybe there's some element of the MLP compression algo code still in there.

      It's unfortunate that they still have not encoded your tracks! This really is fishy and I think speaks poorly of their commitment to even a reasonable level of transparency, since I'm sure you would have been happy to provide other information like microphone impulse responses, etc... In fact, I think it would have been great if, along with 2L, MQA and yourself could have provided sample tracks so there's more variety than classical.

      Oh well... Although I still have questions around the quality of the lossy ultrasonic compression/decompression and whether they have implemented some extra DSP for temporal "de-blur", I think by now most interested music lovers have heard some MQA samples already.

      I agree, this appears to be essentially an attempt at creating interest in another revenue stream. I suspect the consumers are experiencing major "format fatigue" by this point.

      Great work with promoting multichannel. Audiophiles who want to hear a clear difference and IMO improvement really owe it to themselves to give multichannel at home a try if possible rather than just focusing on stereo fidelity :-).

    2. I agree with most everything posted here. Thus I'm not sure where the market is for MQA. Most audiophiles are skeptical and the average consumer doesn't really care.

      I'm thinking the reason they do unnecessary lossy compression is so the "MQA" process looks more complicated than it really is, ie. a set of processes rather than some DSP deblurring that could probably be done with an inexpensive software plugin. But there's no money in that.

      IMO, production quality is more important than anything else. The other day, I heard the recently released Sgt. Pepper 50th anniversary album, which is remixed, and was blown away. We need more of this rather than some process applied to already damaged masters. Unfortunately, I like old rock, pop and blues, and am not really interested in demo material from audiophile labels.
