Saturday 14 February 2015

MEASUREMENTS: Ethernet Cables and Audio...





Remember folks, this is what an "ethernet frame" looks like. It's data. There is no "music" in there until decoded and prepared for the DAC! Notice the CRC bytes for error detection. (The numbers represent how many bytes make up each segment.)

0. Preamble

Hey folks, over the years, I have been critical of high-end audio cables... Previously, I have shown that RCA analog interconnects can result in measurable differences, with channel crosstalk changes at long lengths. But the digital interconnects themselves do not result in measurable differences, even in terms of jitter (TosLink SPDIF, coaxial SPDIF, or USB). Although my HDMI receiver's DAC isn't as accurate or jitter-free, different HDMI cables don't seem to make any measurable difference either. The only caveat is that a digital cable can just plain fail, in which case the distortion of the audio signal has a particular (annoying) characteristic which is clearly audible and not a subtle change (e.g. the sound of a poor USB cable).

So far, I have not seen any further measurements to suggest my conclusions are inaccurate. I have seen audiophile reviewers and forum posters still claim digital cables make an audible difference, and when questioned they provide lots of words but no actual empirical evidence. It has been a while since I've seen any articles claiming objective evidence from cable measurements - I haven't come across new ads or audiophile articles making such claims, although of course I may have missed some.

However, as computer audio expands, there will be opportunities to "brand" more hardware as somehow "audiophile approved", and companies that make audio cables will naturally capitalize with new lines of interconnects and cables... And as expected, the cost of these things will be commensurate with "premium" products.

Which brings us to the concept of "audiophile ethernet cables" (see here also, and recent mainstream press exposure of the "madness"). Let me be clear. If I have issues with USB cables, or SPDIF cables, making any significant contribution to audible sound quality (assuming again essentially error-free transmission of data), there is no rational explanation whatsoever that ethernet cables should make any difference. The TCP/IP protocol stack has error detection and retransmission mechanisms that allow for worldwide transmission integrity (otherwise Internet financial transactions should be banned!), and the transfer is asynchronous, so there is no temporal dependence on exact timing (jitter is not an issue given an adequate buffer to reclock the data and feed the DAC). So long as the "protocol stack" is functioning as it should between the devices, there will not be any issue. Systematic errors causing audible distortion would mean either hardware failure or poorly implemented communication software. Therefore, the expectation if we were to test or "listen to" different ethernet cables is that there would be no difference.
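
For those who want to see the "error detection" point in action, here's a little Python sketch. It uses zlib's CRC-32, which shares its polynomial with the Ethernet frame check sequence, but treat this as a toy model of the idea rather than a byte-exact 802.3 implementation:

```python
import zlib

def frame_with_fcs(payload: bytes) -> bytes:
    """Append a CRC-32 'frame check sequence' to the payload (toy model of an Ethernet frame)."""
    return payload + zlib.crc32(payload).to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    """Receiver side: recompute the CRC over the payload and compare with the received FCS."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

audio_chunk = bytes(range(256)) * 4      # pretend this is 1kB of PCM audio data
frame = frame_with_fcs(audio_chunk)
print(fcs_ok(frame))                     # True: frame accepted

corrupted = bytearray(frame)
corrupted[100] ^= 0x01                   # flip a single bit "in the cable"
print(fcs_ok(bytes(corrupted)))          # False: frame discarded, TCP retransmits it
```

Either the bits arrive intact or the frame is rejected and sent again; there is no in-between state where the data reaches the DAC "slightly degraded".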

Since I like to make sure objectively, let us at least run a few tests to see if indeed evidence can be found to support the hypothesis.


I. Test Setup

First, we must decide where to place the ethernet cables to test... You see, in any server/streamer system, we expect that there would be a few network cables in the data path. For the sake of ease in measurement, and assuming (as with audiophile beliefs about power cables) that the last leg matters most, let us place the test cables on the last leg of the data path between the last ethernet switch and the streamer (this guy also thinks the last leg is important). Here then is my setup:

Server PC <--> 6' Cat 6 UTP patch cable <--> 20' Cat-6 Gigabit generic cable (in wall) <--> NETGEAR Nighthawk R7000 router <--> 30' Cat-6A STP Gigabit cable (in wall) <--> 6' Cat 6A STP patch cable <--> Test switch <--> Test cable <--> Logitech Transporter streamer

As you can see above, if we trace the route the data takes between server and streaming device, we're usually looking at quite a bit of cable! In a typical "wired" house, much of the cable exists in the wall and would not be amenable to easy rewiring. Since I just did some renovations last year, I made sure to run high quality Cat 6A STP from the router to the sound/media room. I am not just going to test a few cables; I'm also going to try a different ethernet switch! Here are some details:

Server PC: AMD A10-5800K quad core PC, stock 3.8GHz speed, running Windows Server 2012 R2, Logitech Media Server 7.9.0 build 1420794485 [Jan 12, 2015], 16GB DDR3 RAM, built-in Realtek PCIe ASUS motherboard gigabit ethernet interface.

NETGEAR Nighthawk R7000 router: running dd-wrt.com custom firmware "kongac build 24345". Very stable with >100days uptime currently, underclocked to 800MHz just because I never needed the 1GHz speed.

The streamer/DAC device is the venerable Logitech Transporter. Remember that the Transporter only runs at 100Mbps whereas the rest of the system is capable of gigabit (1000Mbps) speeds.

The "Test switches": for the most part, I will use the inexpensive gigabit TP-LINK TL-SG1008D which I bought at a local computer store slightly more than a year ago (<$30). It's got 8 ports and fast enough for 100MB/sec (that's 100 megabytes/sec) file transfer through it from server to my HTPC:

The white thing underneath is just my LP carbon fibre brush, used to lift it a little so the front is easier to photograph. Ahem... Pardon the dust... :-)

In comparison, for a couple of the tests I will use this little guy:



A LinkPro SOHOHub 4-port 10/100Mbps switch which I believe is about 10 years old (the TC3097-8 interface controller inside came out around 1998). I found it in the attic of the house I bought, powered by a Canon AC adaptor which provided adequate juice.

For both these switches, I will keep my HTPC computer connected to one of the other ports.

The "Test cables":



So, I rummaged around my pile of computer parts and found these cables to test. Note that I was shopping at NCIX.com when I was doing some renovations and getting my network system up. I "standardized" on some rather inexpensive Cat 6A cables on sale there - hence the nGear brand which they carried.

The top picture, from the left: we have a 1-foot length of Cat 6A STP (<$3.00) - presumably the "best" cable given the short length and excellent shielding. Note that Shielded Twisted Pair (STP) cables are not necessarily better than UTP (Unshielded Twisted Pair); one must make sure the shield is properly connected at each end. Next we have presumably the "worst" cable of the bunch - a generic "freebie" 3-foot length of Cat 5E UTP patch cable that has been sitting in my pile of parts for the last 5 years. The blue plastic jacket is loose and the build quality so flimsy that I could probably pull it apart without much effort. Then we have a 10-foot length of Cat 6A (<$6.00), and finally a much longer 50-foot length of Cat 6A STP (~$15.00). Cables of the same brand will allow us to see if length makes a difference.

The green cable in the lower picture was one I found in my office. It's a generic 20-foot Cat 6 UTP cable that has been in daily use for the last 12 years... I guess you can call it "burned in"!

Sorry folks, I don't have any Cat 7 cables here. At this point, I don't see any reason to use these since I'm only running a 1 Gbps network. Anyone out there running a 10 Gbps network at home requiring Cat 7 cables? Realize that even Cat 6 is potentially capable of 10 Gbps up to 50m (>160 feet) or so.

I will measure with RightMark (newest version, 6.4.1) to look at the usual dynamic range, noise floor, and distortion, along with the Dunn J-Test signal to see if there's any evidence of jitter anomaly in the Transporter's RCA DAC output (used rather than the XLR for the sake of convenience). Some well-shielded 6' RadioShack interconnects were used (Cable C here). As usual, my E-MU 0404USB device was used for the measurements. All measurements were done in 24/96 (high resolution), or 24/48 for the jitter test.

Let the measurements begin...

II. RightMark Audio Analyzer (24/96)

Here's the summary table of all results with 5 cables with the TP-LINK gigabit switch and 2 other measurements with the old 100Mbps LinkPro switch:

As you can see, there are no significant differences in the audio output at all. Analogue output was measured all the way to 48kHz - well beyond the audible spectrum. It didn't matter whether the cable was 1 foot or 50 feet long. Likewise, Cat 5E, Cat 6, Cat 6A, UTP or STP made no difference whatsoever. In 2 of the tests (50' Cat 6A, and 3' Cat 5E + LinkPro), I was playing a 20Mbps 1080P MKV video concurrently on the HTPC connected to the switch to increase the data rate coming from the server - no difference in background noise or anything else.

A few graphs from which the calculated data were derived:
Frequency Response: Exact overlay.
Noise level: slight 60Hz hum measured down at -115dB, everything else even further below this.

IMD: Again, essentially perfect overlay with the different cables.
Stereo Crosstalk: Would be very bizarre to see any anomaly here!

III. J-Test (24-bit)

Instead of showing 7 individual J-Test graphs, I decided to overlay each one to create a composite image:


As you can see, there is some normal variability in the noise floor around the 12kHz primary frequency but otherwise, nothing sticks out. There's some low-level jitter around 12kHz, some of which I'm sure is related to the E-MU device itself rather than just the Transporter.

No evidence that any of the cables / switch changes resulted in any anomaly using the 24-bit Dunn jitter test. None of the sidebands exceeded -110dB from the primary frequency peak at 12kHz. Note that the peak itself is at -3dBFS, but I measured it a bit lower to avoid the use of the E-MU's input amplifier which would add some noise. Again, no change observed (ie. worsening of noise floor or stimulated jitter sidebands) even when the HTPC was concurrently streaming a 20Mbps movie from the server.
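
If anyone wants to run this kind of sideband check on their own captures, here's roughly how one could do it in Python. This is only a sketch: it assumes a mono 24/48 recording of the J-Test playback saved as "jtest_capture.wav" of at least ~6 seconds, and the ±4kHz search range and 20Hz skirt exclusion are just illustrative choices, not a standardized procedure:

```python
import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("jtest_capture.wav")   # hypothetical capture of the DAC output
x = x.astype(np.float64)
x /= np.max(np.abs(x))                      # normalize to the captured peak

N = 2 ** 18                                 # ~5.5 s at 48kHz for fine frequency resolution
seg = x[:N] * np.hanning(N)                 # window to control spectral leakage
spec = 20 * np.log10(np.abs(np.fft.rfft(seg)) + 1e-12)
freqs = np.fft.rfftfreq(N, 1 / fs)

peak = np.argmax(spec)                      # should land on the ~12kHz (fs/4) primary tone
print(f"Primary tone at {freqs[peak]:.1f} Hz")

# Worst spur within +/-4kHz of the primary, excluding the peak's own skirt (+/-20Hz)
region = (np.abs(freqs - freqs[peak]) < 4000) & (np.abs(freqs - freqs[peak]) > 20)
print(f"Worst sideband: {np.max(spec[region]) - spec[peak]:.1f} dB relative to the primary")
```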

IV. Summary / Conclusion

I believe if there indeed is an ethernet audio device that "sounds different" because of different cables being used, then that device should be returned because it is obviously defective. Remember folks, it is like accepting that the earth is spherical or that 2+2=4 - because that's just the way it is. Ethernet communication is an engineered system; the parameters and capabilities of this system are not only understood but designed to be the way they are by humans! You really cannot claim to have "discovered" some combination of dielectric or conductor or geometry that works "better" within an already errorless digital system unless you're claiming improved performance outside technical recommendations (in the case of Cat 6 for gigabit networks, that's runs of up to 100m or 328 feet within a reasonable ambient electrical noise environment).

It's also worth remembering that audio data bitrates are quite low. Today, I hope nobody is running anything slower than 100Mbps "fast ethernet". Although my music is generally streamed out as compressed FLAC, even if you stream uncompressed WAV files, standard stereo 16/44 CD-quality audio requires <1.5Mbps, 24/96 requires ~4.6Mbps, and stereo 24/192 ~9.2Mbps. Even if we went uncompressed multichannel, 5.1 24/96 would only use up <14Mbps. Considering how cheap gigabit (1000Mbps) networks are, there's no reason not to build upon the gigabit standard these days. There's generally no reason to complain about decent Cat 5E cabling, but splurging a little on Cat 6+ isn't a big deal. The Transporter device used in these tests is almost 10 years old at this point and limited to 100Mbps. I would certainly be surprised and disappointed if a modern audio streaming device, with even faster ethernet interface hardware, measured differently with various cables these days!
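
For those who want to double-check the arithmetic, the uncompressed PCM bitrates are simple to compute (raw sample data only, before the few percent of network/protocol overhead):

```python
def pcm_mbps(sample_rate_hz: int, bit_depth: int, channels: int) -> float:
    """Raw (uncompressed) PCM bitrate in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1_000_000

for label, args in [("16/44.1 stereo (CD)", (44_100, 16, 2)),
                    ("24/96 stereo",        (96_000, 24, 2)),
                    ("24/192 stereo",       (192_000, 24, 2)),
                    ("24/96 5.1 channels",  (96_000, 24, 6))]:
    print(f"{label:22s} {pcm_mbps(*args):6.2f} Mbps")
```

Even the uncompressed multichannel hi-res case is well under a fifth of what the old 100Mbps Transporter link can carry.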

Ultimately, I'm not suggesting anyone use the cheapest ethernet cable he/she can find. If you like the esthetics and build construction, go for it! Just realize that it's essentially impossible to argue that a functioning (free of data transmission error) ethernet cable will "sound" any different or is worth a significant cost differential based on sonic quality. The idea of specialized "audiophile" ethernet cables (or "ethernet switches" for that matter) is plain nonsense.

For the record, subjectively, I have never heard a difference between ethernet cables on my system. For fun I did have a listen to Shelby Lynne's Just A Little Lovin' (2012 Analogue Productions SACD ripped to 24/88) - sounded great to me even with the Cat 5E freebie cable and cheap LinkPro switch while a 20Mbps movie was playing off my HTPC. I have never tried those expensive cables from AudioQuest or Chord, but seriously, why bother when there's no logical rationale based on understanding of an engineered product and the lack of empirical evidence? Must a person try out or demo every claim made or testimonial uttered when some things are self-evident? Must I double check when someone comes up to me and tells me the world is flat or the sun rises in the west? Should I also try Snake Oil if someone in a crowd around the traveling salesman yelled out that it "Works for me!" without any other evidence?

Well, it looks like Chord got their hands slapped in November 2014, with the claims about sound quality in their ethernet cable ads determined to be "misleading advertising", lacking in "substantiation", and guilty of "exaggeration". Bravo to the UK's Advertising Standards Authority. Truth is important.

Bottom line: There's no evidence that any of the digital cables make an audible difference be it TosLink, coaxial, USB, or now ethernet within an error-free system.**

As usual, if anyone feels that I am in error, please demonstrate and leave a link to the evidence.

Okay... Now I really have to go do some real work :-). Enjoy the music!

-----------

** I was reminded about this post I made using the Squeezebox Touch and EDO plug-in a while back. In it, I was able to demonstrate measurable differences using a cheap, unshielded 3' "zip-cord" RCA cable instead of a proper coaxial cable (I'm sure it's nothing close to the 75-ohm impedance spec). It is a reminder that we of course should be using *proper* cabling, and that extreme situations like the one in that post allow demonstration of noise phenomena that would otherwise be highly unlikely. Notice also how this poor RCA cable degraded sound quality when pushed to 24/192, which is also outside the usual Squeezebox Touch specification but available thanks to the Triode plugin.

56 comments:

  1. Great test, but wouldn't it be sufficient to merely send data and compare what you send with what you receive to make sure it's the same? Any "noise" on a cable is not going to affect the actual data, because that noise will be ignored at the receiving end, right?

    Replies
    1. Yes, indeed any low-level noise will not corrupt data.

      But some folks still have this belief that the noise carried within the ethernet cable itself can creep into the DAC and cause noise floor issues (presumably). I wanted to demonstrate that with what I believe is a reasonable high-resolution streaming solution (Transporter playing 24/96 with the ethernet cables directly plugged into the device), these concerns were unfounded.

      Furthermore, whatever concerns people may have about long cable lengths (like 50-feet), I could not detect any difference either...

    2. What about noise that's coming from another piece of equipment in your system that they didn't have when they conducted their tests? For example, I've seen data centers with very good cable management where data cables were kept away from power cables. I've been in data centers where they didn't bother. Couldn't the noise that comes from power cables cause problems for data cables if they are right next to each other? I would conclude that there is a likelihood that this may be a problem, especially if the data cables being used don't have good shielding. Why? I've seen problems with cheap cables being used in environments where there isn't good cable management. Did these tests try to test for this? Not that I can see.

      Are these the ONLY tests in the world that cable mfg use? I don't know. Did they test every conceivable cable with every conceivable test? NO. I would think that a cable with higher purity metallurgy, good shielding, good termination and precision winding, dielectrics, etc. etc. will perform better than just your garden variety El Cheapo cable. When it comes to audio, there are a lot of factors that aren't always considered. Even listening tests aren't standardized. Just listening to a recording that has limiting and audio compression is not going to allow the listener to hear everything, because of the way it's been mastered. If you conduct listening tests, at least use recordings that have no limiting or audio compression, and hopefully with acoustic instruments rather than electronic. Listening for harmonic structure and low-level detail are just two of the skills involved. If you aren't adept at discerning subtle details, then that is a limitation you have. Not everyone is trained in how to listen to audio. Most people don't have the skill set; some people do.

      One suggestion: download the How To Listen app from Harman International, use high quality recordings that have no audio compression or limiting instead of the audio tracks it supplies, and take the tests. The app does two things. It tests your ability to hear differences in audio, but it can also be used to train you to pass the tests. It was designed by a PhD with experience in this area, and Harman mandates that all of their listening subjects pass this test and then be retested at the Harman listening labs before being allowed to serve as a "listener" for their products. It's a great way to improve one's listening abilities.

      Just make sure you have a quiet environment and spend about 1 hour a day for many months to complete and pass this training. It starts out easy but gets progressively harder; the end result is that you should come away with a better listening skill set.

      http://harmanhowtolisten.blogspot.com

      If you know any other training/test apps that do something similar, let me know.

  2. What intrigues me about the idea that digital cables sound different is the way in which those differences are described, by those who hear them, in audiophile terms. Quite how data can be re-arranged in a cable such that 'the soundstage can be opened up' or 'percussion sounds more lifelike', or whatever else is heard, is quite amazing. As one of the posters in a link above put it, "perhaps the 0's are more rounded and the 1's are straighter".

    As an observation, your test didn't include any 'audiophile grade' Ethernet cable. I fear that those whose 0's are fuller, or 1's straighter, will not be dissuaded by your analysis. A good starting point though.

    Replies
    1. Hey, maybe one day I will pop over to the local audiophile store and borrow some AudioQuests to try!

      Although I have measured Synergistic power cables in the past, I generally will try to avoid specific brand name measurements because I actually am a reasonably nice guy :-). My point is not to "shame" any specific company unless there's stuff out there already (like the Chord UK ASA ruling). I hope audiophiles will think about what I write and I hope I can educate those who may not have much experience with the technical material (which is OK!).

      Now as to the subjective descriptions... That of course opens up the whole field of psychoacoustics and the importance of controlled conditions when doing subjective evaluation! It doesn't help of course when some audiophiles feel controlled tests like DBT's are not valid...

      As for "rounded" 0 and "straighter" 1's, here is a beautiful example of analogue thinking in digital *data* transmission! Obviously the writer does not understand the technology to speak in this fashion. Overly "round" 1's or overly "straight" 0's would at some point hit a threshold and be misinterpreted leading to data error. We have no evidence anything like this is happening. I therefore cannot take this writer seriously when it comes to technical commentary.

    2. One more comment about what was raised...

      For a moment, suppose I did measure AudioQuest ethernet cables and imagine if (most likely) they did not make any difference objectively. What happens when the Super-Duper-Audio releases their Cat 8 ethernet cable with silver conductors, Ultimate (TM) shielding, and Mega-Hyper-Tesseract (TM) geometry? Do we need to also measure those before we can dissuade certain audiophile beliefs of the "faithful"? IMO, obviously that would be perverse. It's more important to educate folks to understand how things work and extrapolate based on understanding, developing critical ability in the process. This IMO is the higher calling of those who call themselves "journalists" in the audiophile press and formal blogs rather than ending up being the cheerleaders. And ultimately the onus of proof should be on Super-Duper-Audio to show the audio consumer why their product is better and worthy of whatever price the item may "command".

  3. "I was able to demonstrate measurable differences using an unshielded cheap 3' "zip-chord" RCA cable instead of a proper coaxial cable (I'm sure it's nothing close to the 75-ohm impedance spec)"...

    If I understand this correctly, the RCA cable was being used as an S/PDIF link..?

    If so, I think there is a crucial difference between this and the ethernet case. With S/PDIF it is not strictly true that "bits are bits", because the DAC is slaved to the data stream, and if this is noisy and jittery then it will not be able to do this as accurately as it would with a clean pulse stream. This would also be true with isochronous USB. Effectively the DAC attenuates the jitter, but it is constantly making small adjustments to keep its buffer filled to a uniform level (or it resamples the incoming data to match its own sample rate, but it is the same problem of establishing the incoming sample rate from noisy edges).

    In asynchronous transfers the sample rate is defined by the DAC's own clean crystal clock, and the computer is slaved to this sample rate and delivers the data in non-critically-timed packets. In this case, as long as the bits make it across the link then bits truly are bits, and jitter is immaterial.

    (Sorry if I'm stating the obvious, but I wasn't sure that everyone would be aware of the difference).

    Replies
    1. Yes. You are correct. That is exactly what I did; using an inappropriate unshielded "freebie" RCA (stereo) cable in place of a proper coaxial SPDIF cable.

      That is why the noise was a problem, and likely some data error as well from the interference. However, jitter per se was not a problem as far as I can tell. I do not believe jitter can be affected by the cable; rather, it's a property of the DAC circuitry and interface architecture (ie. this is why TosLink tends to be more jittery than coaxial - though not always).

    2. The crucial point, though, is that with S/PDIF and asynchronous USB it is not just a question of data, but of timing, and the timing is affected (however slightly) by noise and the quality of the cable. No doubt various DACs deal with this in their own ways. The best description I have read of how it might cause a problem is this one, starting at page 6:

      http://www.thewelltemperedcomputer.com/Lib/Hitoshi%20Kondoh%20story.pdf

      It tells us that the DAC must adjust its sample rate to match the S/PDIF stream and it does this using a PLL. The PLL compares a divided-down version of the DAC's sample rate to the arrival time of the data, and makes adjustments to its own sample rate accordingly. Clearly the exact "arrival time of the data" can only be based on the timing of a digital edge, and any noise on this edge constitutes jitter. Therefore the DAC can only attenuate jitter, not eliminate it. Depending on the size of the buffer, the acceptable latency and so on, maybe the DAC only makes a tiny adjustment every ten minutes, but it has to make adjustments nevertheless, and these create a minute amount of noise and distortion. So in this case it is not true to say that "bits are bits".

      If the link is merely being used to transfer a file, or the protocol is asynchronous (the link is bidirectional, the source is slaved to the DAC, and the DAC sends requests for packets of data) then bits *are* bits, but in the general case of real time streaming to a DAC using 'unidirectional' protocols that include S/PDIF, then jitter can get through even if it is effectively attenuated massively.
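
      A toy first-order PLL simulation in Python makes the "attenuate, not eliminate" point concrete (the loop gain, jitter level and sample rate below are arbitrary illustration values, not modelled on any particular DAC):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      T = 1 / 48_000                       # nominal sample period recovered from the stream (48kHz)
      n = 50_000
      input_jitter_rms = 2e-9              # assume 2 ns RMS jitter on the incoming edges

      edges = np.arange(n) * T + rng.normal(0.0, input_jitter_rms, n)

      gain = 0.01                          # small loop gain = narrow PLL bandwidth = heavy filtering
      pred = 0.0                           # locally predicted edge time (the "recovered" clock)
      recovered = np.empty(n)
      for i, e in enumerate(edges):
          recovered[i] = pred
          pred += T + gain * (e - pred)    # nudge the local clock toward each incoming edge

      residual = recovered - np.arange(n) * T   # deviation of the recovered clock from an ideal clock
      print(f"incoming jitter : {input_jitter_rms * 1e9:.2f} ns RMS")
      print(f"recovered jitter: {np.std(residual[1000:]) * 1e9:.2f} ns RMS (attenuated, not zero)")
      ```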

    3. Timing IMO is over-rated :-)

      Until someone is able to demonstrate that there actually are timing irregularities which would cause a significant change in the characteristics (like some kind of actual jitter test coming from a reasonable DAC), I think there are no worries. Just theoretical possibilities. I have seen the jitter anomalies from DACs and source players, but I don't think cables are cause for concern...

    4. Timing is overrated? OMG, I can't believe someone said that about digital audio. Timing is one of the single most critical aspects of streaming audio over a digital cable.

      Go to one of these high end cable mfg and sit down with the cable engineers and discuss this with them in person and maybe they'll show you the tests they perform. You may not even have the proper equipment to perform such tests.

      I can't even have USB hard drives on the same bus as my USB cable that goes to the DAC. Why? it can cause clicks, etc. when there is a lot of data from the hard drive being transferred when I'm playing a track from my computer. That's why any external data storage has to be on a completely different bus. I have had to switch to Thunderbolt storage devices, and happily so. Much faster and more reliable than USB for data storage.

  4. May I say that I find your analysis of USB, Ethernet and other digital cables absolutely fascinating. I do agree with the fact that data is data is data and that unless it gets corrupted somewhere down the transmission chain, it shouldn't "sound" different at the end of of the chain. My computer science courses showed that to me a long time ago. However I am also one of those weak guys that gets easily confused by all that talk about noise, interference and other "things" that appear out of nowhere and I actually fear that they will play tricks with my hearing. In short I'm afraid to trust my ears. I suppose there is a strong subjective argument that will always contradict the objective measurements. So I regularly buy expensive USB cables (you know, those that separate the power from the data, those that are cryonically treated, those that look good, aren't they supposed to play better...). Can I trust that you will keep on reminding me of these elemental truths as you do, without resorting to heated debate or insults?
    In the meantime, thank you for the excellent reading.
    Marc Lemaire

    Replies
    1. Thanks for the note Marc.

      I think the fact that you can say "In short I'm afraid to trust my ear." is a testament to honesty and recognition that it's *really hard* to remember the experience and be able to compare and contrast what we hear moment-to-moment. Especially in the realm of hi-fi audio when the resolution is already so good and we're trying to adjudicate for tiny if even perceptible differences!

      When the difference is so small, I'm happy to defer to objective evaluation... My ear/mind can have the "final say" of course when it comes to whether I want to spend the money to satisfy emotional "passion", but I can recognize that whether I spend the money or not, it's just not going to make a substantial difference either way in terms of the physical sonic output (in the case of cables like this).

      I'll try my best to highlight the "elemental truths" as the opportunity arises. And of course there's no point getting angry in debates and insults. We can all be gentlemanly in our conduct and point out disagreements without ad hominem attacks and name calling.

  5. I'm one of those people who has heard a clear difference between ethernet cables. I have no theory as to why there was a difference. Actually, let me be more precise: I replaced an ordinary ethernet cable in my system with a $30 Audioquest, and I heard a clear and obvious difference. I remain skeptical that replacing that one with a $500 cable would make a difference. I believed what you believe before the test, and I am inclined to believe it still. The fact remains however that I heard a difference sufficient to convince me that I was not experiencing psychoacoustic trickery.

    A comment regarding double blind testing, I am willing to bet you could tell an apple from an orange without a DBT. I'll even bet you could tell a Granny Smith from a Macintosh without a DBT. And so I will assert that in discerning differences between any 2 items, there is a continuum of possibilities, from differences are so great that they are obvious to there is no damn difference at all. Sometimes there really is no difference at all. Sometimes it's somewhere in between. And sometimes things are close. At these times the mind may play tricks - Psychoacoustics is a real phenomenon. So sure, there is a place in the world for DBTs.

    But sometimes it's apples and oranges.

    But like I said, I'm skeptical still, notwithstanding my experience to the contrary. So I will explain my setup in excruciating detail, and you can tell me why I heard what I heard. I am nothing if not open-minded.

    My music files are stored on an old Dell server, running Windows XP. I put a couple of 2TB drives in it, system is on a different drive, but otherwise it's a 12 year old computer. The gigabit ethernet port is on the motherboard.
    A 3 foot ethernet cable connects the computer to an ordinary 8 port switch. Another 10 foot ethernet cable connects the switch to the computer in my audio rack.
    That computer was built to Computer Audiophile's first recipe a few years ago. Solid state drive, fanless. Motherboard based ethernet port. Relatively early Atom processor.
    That computer has an Asus Xonar Essence ST Sound card. A SPDIF cable connects to my DAC
    The DAC is a Bel Canto 1.5

    One day I succumbed to marketing and decided to blow $30 on a cable test. For the cost of a nice restaurant lunch I figured what the heck. So I replaced the 3 foot cable with the Audioquest, and listened. The difference was obvious, I thought. After a couple minutes I switched back. And then back to the pricey one. And it stayed.

    So I sprung for another $50 for a 10 footer. Using this one resulted in more improvement.

    So, in your system there may be no difference. In mine there was. There was no evidence outside of this experiment to suggest any piece of equipment was defective. The computers communicate perfectly adequately with their original cheap ethernet cables. Music sounded very good through this system. It just sounded better with the upgraded ethernet cables.

    If you can explain the phenomenon, I'm all ears.

    Replies
    1. Only YOU will be able to explain the phenomenon, others can not.

      IF you really want to find out why, repeat your test (that should be really easy to pass) but do it this way:
      Have someone ELSE swap the cables WITHOUT you knowing which cable is in there (no cheating !).
      Swap at least 10 times and randomly so 3x the same cable in a row is possible.
      You make notes and the swapper makes notes, compare notes afterwards.

      IF you come close to 100% and can still hear obvious differences, then contact some hardcore objectivists at Hydrogen for instance, invite them over and prove to the 'skeptics' how wrong they are.
      That'll teach them ...

    2. Thank you for the details Seamusok. Indeed, this is a mystery and I'm glad you are enjoying the system.

      As you say, one does not need a DBT to prove to oneself differences which are obvious - like apples and oranges, or even types of apples. Those things are obvious in easily perceptible ways (by taste). Indeed, I can easily hear differences between my father's Klipsch Epic speakers compared to my Paradigm S8's without a DBT either. However, if I did do an SBT on those things, I should easily pass!

      Would you easily pass the SBT test with the ethernet cables? If you can I'd love to know and maybe we can explore in more detail what made the difference and why... But if not, that would be a great opportunity as well to ask why. To this point in time I have not heard of anyone passing a good SBT/DBT test purely with change in ethernet cables.

    3. Well this was my point: there was sufficient difference that I think a DBT would be unnecessary. If I can make a wine analogy: it was not telling one winery's Pinot Noir from another winery's Pinot Noir. It was telling Cabernet from Pinot Noir.

      No Solderdude, if you don't believe me, you tell me what's going on. If your only answer is psychoacoustics, then you don't have an argument. You have a close-minded refusal to consider an alternate theory. It is the opposite of science. Science is based in a curiosity about how things work, and is open to the possibility that new theories can replace old ones. I refer you to Thomas Kuhn, "The Structure of Scientific Revolutions", 1962.

      The fact is many, including me, have heard significant differences. (I do think some of those people are fooling themselves.) I believe it is a function of specific circumstances. Maybe it's because I'm feeding SPDIF into my DAC. Maybe the phenomenon is less obvious or non-existent if the DAC is fed through asynchronous USB. I don't know. This is why I ask.

    4. I have no issues with people who can hear differences between Ethernet cables. My main question is: why? Why would an Ethernet cable change the sound of a given system? The companies behind these cables (in this case, AudioQuest) refuse to submit any logical and scientific information. What makes one Ethernet cable better than the other? How do you control the "soundstage" and "depth" of the musical presentation through an Ethernet cable? Many questions but no answers....

    5. seamusok, how can we exclude the possibility that you are fooling yourself?

      Do a proper blind test. Once that is successful we can investigate further, for example what piece of your equipment is broken.

      I do not understand why you call Solderdude closed-minded. He is not. If anything, you are gullible.
      Where is your "new theory" and where is the evidence supporting it?

    6. While I am all for a proper blind test, another approach is to measure the waveform difference between the two Ethernet cables, sometimes called a null test. Simply, if you are hearing an audible difference then the waveform must have changed, and if the waveform has changed, the absolute difference can be measured.

      Here is an example of comparing two different software music players (and two different USB cables) using this differencing technique: JRiver Mac vs JRiver Windows Sound Quality Comparison

      Another example showing how this differencing technique is capable of measuring below our ear/brain auditory threshold: Fun With Digital Audio – Bit Perfect Audibility Testing

      Archimago has used a similar differencing technique using Audio DiffMaker in several of his articles. I have used DiffMaker as well and we both independently ended up with virtually identical results using either DiffMaker or the more manual approach of editing two sets of recorded tracks, inverting one and mixing them together.

      The difficulty is that you may not have the gear or the desire (or it may not even be feasible with this kind of setup) to perform such a test, but it would without a doubt prove whether there is a measured difference or not.

      A half-way point is if you could record music via digital loopback or at the analog outputs, once with one Ethernet cable and then repeat with the only change swapping the Ethernet cable and post the two recorded files. Then others can a) ABX the files in Foobar or b) edit the samples to be the same start and end points and perform a difference test. Then we would all know for sure one way or another. Want to give it a go?
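
      A minimal sketch of the differencing idea in Python (it assumes two already trimmed, time-aligned mono captures named "cable_a.wav" and "cable_b.wav" at the same sample rate; real-world use needs careful alignment and drift handling, which is what a tool like DiffMaker automates):

      ```python
      import numpy as np
      from scipy.io import wavfile

      fs_a, a = wavfile.read("cable_a.wav")   # hypothetical capture made with cable A
      fs_b, b = wavfile.read("cable_b.wav")   # same passage captured with cable B
      assert fs_a == fs_b, "captures must share a sample rate"

      n = min(len(a), len(b))
      a = a[:n].astype(np.float64)
      b = b[:n].astype(np.float64)

      def rms(x):
          return np.sqrt(np.mean(x ** 2))

      null = a - b                            # invert one capture and sum = subtract
      rel = 20 * np.log10(rms(null) / rms(a) + 1e-20)
      print(f"Null residual sits {abs(rel):.1f} dB below the music itself")
      ```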

    7. We could spend the rest of our lives doing blind tests in order to avoid being called closed minded over magic crystals etc., but in this case we don't need to. As Archimago points out in his article, digital audio is an entirely man-made system that was designed specifically so that as long as the numbers get through, then the system has worked *by definition*. For this not to work is like saying that it is possible for 2+2 not to equal 4, or for a GBP not to equal 100 pence. In other words, impossible by definition.

      If, beyond the issue of the data getting through 100% perfectly, there are electrical issues that are influenced by the cable (e.g. hum loop), there are two answers:
      1. Such problems can be measured. It is not like measurements of amplifiers and speakers where (as with listening tests) people can object to what is used as the stimulus and what parameters are noticed vs. which are ignored. In this case there is no mystery: such problems would show up as noise and/or distortion. If the data got through 100% and the analogue measurements are identical there is no difference.
      2. Attempting to fix a defective design by use of a special cable is a road to nowhere. If you can hear and/or measure a difference there is something wrong with your electronics somewhere. Fix the electronics so that the price of the cable has no effect, as intended.

    8. Long read with some technical blabla ahead ... skip if not interested.

      The argument the open-minded audiophiles pop up isn't that the data changes. All of them know by now that it isn't toppled bits as that would result in other things than degradation.
      What is claimed is that noise on the data lines, power supply lines (USB), ground lines and/or clock lines (if applicable) influences the decision point of the electronics (receiver end) that is connected to the infernal cables.
      We have to differentiate between 2 types of 'noise'.
      1: differential mode, which is an unwanted signal variation between 2 (or more) wires in one (or more if applicable) cable(s). For USB for instance a 'noisy' 5V supply. That noise can/will/is generated by the 5V power supply of the PCB that with its long wires and thin PCB traces on the MoBo will vary in value dependent on currents drawn. For the proper operation of the PC this doesn't matter at all as long as the 5V doesn't drop too much (think less than 1V).
      In the case of a USB connection that may be problematic if the average voltage, or dips in it, are lower than the voltage that the receiver chip needs.
      Don't worry too much these have a wide range and have regulators on board as well.

      2: differential mode: this is noise that is present on all wires at the same time. This noise thus isn't present between the wires in the cable and to this point can do no harm. Those signals can range from the audible range to several GigaHertz and are present between the 'equipment' and the actual ground we walk on.
      The 'impedance' between the noise source and ground can be between 40 and 100 Ohm or so.
      You do not even have to have a connection the safety ground to create a current.
      Standing on the ground even with insulating shoes is 'enough' for very high frequencies.

      The 'theory' (that still is not proven but embraced by open-minded people) is that any of these noises, be they common or differential, will affect the timing of a '0' and '1' decision.
      This is because digital signals in the real world (and especially in cables which are relatively high in capacitance) do not change from 0 to 1 and vice versa instantly. They 'slope', ring (sinewave decaying over time), reflect on improper impedances etc.
      When these problems become too big bits will topple so... the open-minded people know it isn't that bad but it may be present none-the-less and shift the decision point in time.
      Without drawings a bit hard to explain the above though.
      In any case the bits are not switching exactly on time on the receiving end.
      There is only 1 word for this phenomenon and that's JITTER (everybody runs away scared now !)
      Jitter is NOT a problem here, because the data's value (0 or 1) is NOT determined on the edge of a signal but between the edges of a digital signal.
      That jitter is thus removed from the data the moment it is stored somewhere to be processed.
      Think ethernet and USB, SPDIF other digital links.
      The jitter is thus 'removed' at that point.

      sorry have to divide my rant in 2 posts... bloody limits !

    9. In a DAC the 'digital rules' change the moment we reach the conversion point in the actual DAC chip.
      At this point its not the data that's important (it may even jitter considerably as it is clocked anyway) but the CLOCK is important as well as the immunity of the conversion part + analog parts for differential noise on its power supply.
      Noise on the clock and power supply may result in a (verrrrrry slightly) different analog signal LEVEL.
      NULL TESTING won't show this that well simply because these differences are sooooo small that the differences between the sample frequencies of the DAC and ADC of the 'recording' device are free running and cause BIGGER differences.
      We (the closed-minded people) do not care about such small differences in amplitude because we KNOW that these differences are too small to be audible.
      The 'open-minded' crowd 'suspects that maybe it could be that the closed-minded folks have it wrong because they don't WANT to hear, have crappy ears, different gears whatever they can come up with without knowing HOW things work.

      That clock signal is important because unless A-sync USB is used the clock always has to be derived from incoming signal. SPDIF for instance the clock is derived from the (jittery) data as well as in normal USB. That derived clock is usually 'filtered' so it becomes more constant (jitter removal, often with PLL or something effectively similar) still that clock will vary slightly.
      This too is JITTER (run everybody ! nearest door).
      Again this can be measured and quantified and chip makers go through much effort to lower this.
      This will ONLY result in 'timing' being off and will result in a 'different' voltage.
      That will only be slightly different though and close-minded people will be confident it is below audible limits.
      Again the open-minded crowd disagrees but only have their (flawed and often compromised) hearing as tool and reject measurements as they don't show what they perceive.

      Sorry ... turns out in 3 posts !!
      Talk about a long and pointless story that won't 'convert' any body.
      Why bother anyway ... well some people may find it interesting.

    10. Now the current theory is that the 'noise' in the data (which is de-jittered all the time during processing) can STILL reach the actual conversion stage and cause timing issues there through bad PCB layout, wiring layout etc. and thus more or less 'bypass' all the de-jitter circuits and even reach the analog part and amplifiers behind it.

      Well, working in that field occasionally I know that this CAN be problematic IF the PCB (printed circuit board) is NOT properly designed. The ground plane and data lines specifically. Also some chips are more sensitive than others.
      So did they actually find something to slap the dog with ?
      NO in my opinion... a little bit of yes because the influence MAY be there in reality resulting in (again VERRRRY slightly) different analog output voltages.

      I still say NO (and maybe yes) because of one simple reason...
      The RMAA and jitter measurements as well as nulltests are all done in the analog plane and often with ground loops and lots of wiring.
      IF these show no differences than the 'fear' is unwarranted because it IS NOT there.
      Still the 'open-minded' ones are free to believe it IS a problem because their perception and 'knowledge through experience' tells them so.

      I still say a small 'YES' because of Seamusok's suspicion that it may be equipment and conditions dependent.
      A very small yes: IF they have a crappy DAC design they may actually be bothered by it, BUT from my POV it will still be below the audible limits of the skeptics (close-minded people seem to have poor hearing or are plagued by expectation bias and 'refuse' to hear it mentally)

      In that case (as XNor and the rationalaudiophile said) they have a faulty component (a noise sensitive DAC) and should replace that.

      In my experience (and I still have to be proven wrong) ANY sighted test is flawed, and BLIND tests (they really DON'T have to be double blind !) are more conclusive but more exhausting and demotivating/saddening, as open-minded people don't seem to be able to tell the obvious differences any more UNLESS something is really wrong, and in that case the differences are VERY measurable already. BUT I am closed-minded and thus not open to alternate theories and the long held belief that the 'skeptics' can't measure properly yet. So be it... I call this religion, no, a scary cult-like religion based on lack of knowledge.

      And NO .. despite what people CLAIM to hear ... ethernet cables cannot create those differences (closed-minded isn't it ?).
      It's the SAME differences people describe when using magic stones, analog cables, mains cables, and all other kinds of audiofoolery.
      THAT should tell them something BUT it doesn't, instead all of this 'works' because open-minded people are CONVINCED the(ir) hearing is so much more accurate than any measurements, measurements FAIL and hearing does PREVAIL.


      So to all closed-minded people I say ... enjoy the music.
      To all open-minded people I say .... TRY to enjoy the music and TRY to realise your hearing is a very POOR (technical) analyser.
      P.S. sorry for some typos here and there and use of language (not Native English speaking)

    11. Errata:

      In the first of the 3 posts, behind the number 2: it should have said 'common mode' instead of 'differential mode'.

      And sorry for the long sentences...

    12. One suggestion is that the Ethernet cable you swapped out may not pass Ethernet spec. Another is that it's all placebo effect.

      The thing to realize is that when you are using an Ethernet connected DAC you have left the realm of real-time. Data is buffered and played back out of the buffer. Any jitter on the Ethernet cables matters for nothing unless the buffer underruns. Then you get skipped playback.

      Check out this Youtube video demonstration:
      https://www.youtube.com/watch?v=kOp3WtOeDnE

      Please keep in mind that audio is STILL playing even though no cables are connected.
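
      A toy Python model of that buffering point (block size, pre-buffer depth and packet sizes are arbitrary illustration values): the output side pulls samples at a rate set purely by the local clock, so irregular packet arrival never shows up in the output unless the buffer actually runs dry.

      ```python
      import random
      from collections import deque

      random.seed(0)
      BLOCK = 1_024                          # samples the DAC pulls per tick of its own local clock
      buffer = deque([0.0] * 8 * BLOCK)      # streamer pre-buffers a few blocks before playback starts
      underruns = 0

      for tick in range(200):
          # "Network" side: packets arrive with irregular timing and sizes...
          buffer.extend([0.0] * random.randint(0, 4 * BLOCK))

          # ...DAC side: output pacing comes from the local clock, not from packet arrival times.
          if len(buffer) >= BLOCK:
              for _ in range(BLOCK):
                  buffer.popleft()
          else:
              underruns += 1                 # only an empty buffer is audible, as a dropout/skip

      print(f"{underruns} underruns in 200 ticks")
      ```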

  6. Why? I am skeptical that the Hydrogen boys would admit that they hear a difference. I doubt that they are open-minded in this kind of test. They too have expectation bias...

    Replies
    1. Hydrogen boys don't need to hear differences at all, they only need to WITNESS someone pull this off in front of their eyes while they are present.

      I am sure they won't object to this SBT as long as they can clearly see that the test is done correctly.

      This test is all about removing the 'knowing what's being used' part....

    2. That way would sure be VERY interesting.

  7. I am in awe of (and slightly scared by) your willingness to spend time and energy on this stuff.

    Replies
    1. Lazy. I guess I'm not lazy :-).

      Bottom line is that it only took <2 hours to do the test. A bit more to do the writeup of course...

      I've wasted much more than 2 hours of my life doing much less...

  8. I am in awe of (and slightly scared by) anyone's willingness to spend time and energy listening and then describing significant sonic differences in ethernet cables .

  9. First - I love this blog.

    Next - It's fascinating that every "belieber" in these Ethernet cables views DBTing as a waste of time. Yet those same people spend time defending themselves on blogs and forums. If only one would submit to an unbiased, documented DBT, maybe all the skeptics could be kept at bay? Maybe they would spend time taming the magic as opposed to scrutinizing it?

    Why hasn't Audioquest faced the same legislation as Chord? Do they not make the same claims? Different jurisdiction?


  10. Hey Archimago.
    Check this out :-) A cool addition to my new AUDIOPHILE Ethernet and USB cables ....

    http://bgr.com/2015/02/19/sony-premium-sound-memory-card/

    ********************

    And this!!??
    …ambient room temperature!? .... 2 x 2,4 m = 65.000 € …. SCARY :-)

    https://www.facebook.com/kjwestonelondon/posts/1012754585421389


    Sometimes I am ashamed to call myself an audiophile :-(

    Replies
    1. I don't think we should be ashamed of the "audiophile" label... We just need to take it back from the abyss of magical thinking and other nonsense!

      Sad to see Sony become like this. I guess it's business... They saw a niche "vulnerability" and took it.

      As for Transparent audio, I love the comment below:
      "Never thought I'd say these words "this is total over kill for my system". But I could be wrong."

      That last bit about possibly "being wrong" is the core of the neurotic tendency isn't it!? It's fine if a person wants to spend all that money on whatever they want... But I think it's a lot wiser and elegant to have some understanding about *how* the technology works and be able to judge value with confidence!

    2. Agree with all you said, except the "I guess it's business" part.
      You were probably being ironic, but it reminds me of the well known gangster/Machiavellian phrase: "Nothing personal, it's just business." I'd really like to hear a few SINCERE words from the Sony ENGINEERS who made this 64GB piece of *low noise* miracle. But who asks engineers anything anymore.

      Few days ago at CRO presentation I had the chance to ask DCS sales manager mr.Raveen Bawa about stock (few bucks) power and digital cables that were in the boxes with their flagship DAC/SACD player Vivaldi = 4 boxes = c/a 100kg = c/a 90.000€ !? … WTF!

      Question was something like: Are stock cables good enough for the ultimate DAC. If YES, did you do some research, and what are the results. If NOT, does he think that it is OK that customer gets inadequate cables with 90k€ DAC.

      The answer was something like: We do the tests with standard cables, but when we do listening sessions we use branded cables and concentrate on the *synergy* (he was talking about all cables, not just power & digital). Vivaldi performs QUOTE: "ABSOLUTELY FANTASTIC!" with standard cables, but branded cables (he mentioned Shunyata, Transparent, Nordost & MIT) help Vivaldi QUOTE: "UNVEIL THAT EXTRA ENERGY!" ----- GO FIGURE :-)
      Then he started something about "the sound of specific brand" (what about that *synergy* you talked about?)…."they don't want to intrude their sound preference" (REALLY?)…."final price?!" (they could not squeeze good synergy brand cables in 90k€? c'mon!) …. and other political BS….

      After all he is SALES MANAGER (good one), and tomorrow he may work (lie) for Transparent, MIT, Nestle or whatever …. Maybe DCS is preparing their own line of cables…?

      To my satisfaction, after the presentation the only thing that people talked about was the 2€ power cord in the 90k€ DAC that works "ABSOLUTELY FANTASTIC!" I just wanted to ask mr.Bawa if he could tell me the name of that next level of excellence. ÜBERABSOLUTELYFABULOUS? Didn't have the chance. Maybe you can help me? Not my native tongue.

      IMO things got a little out of control. Too many sharks aiming at that niche with almost no consequences. I hope the *objective tide* is coming, as you mentioned in your recent posts. Your blog is certainly helping it.

      PS
      Sorry mr.Bawa, *Nothing personal...* ;-)

  12. There's an easy way to see if any errors occur in Ethernet transmission. We will assume that if errors do occur during transmission then some audible difference might conceivably be found, but that if there are no errors then the cable is the wrong place to look for the cause of any difference.

    Ethernet packets contain a checksum as you noted, and that checksum is monitored by the receiving computer. If any errors occur during transmission, the error counter will show it.

    Every operating system tracks checksum error count. On Linux open a terminal window and type "/sbin/ifconfig". On Mac OS X, open a terminal window and type "netstat -I en0". Unfortunately I don't know how to do this on Windows but I'm sure it's there somewhere or you may have to use a third party tool like wireshark.

    I stream from a Linux server to a Mac. The Mac has received 200 GB of data since its last poweron 21 days ago with no errors. The Linux server has sent 45 GB of data since its last poweron, also with no (transmission) errors. The path from server to Mac is through two gigabit switches, about 80 feet of Cat 5 (not even -e) cable, two short Cat 5e patch cables, and a patch panel. The link runs at gigabit speed.
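
    On a Linux box this check is also easy to script; here is a small sketch reading the kernel's per-interface counters (the interface name "eth0" is just a placeholder - substitute your own; on other platforms the ifconfig/netstat commands above show the equivalent numbers):

    ```python
    from pathlib import Path

    IFACE = "eth0"                            # placeholder: substitute your interface name (e.g. enp3s0)
    stats_dir = Path(f"/sys/class/net/{IFACE}/statistics")

    for counter in ("rx_packets", "rx_errors", "rx_crc_errors", "rx_dropped"):
        path = stats_dir / counter
        value = int(path.read_text()) if path.exists() else None
        print(f"{counter:14s}: {value}")

    # A healthy server/streamer link should show rx_errors and rx_crc_errors
    # sitting at 0 even after hundreds of gigabytes have been transferred.
    ```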

    Next we'll be hearing that ethernet causes the sound to be cold, hard-edged error-free digital sounding and the better ethernet cables have a "warmer" sound.

  13. A summary of how I believe more open-minded people think/feel....

    The open-minded people, with their superior hearing and more revealing gear, are, like the closed-minded people, convinced that the differences in sound they ARE perceiving are not caused by bits toppling over.
    Bits are bits, that is not under debate.
    It's the 'physical quality' of the electrical signal representing bits that is questioned.

    They ARE perceiving this (so it is real by definition as perception should not be doubted) as it is soooo clear and obvious that a (double) blind test isn't even needed.
    They won't even consider a blind test anyway as it is flawed by design and purpose made (by skeptics) to NOT show differences, where there clearly ARE differences acc. to their ears.
    I think most if not all open-minded people use this rationale.

    The open-minded people are open to the idea that noise and jitter which 'rides' on the electrical (and optical ?) signal, which is pure analog, is somehow 'embedded' in the digital signal or, MAYBE could (by their theories) do 'something' to the signal that is deteriorating sound.
    Closed-minded people are not able to measure this and MAY have missed it completely.

    That digital but in reality 'analog electrical signal' is going through the cables and they assume the cables are 'capable' of changing the 'digital' signal in the same way as it happens for analog systems, hence similar audible differences.

    The skeptic and closed-minded crowd don't seem interested in this (to them unexplained) phenomenon and refuse to take the word of the better hearing/equipped open-minded people for it. This genuinely hurts their feelings, it seems.

    Closed-minded people seem to refuse to accept that open-minded people CAN actually hear beyond inherently faulty and unrevealing measurements that closed-minded and non scientific people have learned to trust.

    Cables (according to open-minded people) thus MAY possibly alter or even induce those, somehow embedded, signals which creates the exact same 'effect' that magic stones, analog cables, vinyl, tubes, mains cables, cable risers and other tweaks also are capable of 'doing'.
    Somehow the skeptics (the closed-minded people) can not measure this phenomenon with their failing and insufficient measurement methods; measurements won't show any relation with REAL world audio.

    These audible differences can easily be heard ... and they are not alone...
    Hundred others have ALSO heard the same so it cannot possibly be 'psychoacoustics' at all.
    It would be very weird if they ALL would be fooling themselves after all so that can NOT be the explanation.
    Psychoacoustic effects are thus ruled out (although some do believe some of them are fooling themselves... but certainly NOT all of them).

    As open-minded people usually seem to lack the test equipment and knowledge and feel 'ridiculed' they often resort to demanding an explanation from the 'closed-minded'.
    These heard differences ARE very real after all ... Really they are... no doubt about it.
    Why the closed-minded people are NOT willing to investigate probably baffles them.
    They see this as a lack of interest, fear of having to admit it really existed and makes them look foolish, incapable or being pseudo scientists.
    Maybe they even suspect it's a conspiracy to hide the fact that measurements do NOT say enough about real sound and complex music signals.

    It's electrical noise, jitter that ruins the sound, not changing of bits.
    Digital is in essence just another form of analog with the same flaws to them.
    It all seems to be caused by a lack of understanding how signal processing works yet they believe that engineers only THINK they know how it works but don't even know 10% of it.
    The closed minded people appear to have a great gap between the theory of operation and the practice to them.
    The designers/technical people refuse to accept that radical (and out of the box) theory.

    Replies
    1. Regarding ethernet cables -- the topic of this blog post -- if the bits got from point A to point B without error then the cable did its job and is not the source of any audible difference. My comment above is that there are tools available to anyone to answer this question unambiguously.

    2. I have $2000 that says you cannot tell the difference between a BJC certified CAT6 cable and a $350 AQ Vodka cable.

      It's all laid out here: https://www.youtube.com/watch?v=kOp3WtOeDnE

      Let me know when you would like to do this.

  14. That is correct and I agree.

    However, the more open-minded folks are of the opinion that sound quality is not solely determined by the 1's and 0's but is 'affected' by 'analog artefacts' and timing related issues in the electrical signal that represents those 1's and 0's.
    They strongly feel this way and are convinced the 'digital = only 1 and 0' folks are disregarding the possibility that there is more to it, based on what they hear when they do their sighted tests.
    They refuse to do a blind test simply because they feel the test is flawed and the differences are so substantial that it isn't even needed.
    The only way to convince some of those individuals is to persuade them to take a blind test and accept the outcome as valid.

    Archimago posts clear-cut evidence that 1's and 0's is all there is to it and that timing and noise is of NO concern nor influence in the signals passing through ethernet.
    Whether one accepts this evidence as real evidence is up to the reader to decide.
    Some readers disagree...

    Replies
    1. ... my God.

      How do those people think computers can even work?

      If an analog artifact from the ethernet cable can flip a bit in the PHY compared to the desired output it's defective.

      If they're magically somehow getting in from the ethernet cable, through the PHY, and out as line-level logic "with analog artifacts", well, nothing listening to the output from the PHY (the MII chip, in other words) should care at all.

      And even more importantly, the DAC is fed bits out of RAM that are fed in from the TCP layer in software, from [handwaving away layers of stuff] a DMA write from the NIC's main controller.

      Unless those "artifacts" magically persist in RAM that is only readable and writeable as 1/0, it's flat-out impossible.

      Their strong feelings depend on them having absolutely no idea how computers work.

  15. Hey Archimago—Lee from Ars here. Would love to talk with you further on this, and about a forum post on the same subject that shows different results from your testing (http://www.stereo.net.au/forums/index.php?/topic/82299-ethernet-cable/page-2#entry1353754). Would you be willing to send me an e-mail at lee dot hutchinson at arstechnica dot com?

  16. Hi Lee,

    Archimago can best answer this, but from what I saw they are not measuring the same thing. The Stereo.net.au post looks to be measuring ambient noise as a voltage.

    Archimago is showing the transfer function, which is what anyone into audio reproduction should be concerned about.

    Bottom line is always this: If you get a powerful enough measurement device you will always find SOME difference. It's basically a red herring argument being made.

    The main issue is that there is no relative scale in dBu. That's a serious problem. This is a classic example of someone with a scope but without the proper knowledge to correlate it to meaningful terms.

    Another problem is that the Audioquest Pearl is referred to as CAT7. It is not a CAT7 cable. It can't be, because of the termination: CAT7 cables do not, by spec, have an 8P8C connector. There is no such thing as a little bit pregnant.

    It would have been nice to see a $12 BJC CAT6E cable also tested for a true apples-to-apples data point.



  17. Yes, sound was improved too by using Network cable cat5e as noted in my review, particularly the bass.

    Waiting for A&L to supply me the second review sample to confirm your experience.

  18. 700$ ethernet cable can Solve any errors on the gramatics and orthocraps sendin emails trought them.
    Any text is more poetic and methapors are slicker.

  19. but the sound is really different, even if you don't know why (which is interesting but secondary)

    Replies
    1. If you insist it's "different" :-). I'm not sure I hear that!
