There's no such thing as digital: A conversation with Charles Hansen, Gordon Rankin, and Steve Silberman. Part 2

(left to right) Steve Silberman, Gordon Rankin, Charles Hansen. photo credit: Steve S. and Stereophile

During our first conversation with AudioQuest's Steve Silberman, Charlie Hansen of Ayre Acoustics, and Gordon Rankin of Wavelength Audio, we talked about the notion that "There's no such thing as digital" (see Part 1). In Part 2 we move on to ask: what can be done to address the problems of high-speed analog signals in a modern audio system?

What can be done to address the problems of high-speed analog signals in a modern audio system?

Charles Hansen: One of the things is to redesign the system architecture. This allows for higher performance at a lower cost. The most sweeping example of this was when Gordon Rankin introduced a DAC that transferred the data in the "asynchronous isochronous mode" that had been part of the USB specification since 1998. The reason that it took so long was that it required a tidal shift in the way that music was distributed, stored, and sold.

"...the only standard for transferring digital data, the S/PDIF connection, was fundamentally flawed."

In the past, Ayre only made one-box digital players. That was because the only standard for transferring digital data, the S/PDIF connection, was fundamentally flawed. The only reason it even existed was that it allowed complex machines rolling off the production line to be tested by connecting a single cable to a specialized piece of test equipment.

But by mixing the audio data with at least two other clock signals (using a technique called biphase-mark encoding), it ensured that jitter would be added to the signal, regardless of how good the transport was. The only true cure would be to have a transmitter with an infinitely fast rise time and a cable with an infinitely wide bandwidth.
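To make the embedded-clock idea concrete, here is a minimal Python sketch of biphase-mark coding (an illustration of the technique, not anyone's production code): every bit cell begins with a level transition, and a '1' adds a second transition at mid-cell, so the receiver must recover its timing from the transition density of the data itself.

```python
def biphase_mark_encode(bits, start_level=0):
    """Biphase-mark coding as used by S/PDIF: every bit cell starts
    with a level transition; a '1' adds a second transition at
    mid-cell.  Returns two half-cell levels per input bit."""
    level = start_level
    out = []
    for bit in bits:
        level ^= 1          # mandatory transition at the cell boundary
        out.append(level)
        if bit:
            level ^= 1      # extra mid-cell transition encodes a '1'
        out.append(level)
    return out

# biphase_mark_encode([1, 0, 1]) -> [1, 0, 1, 1, 0, 1]
```

Because the receiver's clock can only be recovered from the edges of this combined signal, any analog degradation of those edges shows up directly as timing (jitter) error at the far end.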

Since these things clearly don't exist in the real world, many companies devised all kinds of workarounds to try to improve the performance of this ill-conceived data link. They fell into two categories:

1) Throw money at the problem. By using more and more elaborate systems of buffers, phase-locked loops, and other innovations, the jitter levels could be lowered, but never eliminated.

2) A few companies made proprietary links: if you purchased both the transport and the DAC from the same company, the jitter introduced by the interface could be eliminated completely. These all used a separate signal line for the clock, with the master audio clock located in the DAC box and sent upstream to the transport. This is a relatively elegant system, but there was certainly no attempt to create or promote any industry-wide standard for it. In the worst cases the transport's laser would fail, no repair parts would be available, and both transport and DAC would be rendered useless scrap metal.

The customer wants flexibility, but flexibility usually has a cost. In the case of S/PDIF, that cost was higher prices and lower performance. That is 180° opposed to our philosophy, so we never went down that road.

It was only when Gordon [Rankin] cracked the asynchronous USB nut that one could use any low-cost computer as a high-performance transport with any compliant audio DAC. It turned out to be a Herculean task, but Gordon persevered and changed the face of digital audio. There was no question of compatibility and so Ayre introduced our first two-box digital product.

"In theory (at least), the only place where jitter matters is when you are changing domains, from analog to digital or the other way around."

What this really boiled down to was a fundamental change in the architecture of the digital playback system. In theory (at least), the only place where jitter matters is when you are changing domains, from analog to digital or the other way around.

A good counterexample is modern digital video. The cameras shoot in digital video; film is no longer used. All of the editing and effects are done digitally. They can take a film like "O Brother, Where Art Thou?" and add a sepia tone to the entire "film" using digital color correction. Then it is compressed using lossy techniques (streaming video is highly compressed, sort of the MP3 of video, while discs undergo much less compression and yield noticeably better sound and picture quality).

After it is played on your digital player (DVD or Blu-ray), all modern displays (LCD, plasma, and OLED) are digital, so these timing errors don't create problems the way they do in audio, where timing errors during conversion back to analog are equivalent to amplitude errors. Nevertheless, there are still highly significant differences in picture quality, due to noise.
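The equivalence between timing error and amplitude error during D/A conversion can be quantified with a back-of-envelope formula: the worst-case amplitude error is the signal's peak slew rate times the clock error, and for a sine wave A·sin(2πft) the peak slew rate is 2πfA. A small Python sketch with illustrative numbers (not measurements from any particular DAC):

```python
import math

def jitter_amplitude_error(freq_hz, amplitude, jitter_s):
    """Worst-case amplitude error produced when a sine wave of the
    given frequency and amplitude is sampled or reconstructed with a
    clock that is off by jitter_s seconds: error ~= slew rate * dt,
    with peak slew rate 2*pi*f*A."""
    return 2 * math.pi * freq_hz * amplitude * jitter_s

# A full-scale (1.0) 10 kHz tone with 1 ns of clock jitter:
err = jitter_amplitude_error(10_000, 1.0, 1e-9)   # ~= 6.3e-5 of full scale
db = 20 * math.log10(err)                         # ~= -84 dBFS
```

Even a nanosecond of timing error thus leaves an error floor well above the roughly -96 dB quantization floor of a 16-bit system at this frequency's worst case.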

Any noise in the system -- common-mode, differential, ground loops, high-frequency interference, coupling between traces, ground bounce, reflections due to impedance mismatch; there are literally hundreds of sources -- will create the same kinds of subtle distortions that jitter creates with audio.

"The thing to remember is that digital systems are not immune to degradation due to noise."

The thing to remember is that digital systems are not immune to degradation due to noise. They tend to be much more resistant to noise than analog systems, but noise in any system will cause performance degradation. Even when the noise is well below the level required to "flip a bit", it will cause timing errors. This is not the overworked imagination of audio engineers. This is standard textbook theory covered in dozens of standard texts by Ralph Morrison, Henry Ott, Howard Johnson, and many, many others [footnote 1].

Gordon Rankin: Another area to tackle is what is referred to as signal integrity. The signal leaves a transistor or IC and has to make its way across the PC board, component by component, with as little degradation as possible. When you are talking about what makes one transport a "good sounding" one, again we are talking about treating so-called "digital" products as very high-speed analog circuits. The clock frequencies in these units are typically between 10 and 100 MHz. For a square wave, a convenient rule of thumb is that the bandwidth must extend in both directions (higher and lower frequencies) by a factor of at least 10x to preserve waveform fidelity.
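Gordon's 10x rule of thumb can be illustrated by reconstructing a square wave from only the odd harmonics that fit under a given analog bandwidth; the fewer harmonics that pass, the slower and rounder the edges become. A toy Python model (an illustration of the Fourier picture, not a simulation of any actual board):

```python
import math

def bandlimited_square(t, f0, bw):
    """Value at time t of a unit square wave of fundamental f0,
    reconstructed from only the odd harmonics at or below bandwidth
    bw -- a toy model of what a bandwidth-limited link does to a
    'digital' edge.  Fourier series: (4/pi) * sum(sin(2*pi*n*f0*t)/n)
    over odd n."""
    total, n = 0.0, 1
    while n * f0 <= bw:
        total += (4 / math.pi) * math.sin(2 * math.pi * n * f0 * t) / n
        n += 2          # square waves contain only odd harmonics
    return total

# Near an edge (t just after 0), a wider bandwidth gives a much
# steeper rise -- the 10x rule in action for a 1 MHz fundamental:
slow_edge = bandlimited_square(1e-10, 1e6, 1e7)    # 10x bandwidth
fast_edge = bandlimited_square(1e-10, 1e6, 1e8)    # 100x bandwidth
```

Cut the bandwidth to the fundamental alone and the "square" wave degenerates into a sine; this is why a 100 MHz clock pushes the analog design problem toward 1 GHz.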

"When you are talking about what makes a transport a "good" one, again we are talking about treating so-called "digital" products as very high speed analog circuits."

So designing a high-performance digital circuit means that you are essentially designing high-performance analog circuits with a bandwidth extending up to at least 100 MHz, and in some cases all the way to 1 GHz. The traditional rules of PCB layout -- connectors, signal routing, ground planes, solder joints, PCB materials, and even PCB coatings -- break down at these high frequencies.

My first circuit-board experience was with a tech at Ohio Northern. Before the days of personal computers, we printed onto clear 8-1/2"x11" overhead-transparency plastic, made our own top and bottom templates with copper, and then hand-etched and drilled the boards. My first project was a subwoofer electronic crossover for a self-taught class, for which I used the VAX 11/780 to develop a Laplace transform of the speaker system for better phasing and output integration. For the second project they had me design a computer, because I had already built six complete stereos, turntables, and a radial tonearm. So I made a Z80 system with 128 bytes of memory.

After college we used tape to lay out circuit boards at 2x size. It took forever. Along came the PC and things really shot forward, but the first PC layout applications were just OK. The big problem as things got faster was twofold: first, there was no way to measure and verify high-speed links; second, most corners were square or 45 degrees, which causes reflections and degrades the signals.

I think sometime in the '80s we got one of the first packages designed for the personal computer, which was still pathetic in that it was hard to create the plane layers used for grounding and shielding. It did allow multiple-layer designs, with which we did, I think, a 24-layer board. A later version did rounded corners, which made it my go-to tool for a decade or more. But still, in this time period we were plotting 2x-size drawings and sending NC (Numerical Control) drill files on floppy disks.

After the year 2000 things really started to get better for us as the software became more sophisticated and we had flooding [to make power planes and ground planes], rounded traces, and impedance matching. This gave us things like the ability to match S/PDIF 75 ohm inputs and outputs (though it took a lot more than that to fix that problem).

Today we can integrate designs with enclosures and off-board items, and there is more design-rule checking and spacing control, which makes PCB design work better than ever. In the past you would do a board a couple of times, and the radiation patterns, or the square-wave response, or the digital noise would be polluting the analog signals. It was a mess. First boards would look like Frankenstein experiments gone wrong: there would be wire-wrap wire, resistors, caps, and other things all over the board. You would revise it over and over until it was ready for production. Now, with the new tools and ever-smaller components, the job is much easier, with fewer errors and many fewer iterations required.

CH: Again, these are all analog effects and these are the factors that cause the degradation in the digital waveform that can have audible consequences. And again, these are not necessarily taught to digital engineers. There are undoubtedly special classes that address these issues, but they are not in the mainstream of the digital engineering curriculum.

"Any degradation of the digital waveform will have consequences, and when that waveform is degraded, there is no way to restore it properly. Once the fine detail is lost, it is lost forever."

All of these issues are present in both the data source (transport) and the data receiver (DAC). They must be addressed as fully as possible at both ends. Any degradation of the digital waveform will have consequences, and when that waveform is degraded, there is no way to restore it properly. Once the fine detail is lost, it is lost forever.

Of course, that leads us to the fact that transporting the data between two separate boxes requires a cable (unless one plans to use a wireless link, which not only carries potential long-term health risks but also can never match the performance available from a physical data link; even worse, the RFI gets into the circuitry and causes subtle degradation of all kinds of things that affect sound quality).

So I will let Steve talk about cable technology and what is required regarding getting a signal from one box to another while preserving the maximum waveform fidelity possible.

Steve Silberman: As a baseline, preservation of the waveform requires making a cable and connector that conform to the established specification for the application. Bill [William Low, AudioQuest's Founder and CEO/Designer] always says: "Tell us the rules, describe the playing field, and then we'll play a better game within the established guidelines." Whether HDMI, USB, FireWire or Ethernet, Bill never strays from playing by the rules, though he always intends to field better players on a better team.

That doesn't mean that every aspect of the standard design doesn't get intensely scrutinized. Almost always the basics make sense and AQ has no reason to bend the rules. An exception that practically proves the rule is S/PDIF digital coax. The specification calls for a cable with a characteristic impedance of 75 ohms. However, in the real world, S/PDIF coax connections almost always use RCA plugs. Back in the analog video days, this provoked some cable suppliers to claim that they made 75 ohm RCA plugs, which was not only an extremely questionable claim in most cases, but also totally irrelevant, as the plugs would then be inserted into 50 ohm RCA jacks. The RCA plug and jack are, as a result of their shape, by definition 50 ohms.

"The "problem," the reason for caring about impedance at all for such short cables, where transmission-line effects and benefits do not apply, is to try to avoid the reflections caused at every point of impedance change."

The "problem," the reason for caring about impedance at all for such short cables, where transmission-line effects and benefits do not apply, is to avoid the reflections caused at every point of impedance change. From this perspective, if a cable has to go into 50 ohm connectors on both ends, then a 50 ohm cable will significantly reduce the reflections caused at both ends of the cable. The 50 ohm RCA jacks are part of the hardware, and we can only hope that the hardware designers have been aware of the compromises required when maximizing signal integrity and audio quality.
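The reflections Steve describes follow from the standard transmission-line formula: at an impedance step, the fraction of the incident voltage wave that bounces back is (Z2 - Z1)/(Z2 + Z1). A small sketch applying it to the 75 ohm cable / 50 ohm RCA case from the text:

```python
def reflection_coefficient(z_to, z_from):
    """Fraction of the incident voltage wave reflected when a signal
    travelling in impedance z_from meets impedance z_to:
    gamma = (z_to - z_from) / (z_to + z_from)."""
    return (z_to - z_from) / (z_to + z_from)

# A wave in a 75 ohm cable hitting a 50 ohm RCA jack:
gamma = reflection_coefficient(50, 75)   # = -0.2, i.e. 20% reflected

# A matched 50 ohm cable into the same 50 ohm jack reflects nothing:
matched = reflection_coefficient(50, 50)   # = 0.0
```

The sign indicates the reflected wave is inverted; either way, 20% of the edge energy bounces back and forth and lands on top of later transitions, which is exactly the kind of "analog" corruption this conversation is about.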

Once the basics, the foundation -- characteristic impedance, inductance, capacitance, shielding, and such -- have been determined, the many value-added variables, such as conductor quality, the mechanical and phase-integrity aspects of the insulation, dielectric biasing, AQ's Noise-Dissipation System, etc., are implemented according to efficacy and budget.

"There is more than one level of cable performance because as with all things analog, there is no black & white, only shades of gray."

This whole discussion is about how digital is in practice really analog. There is more than one level of cable performance because as with all things analog, there is no black & white, only shades of gray.


Footnote 1
For those who are truly interested in diving in deeper...

High Speed Digital Design: A Handbook of Black Magic
Howard Johnson, Martin Graham, 1993

High Speed Signal Propagation: Advanced Black Magic
Howard Johnson, 2003

Electromagnetic Compatibility Engineering
Henry W. Ott, 2009

EMC for Product Designers, Fourth Edition
Tim Williams, 2007

Grounding and Shielding: Circuits and Interference
Ralph Morrison, 2007

Digital Circuit Boards: Mach 1 GHz
Ralph Morrison, 2012

Solving Interference Problems in Electronics
Ralph Morrison, 1995

Noise and Other Interfering Signals
Ralph Morrison, 1991

COMMENTS
deckeda:

Ask most people about digital audio and they're thinking in terms of software and formats. And while aware of analog's specs, they're really thinking in terms of hardware there.

I thank these gentlemen for reminding us that some basic hardware design still matters. Seems to me, however, that the opportunity still exists to manifest these important aspects in easily measurable ways that are easily understood, if not explained.

What I mean is, we're still in the dark ages of not being able to describe what we hear. With the worst of analog's failures, obvious bogeys such as poor S/N ratio or freq response could be quickly grokked by anyone. But when you begin to describe why an MP3 sucks or why a shitty DAC pales, we fall back on a few analog-y descriptors that cause normal citizens to declare BS.

We need a lexicon before we can universally identify some of these targets on a scale where they're tackled more thoroughly.

Archimago:

"We need a lexicon before we can universally identify some of these targets on a scale where they're tackled more thoroughly."

Sure, lexicon is one thing... But however one describes the subjective qualia, I think many are also concerned about the lack of adequate DATA that provides for ACTIONABLE EVIDENCE that there's even an "issue".

For example, where is the data to suggest that jitter is an AUDIBLE issue in decent equipment in the 2010's needing further research and equipment upgrade?

I certainly commend the engineers for all the hard work and what has been achieved with improving SNR, jitter, etc. over the years. Surely, there must come a point where it's beyond human perception for a decent DAC (we're not talking about Walmart stuff here!). 30 years into the digital audio 'revolution' is an eternity in technological terms... I would imagine that this also means the hardware is quite mature at this point.

jim tavegia:

I still think that FireWire could/should have been the conduit for hi-rez audio DACs, especially with all the USB peripherals in use, but that is just me. The computer industry never really cared about audio, and especially not about audiophiles. Many people have to buy new, higher-wattage computer power supplies (750W and up) to handle the updated video cards today.

I'm glad you made USB work.

Fafner:

The title of this article, "There's no such thing as digital", is appealingly provocative.  Unfortunately, Mr. Hansen got a little carried away by it. Pronouncements such as

 

The thing to remember is that digital systems are not immune to degradation due to noise. They tend to be much more highly resistant to noise than analog systems, but noise in any system will cause performance degradations. Even when the noise is well below the level required to "flip a bit", it will cause timing errors.

 

and

 

Any degradation of the digital waveform will have consequences, and when that waveform is degraded, there is no way to restore it properly. Once the fine detail is lost, it is lost forever.

 

are not true in general, and Mr. Hansen surely knows that.  In a purely digital system, the signal presented to a logic gate is interpreted as "low" or "high" as long as it is either below or above a threshold at the time of a clock signal.  Small variations due to noise or any of "hundreds of sources" have absolutely no impact as long as they are not sufficiently large to "flip a bit".  Likewise, variations in the timing of the clock don't matter as long as they are not so great that a clock signal arrives before signals have settled, in which case they again might cause a circuit to "flip a bit".  This indifference of digital circuits to noise is precisely the great advantage that digital circuits have over analog ones.

 

In case these comments about "threshold", "clocks", "gates", and so on have confused any readers, consider this analogy: Suppose that I ask you to repeat a sequence of integers. I say "four", you say "four". I say "eight", you say "eight". I say "three point one", and you pause and think, "Wait, he said that the sequence would be integers, but 3.1 is not an integer. He must have meant three." You respond "three". Perfect. See how your knowledge that the numbers were supposed to be integers allowed you to correct an error. Once you made that correction, the sequence was restored to perfection -- as if the error had never occurred. If I make a big error -- say, 3.9 -- you are likely to think that I meant 4 instead of 3. As long as my error is not too big, you are able to correct it. Otherwise, you will guess wrong and propagate the error. And so it goes with bits.

 

The one place where these effects do matter is in circuitry at the cusp between analog and digital. Signal levels relative to the threshold on the digital side still do not matter, but timing does. The "hundreds of sources" can affect timing -- creating jitter -- and they can leak into the *analog* side of the interface, creating subtle distortions there. However, a discussion of these effects belongs in an article about analog circuits, not this one.

 

It is true that there is no such thing as digital in the real world, but in purely digital systems the distinction is important only when corruption is so severe that it leads to bit flipping.  When corruption is that large, there is certainly a problem.  But to assert that "once the fine detail is lost, it is lost forever" in purely digital circuits is simply wrong.

isa:

I gotta say I'm thankful I found this piece.  I just began trying to build a good system, and I found this site.  While trying to determine the site's credibility, I found this article.  I now have my answer.
