PS Audio on "Software Jitter"

Here's an interesting, if brief, article on what Paul McGowan, CEO and co-founder of PS Audio Inc., has dubbed "Software Jitter". While I think it would be more accurately called software-induced jitter, the basic concept is interesting and merits some thought:
Software jitter is audible timing errors in digital audio that happen in response to a CPU or internal processor’s needs for power. As the demands on a piece of equipment’s processor increase, so too do the demands on the power supply feeding that processor, which in turn can cause slight changes in the 1 to 0 transition thresholds and thus you get jitter – which we hear as a loss of naturalness of the music and three dimensional presentation.
The article also calls software jitter "a new form of distortion," but I'm not so sure "new" is the case. Let's focus on the key phrase in the above quote: slight changes in the 1 to 0 transition thresholds. Jitter is a matter of time, or more specifically of timing errors introduced into the data stream, in this case supposedly caused by power supply fluctuations resulting from varying loads on the CPU. Or so McGowan's argument goes. But aren't DACs designed to reduce jitter below audibility? And aren't all bits just bits?

For a nice intro to jitter, I'd point you to John Atkinson's article "Jitter, Bits, & Sound Quality," which first appeared in Stereophile in December 1990. There you will read that even an impossibly tiny variance in timing accuracy can translate into audible degradation:

Uncertainty in the precise timing of that digital one or zero results in a loss of system resolution, with audible effects on the finally recovered analog signal. In November's "Industry Update" (Vol.13 No.11, p.78, see Sidebar), Stereophile's Dutch correspondent Peter van Willenswaard neatly showed how an uncertainty of well below 1ns—one billionth of a second!—in the timing accuracy of a 16-bit digital datastream resulting from an original analog signal sampled every 22.7µs, a time interval nearly 23 thousand times larger, equated with a loss of one bit's worth of resolution.
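As a quick sanity check on that figure, here's a back-of-the-envelope sketch (my own arithmetic, not van Willenswaard's exact derivation): for a full-scale sine wave, the worst-case amplitude error caused by a timing error grows with signal frequency, and you can solve for the largest timing error that keeps the error within one least significant bit of a 16-bit system.

```python
import math

# For a full-scale sine x(t) = A*sin(2*pi*f*t), the steepest slope is
# 2*pi*f*A, so a timing error dt produces an amplitude error of up to
# 2*pi*f*A*dt. That error reaches one LSB of an N-bit quantizer
# (LSB = 2A / 2^N) when dt = 1 / (2^N * pi * f).

def max_jitter_seconds(bits: int, freq_hz: float) -> float:
    """Largest timing error whose worst-case amplitude error stays
    within one LSB of a full-scale sine at freq_hz."""
    return 1.0 / (2 ** bits * math.pi * freq_hz)

dt = max_jitter_seconds(bits=16, freq_hz=20_000)  # 16-bit audio, 20 kHz tone
print(f"{dt * 1e12:.0f} ps")  # roughly 243 ps -- well under a nanosecond
```

So for a 20 kHz tone in a 16-bit system, timing errors need to stay in the low hundreds of picoseconds, which is consistent with the sub-nanosecond figure in the quote above and tens of thousands of times smaller than the 22.7 µs sample period at 44.1 kHz.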
Of course, from a practical perspective, people have been espousing the benefits of minimizing the load on a computer serving music forever, the basic premise being improved sonic performance. If we couple the notion that even slight errors in the timing of data transmission can be perceived as having a negative sonic impact with the idea that software can introduce these timing errors based on the load imposed on the CPU, we can perhaps begin to explain why different media player software can sound different, or why, in some situations on some computers, compressed audio formats sound different from uncompressed formats during playback.

Paul McGowan raises some of these issues:

Differences between music server programs such as iTunes, eLyric, J River, Amarra, Pure Music etc. have as much to do with the way the software itself is written as it does with the computer that is running the program. So, the computer you choose plus the program you use will all affect the sound quality.
He goes on to share how users of the PS Audio PerfectWave DAC Mark II noticed sonic differences between two firmware versions whose only difference was how they managed the front panel display.


Vincent Kars

I must admit the title of his article, "A new form of distortion," made me smile.

This is what has been discussed on a forum like Computer Audio Asylum for half a decade.

But probably new to McGowan.