The audio industry has been transformed by computer audio. Getting good sound from computer audio is a complex challenge, one that audio designers such as Gordon Rankin (designer at Wavelength Audio) have worked hard to solve. At Amir Audio we have had good results with the Roon Ready Weiss DAC 501/502. The Roon software (in Roon Ready mode) uses an Ethernet connection between the computer (or a music server such as the Weiss MAN301) and the DAC. We have never had good results from ripped files, so we use only the Tidal streaming service. Tidal on Roon lets you play all your favorite music in high-fidelity sound quality.
In an electrical system, a ground loop or earth loop occurs when two points of a circuit are intended to have the same ground reference potential but instead have a different potential between them. This can be caused, for example, in a signal circuit referenced to ground, if enough current is flowing in the ground to cause two points to be at different potentials.
Ground loops are a major cause of noise, hum, and interference in audio, video, and computer systems. Wiring practices that protect against ground loops include ensuring that all vulnerable signal circuits are referenced to one point as ground. The use of differential connections can provide rejection of ground-induced interference. Removing safety ground connections to equipment in an effort to eliminate ground loops also eliminates the protection the safety ground connection is intended to provide.
A ground loop is caused by the interconnection of electrical equipment that results in there being multiple paths to ground, so a closed conductive loop is formed. A common example is two pieces of electrical equipment, A and B, each connected to a utility outlet by a 3 conductor cable and plug, containing a protective ground conductor, in accordance with normal safety regulations and practice. This only becomes a problem when one or more signal cables are then connected between A and B, to pass data or audio signals from one to the other. The shield of the data cable is typically connected to the grounded equipment chassis of both A and B, forming a closed loop with the ground conductors of the power cords, which are connected through the building utility ground wire. This is the ground loop.
In the vicinity of electric power wiring there will always be stray magnetic fields oscillating at the utility frequency, 50 or 60 hertz. These ambient magnetic fields passing through the ground loop will induce a current in the loop by electromagnetic induction. In effect, the ground loop acts as a single-turn secondary winding of a transformer, the primary being the summation of all current carrying conductors nearby. The amount of current induced will depend on the magnitude of nearby utility currents and their proximity. The presence of high power equipment such as industrial motors or transformers can increase the interference. Since the wire ground loop usually has very low resistance, often below one ohm, even weak magnetic fields can induce significant currents.
Since the ground conductor of the signal cable linking the two pieces of equipment A and B is part of the signal path of the cable, the alternating ground current flowing through the cable can introduce electrical interference in the signal. The induced alternating current flowing through the resistance of the cable ground conductor will cause a small AC voltage drop across the cable ground. This is added to the signal applied to the input of the next stage. In audio equipment such as sound systems, the 50 or 60 Hz interference may be heard as a hum in the speakers. In a video system it may cause onscreen “snow” noise, or syncing problems. In computer cables it can cause slowdowns or failures of data transfer.
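To get a feel for the magnitudes involved, the mechanism described above can be sketched numerically. All the figures below (stray flux density, loop area, loop and shield resistances, line-level reference) are invented round numbers for illustration only; real installations vary enormously.

```python
import math

# Illustrative, assumed values -- not measurements of any real system.
f = 60.0          # utility frequency, Hz
B = 1e-6          # stray magnetic flux density through the loop, tesla (~1 uT)
area = 0.5        # area enclosed by the ground loop, m^2
r_loop = 0.5      # total resistance around the closed loop, ohms
r_shield = 0.05   # resistance of the signal cable's shield segment, ohms

# Peak EMF induced in a single-turn loop by a sinusoidal field,
# from Faraday's law with B(t) = B * sin(2*pi*f*t):
emf = 2 * math.pi * f * B * area

# Current circulating in the low-resistance ground loop
i_loop = emf / r_loop

# Hum voltage in series with the signal: the drop across the shield
v_hum = i_loop * r_shield

print(f"induced EMF : {emf * 1e6:.1f} uV")
print(f"loop current: {i_loop * 1e6:.1f} uA")
print(f"hum voltage : {v_hum * 1e6:.1f} uV")

# Relative to an assumed 300 mV line-level signal this is roughly -84 dB,
# low but potentially audible through a sensitive, high-gain system.
print(f"hum vs 300 mV signal: {20 * math.log10(v_hum / 0.3):.1f} dB")
```

Even with these modest assumed field strengths, the sub-ohm loop resistance lets a surprisingly large current circulate, which is why the resulting hum can be audible despite the tiny induced EMF.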
Optimum performance of the highly sophisticated electronics that comprise today’s high-end audio system is hugely dependent upon a clean and stable supply of electrical current. Conversely, the modern day world, with its myriad of electrical devices, does much to degrade the electricity that we receive.
AC power quality has a significant effect on sound quality, but most AC filters and AC regenerators on the market kill dynamics.
We recommend using class-D AC regenerators such as Pure Power, or Victron Energy inverter/chargers.
Every component in an audio system is sensitive to AC polarity. Ensuring that your electronics are connected to the AC line with the correct polarity is essential if you want to realize the full potential of your system.
Some believe correct AC polarity is indicated by the lower chassis voltage when the ground is lifted, but there is no need to measure the chassis-to-ground voltage: in some rare instances, the orientation with the higher reading will produce better sound from the component. The best way is to listen and detect the correct AC polarity by ear.
Plug the component into the wall socket and turn on the power switch. Listen to music, then unplug the component from the wall socket, reverse the position of the plug in the wall socket, and listen again. The correct AC alignment is the one that gives the best sound.
If you aren’t sure how to set up your speakers, and you want the best sound quality you can get out of them, then give strong consideration to hiring Bob. He is the expert in The Rational Speaker Placement process. He will place and position your speakers in the ideal location in your room for sonic excellence, resulting in maximized performance of not just your speakers, but your entire hi-fi system.
This is an article that first appeared in our new online PDF, downloadable magazine The Occasional
last fall in its inaugural edition. We’ll be rolling out articles from
it over the next week in anticipation of our upcoming second issue
which is scheduled for publication February 3rd. We hope you enjoy this
new, exclusive content, and that you’ll check out the Winter Edition of The Occasional when it drops: 140 pages of fresh high fidelity reviews, audiophile gear highlights, lifestyle stories, and editorial opinion.
By Peter Qvortrup
Since the dawn of time music has played an important part
in human life, whether at the top or bottom of society, people have
participated, and listened to music in its many forms as it has
developed through countless generations to the present day.
Instruments have developed to allow increasingly complex, and expressive music forms until the peak was reached sometime in the latter part of the 19th century, coinciding with Thomas Alva Edison’s invention of recorded sound. Before successfully inventing recorded sound, Edison must have arrived at a fundamental realization that sound can be entirely characterized in two dimensions. His first cylindrical recording was nothing more than a rough approximation of the changes in pressure amplitude (caused by the modulation of his voice), plotted against a constant time base (generated by the steady turning of a crank). Crude as his technique may have been, sound was recorded, and recognizably reproduced for the first time in history.
Thomas Edison, c.1877. Photo courtesy of the U.S. Library of Congress.
The limiting factor in Edison’s first experiment was not
his idea, but his hardware; further improvements could only come from
refinements in technique.
So here we are, just over 120 years after the invention of sound recording, and how much further have we really got?
When looking back over the past century of audio
developments it is interesting to note that improvements in technique
have not happened simultaneously in all branches of the ‘tree’ that
makes up the music reproduction chain, as we know it today. Rather it
‘jumps’ where one part has suddenly moved forward, only to leave other
parts of the chain badly exposed. I
could mention several examples of this, but suffice to say that a very
clear example was when we went from 78rpm records to 33rpm microgroove
LPs. The replay hardware left a lot to be desired, and eventually this
inadequacy led to a severe reduction in the quality of the software.
Progress is not a straight line and this situation has
repeated itself numerous times over the history of recorded sound, and
as a result, music recording, and its associated recording and
reproduction equipment, have fluctuated greatly, and peaked in several
areas of performance only for the improvements/developments to be
reversed due to the limitations of another branch or branches of the
system. It is, in my opinion, of the utmost importance to study the
historical development of each branch of the music reproduction chain,
in order to ‘glean off’ the best (most interesting?) developments, and
then resurrect them to compare the overall result with what is
considered the ‘state of the art’ today. It makes for an interesting
comparison, and teaches important lessons in why methods, and
technologies were discarded – mostly, I am sorry to say – due to
economic considerations rather than qualitative ones.
To try to visualize the argument, I have constructed this chart to attempt to demonstrate how I see the peaks, dips and troughs of development in the five main areas of the music reproduction chain. Firstly to demonstrate the relative lack of real progress in sound quality in absolute terms (so much is, and has been made of the High-End Audio industry’s obsession with “State-of-the-Art” technology, that this serves as a potent reminder of the numerous areas where current technology is anything but “State-of-the-Art”), in order to analyze why, historically speaking, developments took a turn for the worse. Remember this is a very broad historical/empirical overview, not a detailed study. A book will be written on the subject if ever I find the time, so here are the five “branches” as I would define them.
To support, and understand the thesis it is important to
stress that the positions of each branch represents what should be
considered the best effort available at the time, not necessarily what
was well known or prominently marketed. A view has also been taken on
the ideas, and principles behind the equipment in question, so that
advances in technique – when applied to the older ideas – have also been taken into account.
The points system is applied to simplify the overview, and
allow for comparison between the decades in absolute terms, and as can
be seen we have not moved forward since the 1950s. In fact, we have
reversed most of the absolute peaks of development relating to the
position of each branch wave on the above scale. It is fairly easy to
see that the audio quality in its absolute form peaked around 1960, and
that this was primarily due to the superior quality of the recording,
and software manufacture. The decline since then has a number of
explanations, some of which I will attempt to address in the rest of this article.
With the benefit of hindsight, both recording, and software technology reached a peak during the 1950s, and by the early 1960s recording, and software quality were at their best, which goes some way towards explaining why records, and recordings from this period are so highly desired, and prized by today’s music lovers, collectors, and audiophiles (don’t you just hate that word).
The addictive Long Play microgroove.
Relative to the quality standards of recording, and
software manufacture, the replay equipment was at a very crude stage in
its development at the time of the introduction of the microgroove mono
LP in 1948/49, and developed very slowly until the early 1970s. In my
estimation, it really only reached its peak about 1985, with the
introduction of the Pink Triangle, and Voyd three-motor turntables, the
Helius tonearm, the van den Hul diamond stylus shape, and Mr. Kondo’s Io
cartridge with its titanium cantilever. It is therefore fairly easy to
understand why record companies could reduce the quality of the
LP-software (in many cases this was actually an “improvement” in the
sense that you could now play the record without severe mistracking)
without noticeable quality loss. Anyone
who has tried to play a Decca or RCA opera record, such as the
Decca-recorded RCA release of “The Force of Destiny” with the Rome Opera
conducted by Previtali, and di Stefano/Milanov on RCA, will seriously
wonder how this could possibly have been tracked by an average 1959
tonearm/cartridge combination. No wonder Decca had such a high return
rate of their LPs at the time.
Amplification reached its peak earliest of all in the
1920s or early 1930s, and only by 1989/90 had it re-established or
exceeded the quality level with the re-introduction of the single-ended
triode amplifier (SET). As a
side note here, it has always amazed me that no magazine has ever made a
challenge of the decades, where they compare what could be considered
the best amplifier at the end of each decade, to see if we have indeed
moved forward in absolute terms. I have done this comparison on several
occasions which is one of the reasons why I decided to write this
article. I can tell you it is a more educational experience than any review.
A similar comparison should be made between older, and
newer models of different manufacturers’ products, to establish whether
they have actually moved forward, or whether (as might be
suspected) their claims of continual progress ring hollow.
Loudspeaker technology was only invented in 1924, and is considered to have peaked in the late 1930s. It has to be remembered that loudspeaker technology is by far the most expensive audio technology to research, and develop, and that most of the really serious development took place in cinema sound, not in home music reproduction systems. It is only with the benefit of hindsight that this becomes really obvious. To me, a sole loudspeaker product stood out in the 1980s: the Snell Type A/III. This is why speaker technology did not drag the result down even further.
Simple wooden boxes.
In reality, if you compare the very best products
available during each decade from about 1930 on, very little progress
has taken place. This is undoubtedly due to the widely disparate levels
of development (or in some cases refinement?), in each of the branches
of the “audio reproduction tree” at any given time, as well as
increasingly commercial considerations regarding cost, finish, and
appearance, as the audio industry started to aspire to commercialism in
the late 1950s and 1960s. Most of these later decisions have not benefited or furthered the goals of “Higher Fidelity.”
It is almost paradoxical that an improvement in one branch
of the system has invariably been counteracted by a severe decline in
another branch – or should I say “have allowed a reduction of quality to
take place unnoticed in another branch” – thereby leaving a kind of
“balance” which has meant that sound quality has not changed much over
the past 30-40 years.
In order to try to understand why real development towards
“High-Fidelity” has not progressed further than it has, the above
historical diagram helps to visualise how I see the deterioration and
improvement in quality that have happened since the introduction of recorded sound.
Our measurement technology has certainly contributed
considerably to the slide in absolute quality; “the same for less” is
the motto that has been applied time, and again when engineers have
discussed measurements as a direct proof of sound quality. This approach
invaded the industry in the 1940s, via David Theodore Nelson
Williamson’s high-feedback amplifier design, and measurements used as
proof of better sound have become increasingly dominant to this day.
Every new technology that has been introduced has
generally started its life just as another branch on the audio system
“tree” has reached its peak, but there has always been another trough
somewhere else to put the blame on when new technology did not provide
the necessary (claimed) improvements in overall performance. Our belief
in the idea of progress has most certainly supported this development.
It is almost certain that when the transition from triode
to pentode was made, software quality was at least partly to blame for
the demise of the triode; the move from 78s to microgroove LPs helped
conceal the real inferiority of the pentode. The pentode was in turn
the victim when the transistor amplifier was introduced. This
happened almost simultaneously with the introduction of stereo, a time
when software, and recording quality was at its absolute peak, so the
reduction in sonic quality that the transistor introduced was more than
counterbalanced by the improvements in software quality.
As we approach modern times, the increasingly well-developed “objectivist” technological dogma, combined with the growing post-war belief in the idea that progress is indeed a straight line, and with the better, and more clever marketing techniques originally used to “rally the troops” during the national emergencies of the 1930s, and 1940s, helped create a shift in the market aimed at higher proliferation, and profitability to the detriment of absolute sound quality. To me, it is perfectly clear that the resources being invested in improving absolute quality had largely evaporated by 1965, as marketing men, and accountants took over the main budget in most commercially-successful companies, a fact that seems to have gone unnoticed by both the audio press, and buying public.
The oft-maligned CD.
Remember the excuses for the Compact Disc in the early
days? “It is too accurate for the rest of the audio chain, and is
therefore showing it up…” or some such nonsense. How about the current
speaker/room interface debate? The room is the “last frontier.” Speakers
would sound perfect if only the rooms they were used in were better. The
fact that most loudspeakers (whether claimed to be of high-end quality
or not) are designed under near-field anechoic conditions compounds the
issue in my mind. What use is there in designing under these conditions
when most people sit at three-to-six times the distance used to measure
speaker response? On top of that, who lives in rooms with acoustics
complementary to an anechoic chamber? These little truths do not seem to
have impacted the designers of modern loudspeakers at all.
The list of creative excuses for the lack of real
research, and thus the questionable or downright incompetent performance
of much of the so-called high quality equipment sold is endless, and a
strongly contributing factor to the early fragmentation of the audio
industry into many companies, each specialising in one branch
(amplifiers, speakers, cables, cartridges, etc.) which allowed a kind of
collusion, where the fault for the poor performance of any piece of
equipment could always be explained away by a lack of compatibility with
the other equipment in whatever system was used to pass judgement.
Synergy became the illusory goal of disillusioned music lovers through
endless “upgrades” until most gave up.
This is another thing that has always made me wonder: why have
reviewers, and hi-fi magazines not applied a review process that
reduces this problem? The solution is simple: you ask the manufacturer
of whatever piece of equipment you want to review to submit a complete
system within which he believes that the specific piece will perform at
its best, and then review the overall result, together with a conclusion
on how the reviewer (or reviewers) feel that the manufacturer has
achieved his stated goals with the product in question. This would
remove 90 per cent of manufacturer excuses, and give the reader (end
consumer) a much better idea of what to choose.
If more time had been spent investigating the fundamental problems, and less time on constructing a great marketing story, and subsequent excuses (where necessary), perhaps we would have better, and more satisfying music reproduction equipment today.
Back once again: The Single-Ended Triode.
In 1948-49 the single most important negative development,
after the abandonment of the single-ended triode, and the introduction
of negative feedback – invented in 1928 by Telefunken – was the 1947
Williamson article in Wireless World that launched the
foundations of the single most important theory now ruling audio design:
Specifications as a measure of sonic quality.
This theory was quickly picked up by great marketers like
Harold Leak, Peter Walker (of Quad), A. Stewart Hegeman, David Hafler and
others.
This was, in my opinion, the single most damaging theory to be imposed on audio design. This
suggestion that measured quality (as exemplified by
distortion, bandwidth, and noise measurements as we know them) is
directly related to sound quality became the most compelling theory
going. Why? Because it is very simple, and its very simplicity makes it
the most powerful marketing tool ever handed to the audio industry. It provides the manufacturer with “proof” that his audio product is “better” than the competition. What more could you want?
It has single-handedly created the ideological basis for thousands of minute, and incremental quality reductions. In part because it has made it possible to make products that measure the same at increasingly lower prices, often using technologies poached from other branches of electronics – which in the absence of any real research into music reproduction techniques – provides a powerful substitute, and excellent surrogate to feed an unsuspecting, and gullible public seeking a music system of quality. It is this public who can be easily fooled into believing that when two products measure the same then they ARE the same, and the buying decision then becomes a simple question of price. How brutally simple, and incredibly effective.
To oversample or not to oversample?
Digital Audio is the latest example of how the high-fidelity industry has distorted the concept of research, and improvement. Since the introduction of the Compact Disc in late 1982, the technology has entered into the usual numbers game. 20-bit is better than 16-bit, 96kHz is better than 44.1kHz, 24-bit is better still, as is a 192kHz sampling rate, and so on. In addition to this, music is now becoming virtual, and very few customers actually know what level of quality they are downloading onto their computer. Claims from the file-providing services that their files are stored using the ultimate lossless, high-resolution codec, coupled with the convenience of a song at your very fingertips, make this path a rather compelling one for music consumers. And do all the hardware, and software upgrades necessary to play back these new file codecs make manufacturers happy or what? Believe me when I say that a critical-listening session comparing high-resolution files to a well-recorded CD on a decent CD transport quickly dispels any claims of higher bitrates as just another marketing-based illusion.
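As a rough yardstick for what those marketing numbers denote, the standard textbook idealizations can be computed directly. This is a minimal sketch of the theoretical figures (quantization SNR of an ideal converter, Nyquist bandwidth), not a claim about how any real DAC, or any of the formats discussed here, actually sounds.

```python
def dynamic_range_db(bits: int) -> float:
    """Theoretical SNR of ideal uniform quantization: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def nyquist_khz(sample_rate_khz: float) -> float:
    """Highest representable frequency: half the sampling rate."""
    return sample_rate_khz / 2

for bits in (16, 20, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB theoretical dynamic range")

for rate in (44.1, 96.0, 192.0):
    print(f"{rate} kHz: audio bandwidth up to {nyquist_khz(rate):.2f} kHz")
```

The point of the exercise: 16-bit/44.1kHz already yields roughly 98 dB of theoretical dynamic range and bandwidth past 22 kHz, figures that exceed most playback chains and listening rooms, which is exactly why bigger numbers on the box do not translate automatically into audible improvement.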
Peter Qvortrup at home, May 2017.
Conclusion, Or Some Such
I started writing this discussion piece in the first half
of the 1990s, and it has stayed on my computer for years, being taken
out, and “brushed off” from time to time. The piece you have just read
was intended as a “taster” to a much longer series of articles
discussing in much more detail each of the five branches of audio, and
music recording, and reproduction. I still intend to write these
articles delving into each of these technologies, so perhaps we will revisit the subject in due course.
Are You on the Road to Audio Hell? The Quiz

We audiophiles are always trying to sharpen our skills at evaluating audio components. However, the very methods we use can result in precisely the opposite of the effect desired, namely boredom or frustration with our audio system before we have even paid for it; in other words, AUDIO HELL. Take the following short quiz to help determine if you have travelled this road lately.

1. Do you try to arrange instantaneous A/B comparisons of brief segments of music to maximize your memory retention?
2. Do you bring the same group of "reference" test recordings to each audition, in an effort to sort out specific performance capabilities and to prevent any disorientation or confusion which could result from using music with which you are unfamiliar?
3. Do you avoid using music of which you are particularly fond, so that you can properly attend to objective analysis rather than be distracted by the music's pleasures and passions?
4. Do you believe that the true function of an audio system is to recreate music, and that therefore you can only accurately evaluate audio playback if you have an extensive knowledge of live music performance?
5. Do you believe that if your evaluation addresses such matters as frequency range, signal/noise ratio, stage size and depth, instrumental separation and balance, timbre, and textural clarity, whatever other purely musical considerations there may be will take care of themselves?
6. Has it been your experience that some speakers are especially suitable for rock, others for classical, and perhaps others for intimate jazz? How do you explain this phenomenon? Is this more or less inevitable?
7. When you ask yourself, "What should be the correct reference, live music or the recorded session?" do you conclude that it is one or the other?
Are you comfortable with your answer to this question?

If you have answered "yes" to at least three of these questions, you can feel comfortable knowing that, like many other audiophiles, you are on the train to AUDIO HELL. If you answered "yes" to most of them, you may be beyond redemption; but we are here to help, and there is always hope. If you answered "yes" to question #3, you probably require the services of an audio exorcist; for if the purpose of your music playback system isn't to involve you emotionally, then why aren't you shopping at Sears? Before we take a more critical look at the implications of this quiz and your answers, it might be useful to review the past few years to see how we got into this mess in the first place.

A Brief History

As the audio industry grew out of its infancy in the 1950s and began to aspire to commercialism in the 1960s, an evaluation and review procedure was adopted which initially attempted to mate the measured superiority of the developing technologies with the goal of better sound quality. It appeared that a conspiracy of purpose was entered into by the press and many companies in the industry, based on the thesis that technical perfection – as demonstrated by measurements of particular specifications assumed to be relevant as well as correctly obtained – also led to sonic perfection. This thesis had the advantage that winners in the performance race could easily be decided by the evidence of such measurements. Such "proof" made possible facile marketing strategies which have persisted to the present despite overwhelming evidence to the contrary provided by our own ears in the most casual of listening auditions. By the mid-1970s the development of this thesis had reached a stage with audio components where technical specifications were making further improvements practically impossible.
The race for lower distortion, faster slew rates, better damping factors, wider bandwidths and more power had caught up with itself and ground to a halt.

At about this point, a number of smaller publications appeared which abandoned this thesis of measured performance (a kind of technical perfection) in favour of a more subjective approach, in which listening to music through the components was considered the more useful tool, and its approximation to "live music" the most sought-after criterion. The editorial position of some of these new "underground" magazines considered measurements as irrelevant or even damaging to the evaluation process, observing that audio components which measure the same can sound strikingly different. The result was that the method of auditioning equipment became more complicated; magazine reviewers spent hours listening to and comparing different components in order to decide which sounded best. Out of this history was born the "Golden Ear," in whose judgement many consumers entrusted their available income. Every month a new product would appear which was hailed as the "best sound," and frequently the opinions of different magazine experts varied widely. Consumers might then choose an expert that they trusted, or become increasingly confused, or give up altogether, returning to the safer criteria of measurements.

By the mid-1980s the merry-go-round had reached such a pace that most manufacturers resorted to placing their efforts in the tried and true marketplace of seductive advertising slogans and images, and hi-tech cosmetics and gadgetry. It had become too difficult to compete otherwise. The rule was that if the component and its advertised image looked expensive, then it must sound good as well.
(Not least of the distractions the audio community has suffered was the switch from analog to digital, which led to such manifestly preposterous notions as "digital ready" speakers and amplifiers, as well as a nearly successful campaign to re-write the definition – as well as the experience – of the term "dynamic".)

As far as we know, there has been no rigorous critique of the critical methodology long in place, a method which we believe has contributed to the audio hell in which most of us find ourselves. None of the current methods now in favour – measurements and specifications, blind tests, double-blind tests, boogie factors, or comparisons to "real" music – has been definitive. Nor has there been a serious alternative offered which categorically presents an orderly, reasonably conclusive methodology by which we can evaluate our components and playback systems. This is exactly what we propose in this essay.

We believe that the basic reason why so many consumers are in AUDIO HELL, or on their way, is that they are confused about what should be the objective of their audio system, and therefore have adopted a method for the evaluation of audio components which often turns out to be counter-productive. If you agree that the goal of your audio system should be to involve us emotionally, physiologically and intellectually with a musical performance, then we would like to suggest the following description for its objective:

An ideal audio system should recreate an exact acoustical analog of the recorded program.

If so, then it would be very useful if we had meaningful knowledge of exactly what is encoded on our recordings. Unfortunately, such is not possible. (This assertion may appear casually stated, but much of the following argument depends on its truth; we therefore invite the closest possible scrutiny.)
Even if we were present at every recording session, we would have no way of interpreting the electrical information which feeds through the microphones to the master tape – let alone to the resulting CD or LP – into a sensory experience against which we could evaluate a given audio system. Even if we were present at playback sessions through the engineer's monitoring (read: "presumed reference") system, we would be unable to transfer that experience to any other system evaluation. And even if we could hold the impression of that monitoring experience in our minds and account for venue variables, such knowledge would turn out to be irrelevant in determining system or component accuracy, since the monitoring equipment could not have been accurate in the first place. (More about this shortly.) But if this is true, how can we properly evaluate the relative accuracy of any playback system or component?

The old method: comparison by reference

We should begin by examining the method in current favour. The usual procedure is to use one or more favoured recordings, playing slices of them on two different systems (or the same system alternating two components, which amounts to the same thing), and then deciding which system (or component) you like better, or which one more closely matches your belief about some internalized reference, or which one "tells you more" about the music on the recording. It won't work! Not even if you use a dozen recordings of presumed pedigree; not even if you compare the stage size, frequency range, transient response, tonal correctness, instrument placement, textural clarity, etc.; not even if you compare your memory of your emotional response with one system to that of another – it makes little difference. The practical result will be the same: what you will learn is which system (or component) more closely matches your prejudice about the way a given recording ought to sound.
And since neither the recordings nor the components we use are accurate to begin with, this method cannot tell us which system is more accurate! It is methodological treason to evaluate something for accuracy against a reference with tools which are inaccurate - not least of which is our memory of acoustical data. Therefore, it is very likely, to the point of certainty, that a positive response to a system using this method is the result of a pleasing complementarity between recording, playback system, experience, memory, and expectation; all of which is very unlikely to be duplicated, given the extraordinarily wide variation which exists in recording method and manufacture. (Ask yourself, when you come across a component or system which plays many of your "reference" recordings well, whether it also plays all your recordings well. The answer is probably "no"; and the explanation we usually offer puts the blame on the other recordings, not the playback system. And, no, we're not going to argue that all recordings are good, but that all recordings are much better than you have let yourself believe.) Recognising that many will consider these statements audiophile heresy, we urge you to keep in mind our mutual objective: to prevent boredom and frustration, and to keep our interest in upgrading our playback system enjoyable and on track. To this end it becomes necessary that we lay aside our need to have our beliefs about the way our recordings and playback systems ought to sound verified by our methodology. As we shall see, marriage to such beliefs practically guarantees us passage to AUDIO HELL.
It is our contention that, while nothing in the recording or playback chain is accurate, accuracy is the only worthwhile objective; for when playback is as accurate as possible, the chance for maximum recovery of the recorded program is greatest; and when we have as much of that recording to hand - or to ear - then we have the greatest chance for an intimate experience with the recorded performance. It only remains to describe a methodology which improves that likelihood. (This follows shortly.) Listeners claiming an inside track by virtue of having attended the recording session are really responding to other, perhaps unconscious, cues when they report significant similarities between recording session and playback. As previously asserted, no-one can possibly know in any meaningful way what is on the master tape or the resulting software, even if they auditioned the playback through the engineer's "reference" monitoring system. Anyone who thinks that there exists some "reference" playback system that sounds just like the live event simply isn't paying attention, or at best doesn't understand how magic works. After all, if it weren't for the power of suggestion, hi-fi would have been denounced decades ago as a fraud. Remember those experiments put on by various hi-fi promoters in the fifties, in which most of the audience "thought" they were listening to a live performance until the drawing of the curtain revealed the Wizard up to his usual tricks? The truth is the audience "thought" no such thing; they merely went along for the ride without giving what they were hearing any critical thought at all. It is the nature of our psychology to believe what we see and to "hear" what we expect to hear. Only cynics and paranoids point out fallibility when everyone else is having a good time. Another relevant misunderstanding involves the correct function of "monitoring equipment".
The purpose of such equipment is to get an idea of how whatever is being recorded will play back on a known system, and then to make adjustments in recording procedure. It should never be understood by either the recording producer or the buyer that the monitoring system is either definitive or accurate, even though the engineer makes all sorts of placement and equipment decisions based on what the monitoring playback reveals. They have to use something, after all; and the best recording companies go to great lengths to make use of monitoring equipment that tells them as much as possible about what they are doing. But no matter what monitoring components are used, they can never be the last word on the subject; and it is entirely possible to achieve more realistic results with a totally different playback system - for example, a more accurate one. Notice "more accurate," not "accurate." It bears repeating that there is no such thing as an accurate system, nor an accurate component, nor an accurate recording. Yet as axiomatic as any audiophile believes these assertions to be, they are instantly forgotten the moment we begin a critical audition.

The proposed method: Comparison by Contrast

When auditioning only two playback systems using the usual method, we will have at least a 50% chance of choosing the one which is more accurate. However, evaluations of single components willy-nilly test the entire playback chain; therefore efforts to choose the more accurate component are compounded by the likelihood that we will be equally uncertain as to the accuracy of each of the system's associated components, if for no other reason than that they were chosen by a method which only guarantees prejudice.
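The compounding the paragraph above describes can be made concrete with a toy calculation (ours, not the essay's): if each component in the chain was chosen by a method that gives only a 50% chance of picking the more accurate option, the probability that the whole chain is made of the more accurate choices shrinks multiplicatively with the number of components.

```python
# Toy illustration of compounding uncertainty (not from the essay itself).
# If each of n components was chosen with only a 50/50 chance of being the
# more accurate option, the chance that ALL of them were the more accurate
# choice is 0.5 raised to the n-th power.

def chance_all_accurate(n_components: int, p_single: float = 0.5) -> float:
    """Probability that all n independent choices picked the more accurate option."""
    return p_single ** n_components

for n in (1, 3, 5):
    print(f"{n} components: {chance_all_accurate(n):.3f}")
```

With five components chosen this way, the chance that the whole chain consists of the more accurate choices is only about 3% - which is the essay's point about a method that "only guarantees prejudice" propagating through the system.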
How can we have any confidence, having chosen one component by such a method, that its presence in the system won't mislead us when evaluating other components in the playback chain, present or future? The way to sort out which system or component is more accurate is to invert the test. Instead of comparing a handful of recordings - presumed to be definitive - on two different systems to determine which one coincides with our present feeling about the way that music ought to sound, play a larger number of recordings of vastly different styles and recording techniques on two different systems to hear which system reveals more differences between the recordings. This is a procedure which anyone with ears can make use of, but it requires letting go of some of our favoured practices and prejudices. In more detail, it would go something like this: line up about two dozen recordings of different kinds of music - pop vocal, orchestral, jazz, chamber music, folk, rock, opera, piano - music you like, but recordings with which you are unfamiliar. (It is very important to avoid your favourite "test" recordings, presumed to tell you what you need to know about some performance parameter or other, because doing so will likely only serve to confirm or deny an expectation based on prior "performances" you have heard on other systems or components. More later.) First with one system and then the other, play through complete numbers from all of these in one sitting. (The two systems may be entirely different or have only one variable, such as cables, amplifier, or speaker.) The more accurate system is the one which reproduces more differences - more contrast - between the various program sources. To suggest a simplified example, imagine a 1940s wind-up phonograph playing recordings of Al Jolson singing "Swanee" and The Philadelphia Orchestra playing Beethoven.
The playback from these recordings will sound more alike than LP versions of these very recordings played back through a reasonably good modern audio system. Correct? What we're after is a playback system which maximizes those differences. Some orchestral recordings, for example, will present stages beyond the confines of the speaker borders; others tend to gather between the speakers. Some will seem to articulate instruments in space; others present them in a mass, as if perceived from a balcony. Some will present the winds recessed deep into the orchestra; others up front. Some will overwhelm us with a bass drum of tremendous power; others barely distinguish between the character of timpani and bass drum. In respect to our critical evaluation process, it is of absolutely no consequence that these differences may have resulted from performing style or recording methodology and manufacture, or that they may have completely misrepresented the actual live event. Therefore, when comparing two speaker systems, it would be a mistake to assume that the one which always presents a gigantic stage well beyond the confines of the speakers, for example, is more accurate. You might like - even prefer - what that system does to staging; but the other speaker, because it is realizing differences between recordings, is very likely more accurate, and in respect to all the other variables from recording to recording may turn out to be more revealing of the performance. Some pop vocal recordings present us with resonant voices, others dry; some as part of the instrumental texture, others envelop us, leaving the accompanying instruments and vocals well in the background; some are nasal, some gravelly, some metallic, others warm.
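The "more contrast wins" criterion can be sketched as a toy model (ours, purely illustrative - the essay prescribes listening, not measurement). Suppose a listener logs a perceived stage-width score for each recording on each system; under the essay's criterion, the preferable system is the one whose scores vary more across recordings. All names and numbers below are invented.

```python
import statistics

# Hypothetical listening notes: a perceived "stage width" score
# (arbitrary 0-10 scale) for the same varied recordings on two systems.
stage_width = {
    "system_A": [7, 7, 8, 7, 8, 7],   # every recording sounds similarly big
    "system_B": [9, 3, 6, 8, 2, 7],   # the recordings differ sharply
}

def contrast_score(scores: list) -> float:
    """Spread across recordings; higher means more differences revealed."""
    return statistics.pstdev(scores)

# Under the essay's criterion, prefer the system showing more contrast.
winner = max(stage_width, key=lambda name: contrast_score(stage_width[name]))
print(winner)
```

Note that system_A might well be preferred under Comparison by Reference (it always delivers the big stage a listener expects); the contrast criterion picks system_B precisely because it lets the recordings differ.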
The "Comparison by Reference" method would have us respond positively to that playback system, together with the associated "reference" recording, which achieves a pre-conceived notion of how the vocal is presented and how it sounds in relation to the instruments, in regard to such parameters as relative size, shape, level, weight, definition, et al. Over time, we find ourselves preferring a particular presentation of pop vocal (or orchestral balance, or rock thwack, or jazz intimacy, or piano percussiveness - you name it) and infer a correctness when it is approximated by certain recordings. We then compound our mistake by raising these recordings to reference status (pace Prof. Johnson), and then seek this "correct" presentation from every system we later evaluate; and if it isn't there, we are likely to dismiss that system as incorrect. The problem is that since neither recording nor playback system was accurate to begin with, the expectation that later systems should comply is dangerous. In fact, if their presentations are consistently similar, then they must be inaccurate by definition, simply because, whether by default or intention, no two recordings are exactly similar. And while there are other important criteria which any satisfactory audio component or system must satisfy - absence of fatigue being one of the most essential - very little is not subsumed by the new method of comparison offered here.

The Hell of Conformity

The methodology of Comparison by Reference will necessarily result in an audio system which imbues a sameness, a sonic signature of sorts, that ultimately leads to the boredom which illuminates AUDIO HELL. The explanation for this lies in the fact that there are qualitative differences from recording to recording - regardless of the style of music - which have the potential to be realized or not, depending on the capability of the playback system.
(This is one of the undisputed areas where the superiority of LP to CD is evident, in that there is an unmeasurable, but clearly audible, sameness - a sonic conformity of sorts - from CD to CD which does not persist to a similar degree with LP.) A significant part of the attraction to CD is its conformity to an amusical sense of perfection and repeatability: no mistakes in performance, and a combined recording and playback "noise" lower than the ambient noise existing in any acoustical environment where real music is enjoyed. (This should not be taken as a "sour grapes" apology for LP surface noise.) We all know listeners whose entire attention in audio system evaluation is directed to the presence of noise, or to the need for absolute sameness from playback to playback, rather than to the playback of music. Their common complaint is "this recording didn't sound that way the last time I heard it." Have you ever considered that the search for perfection and the need for conformity are head and tail of the same coin, doubtless minted in the worst part of our human character? It remains only for us to be aware of how these "virtues" operate on us, how we are used by them, and in turn make ourselves into something that much less human. (Star Trek has been addressing these issues since the First Generation.) Perhaps civilization's greatest enemy is not war, disease, or stress after all; it's boredom! This is why we must take the time from our daily routines to relax and reinvigorate ourselves by listening (for those of us not talented enough to play) to music. For this to happen effectively, the playback equipment must ensure the individuality of each recording. Otherwise boredom - a very close relation of conformity and a direct descendant of colourised, sanitized sound - will result.
This stuff is as subtle as it is insidious; it will always be there for us to grapple with; and grapple we must, or we will end up like the tranquilizing acoustic wallpaper much of our music is rapidly becoming ... or worse.

Encouragement Required

Qualitative differences are easily ignored if our methodology and goal are to achieve an identity with a reference; and our habit of listening for similarities with a reference will make for some awkward moments as we trek out trying to sort out matters of contrast. The latter requires a much broader attention span, and invites every conceivable intellectual and emotional connection we can make with not just one or two recordings but many, and not just with their analogous counterparts in genre but with a range of wildly different styles, venues, and recording methods. When our attention is directed to similarities [between that which is under evaluation and another system, or our memory of a live music reference, or of the "best-ever" audio], we naturally focus on vertical (frequency domain) or static (staging) determinants. But the sonic signature of sameness is not only to be found in the frequency domain, which is where we usually think of looking for it and wherein we try to sort out tonal correctness, but in the time domain, where dynamic contrast lives. When our attention is directed to contrasts, we are more likely to focus on musical flow, dynamic resolution, and instrumental and vocal interplay. When we compare for what we take to be tonal correctness using the Comparison by Reference method, we will end up with results not likely to have been on the recording, but rather the effect of the complementarity referred to earlier. When a system is found wanting because it does not uniformly reproduce large stages or warm voices, we will end up with a system which compromises other aspects of accuracy, for not all recordings are capable in themselves of reproducing large stages or warm voices.
When a playback system can reproduce gigantic stages or warm voices from some recordings, and flat, constrained stages or cool voices from others, it follows that such a system is not getting in the way of those characteristics. Using this method of evaluation takes some time, and some getting used to; but then we audiophiles have been known to spend hours sorting out the benefits or damage caused by AC conditioners or isolation devices. More to the point, after the two or three hours it takes to compare any two components by this method, we will have ruled out one of them, permanently! And if we find that neither is the decisive winner, then we can probably conclude that both are sufficiently inaccurate to be excluded from further consideration. In other words, we now have a method by which we can guarantee the correct direction of upgrade toward a more accurate system.

Detail and Resolution

We'd like to briefly examine one of the more interesting misperceptions common to audio critique. Many listeners speak of a playback system's resolving power in terms of its ability to articulate detail, i.e. previously unnoticed phenomena. However, what these listeners are more likely responding to when they say such-and-such has more "detail" is unconnected micro-events in the frequency and time domains. (These are events that, had they been properly connected, would have realized the correct presentation of harmonic structure, attack, and legato.) Because these events are of incredibly short duration, because there is absolutely no analog to such events in the natural world, and because they are now being revealed by the sheer excellence of their audio systems, these listeners believe that they are hearing something for the first time - which they are! And largely because of this, they are more easily misled into a belief that what they are hearing is relevant and correct. The matter is aided and abetted by the apparentness of the perception.
These "details" are undeniably there; it is only their meaning which has become subverted. The truth is that we only perceive such "detail" from an audio playback system, never in a live musical performance. "Resolution", on the other hand, is the effect produced when these micro-events are connected - in other words, when the events are so small that detail is unperceivable. When these events are correctly connected, we experience a more accurate sense of a musical performance. This is not unlike the way in which we perceive the difference between video and film. Video would seem to have more detail, more apparent individual visual events; but film obviously has greater resolution. If it weren't for the fact that detail in video is made up of such large particles compared to the micro-events which exist in audio, we might not have been misled about the term "detail", and would have called it by its proper name, which is "grain". Grain creates the perception of more events, particularly in the treble region, because they are made to stand out from the musical texture in an unnaturally highlighted form. In true high-resolution audio systems, grain disappears and is replaced by a seamless flow of connected musical happenings. [cf. "As Time Goes By", Positive Feedback Magazine, Vol. 4, No. 4-5, Fall '93.]

Development

Returning to our suggested methodology - let's call it "Comparison by Contrast" - we strongly urge resisting the reflex to compare two systems using a single recording. This may require a few practice sessions comparing collections of recordings until you have been purged of the A/B habit, which tends to foster vertical rather than linear attention to the music. If you listen analytically to brief segments of music, switching back and forth, there is no possible way to get a sense of its flow and purpose in purely musical terms.
Music and its performance (which are, or ought to be, inseparable) are very much about the development of expectations which are subsequently prolonged or denied. It is not possible to respond to this aspect of music in an A/B comparison; and it may come as a surprise that the ability to convey this very quality of musical drama is the single most important distinguishing characteristic of audio systems or components. By using the Comparison by Contrast method of evaluating components, we have in place a reliable procedure for sorting out the rest of the playback chain, even in a pre-existing system whose components have not yet been put to the same test. Once you have ruled in a component as being more accurate, it will fall out that some aspect of the sound will be less than completely satisfactory, simply because the more accurate the component, the more revealing it is of the entire playback chain, whose errors become more apparent. The next step is to pick a component of a different function in the system - it is usually easier and more revealing to work from the source - and repeat the Comparison by Contrast method for each component in turn. This includes cables, line conditioners, RF filters, isolation devices, etc., as well as amplifiers, speakers, and source components. The methodology of Comparison by Reference leaves us without a clue as to how to proceed when the inevitable boredom and frustration resulting from its compromises set in. The Comparison by Contrast method, which also results in compromise, as any audio system must, will always offer more hints of a live performance - for this is what is usually recorded - since it has enabled us to get closer to the recording. And as more components are substituted using Comparison by Contrast, the results will be positive in far greater proportion than with Comparison by Reference.
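The component-by-component procedure just described can be sketched in pseudocode-style Python (our illustration only: `audition` stands in for a full Comparison-by-Contrast listening session over two dozen varied recordings, and all component names and scores are invented).

```python
# Sketch of the iterative upgrade procedure: for each slot in the chain
# (working from the source), keep whichever candidate yields the more
# revealing (higher-contrast) overall system.

def upgrade(system, candidates, audition):
    """system: mapping of slot -> current component.
    candidates: mapping of slot -> alternative components to try.
    audition: callable scoring a whole system's contrast (a stand-in
    for a real listening session, not a measurement)."""
    for slot, options in candidates.items():
        for option in options:
            trial = {**system, slot: option}
            if audition(trial) > audition(system):
                system = trial          # rule in the more revealing option
    return system

# Invented toy scorer: pretend each component contributes a fixed amount
# of revealed contrast, so the whole system's score is just their sum.
toy_scores = {"dac_A": 1, "dac_B": 3, "amp_A": 2, "amp_B": 1}
audition = lambda sys: sum(toy_scores[c] for c in sys.values())

system = {"source": "dac_A", "amp": "amp_A"}
upgraded = upgrade(system, {"source": ["dac_B"], "amp": ["amp_B"]}, audition)
print(upgraded)
```

In this toy run the source swap is kept (it raises the system's contrast) while the amplifier swap is rejected, mirroring the essay's point that each substitution either rules a component in or rules it out, permanently.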
By the way, a delightful outcome of continuing to advance your system by the Contrast method is that you will not only be required to broaden your supply of hitherto unfamiliar recordings to comply with the method; you will also find that your own library is already replete with recordings whose sonics are much better than you had previously given them credit for. In this way, you will not only become better acquainted with a hitherto back-shelved portion of your collection; you will discover how much more exciting music is immediately available to you - and voilà, AUDIO HEAVEN. The false prophet which diverts many audiophiles from the road to AUDIO HEAVEN is the notion that their audio system ought to portray each type of music in a certain way regardless of the recording methodology. An accurate playback system plays back the music as it was recorded onto the specific disc or LP being played; it does not re-interpret this information to coincide with some prejudice about the way music ought to sound through an audio system. (This explains why many people think that some speakers are especially suitable for rock and others for classical; if so, both are inaccurate.) To put it another way, you can't turn a toad into a prince without having turned some rabbits into rats. Only if your audio system is designed to be as accurate as possible - that is, only if it is dedicated to high-contrast reproduction - can it hope to recover the uniqueness of any recorded musical performance. Only then can it possibly achieve for the listener an emotional connection with any and every recording - no matter the instrumental or vocal medium, and no matter the message. Boredom and frustration are the inevitable alternatives. Think about it.

Leonard Norwitz
THE AUDIO NOTE CO. (USA)
San Jose, California
January - April 1993

Peter Qvortrup
Audio Note (UK) Ltd.
Brighton, England
August - December 1993

(Revised by L. Norwitz for the present edition from the published essays under the same title in Positive Feedback Magazine, December, January and February 1994.)