Determining Proper AC Polarity by Listening to Music

Every component in an audio system is sensitive to AC polarity. Ensuring that your electronics are connected to the AC line with the correct polarity is essential if you want to realize the full potential of your system.

Some believe that correct AC polarity corresponds to the lower chassis voltage (measured to ground potential) with the ground lifted, but there is no need to measure chassis voltage: in some rare instances the orientation with the higher reading will produce better sound from the component. The best way is to listen and detect the correct AC polarity by ear.

Plug the component into the wall socket, turn on the power switch, and listen to music. Then unplug the component, reverse the position of the plug in the wall socket, and listen again.
The correct AC orientation is the one that gives the best sound.

The Art of Rational Speaker Placement Setup Guide By Bob Robbins

Please watch this video:

https://www.youtube.com/watch?v=84Pf0ycbyBM

You can find more information at the following link: https://www.myspeakersetup.com/

What Can Bob Do

If you aren’t sure how to set up your speakers and you want the best sound quality you can get out of them, give strong consideration to hiring Bob. He is the expert in the Rational Speaker Placement process. He will place and position your speakers in the ideal location in your room for sonic excellence, maximizing the performance of not just your speakers but your entire hi-fi system.

Peter Qvortrup: High Fidelity, the Decline of the Decades

This is an article that first appeared last fall in the inaugural edition of our new online, downloadable PDF magazine, The Occasional. We’ll be rolling out articles from it over the next week in anticipation of our upcoming second issue, which is scheduled for publication February 3rd. We hope you enjoy this new, exclusive content, and that you’ll check out the Winter Edition of The Occasional when it drops: 140 pages of fresh high-fidelity reviews, audiophile gear highlights, lifestyle stories, and editorial opinion.

–Rafe Arnott

By Peter Qvortrup

Since the dawn of time, music has played an important part in human life. Whether at the top or bottom of society, people have participated in, and listened to, music in its many forms as it has developed through countless generations to the present day.

Instruments have developed to allow increasingly complex, and expressive music forms until the peak was reached sometime in the latter part of the 19th century, coinciding with Thomas Alva Edison’s invention of recorded sound. Before successfully inventing recorded sound, Edison must have arrived at a fundamental realization: that sound can be entirely characterized in two dimensions. His first cylindrical recording was nothing more than a rough approximation of the changes in pressure amplitude (caused by the modulation of his voice), plotted against a constant time base (generated by the steady turning of a crank). Crude as his technique may have been, sound was recorded, and recognizably reproduced, for the first time in history.

Thomas Edison, c.1877. Photo courtesy of the U.S. Library of Congress.

The limiting factor in Edison’s first experiment was not his idea, but his hardware; further improvements could only come from refinements in technique.

So here we are, just over 120 years after the invention of sound recording, and how much further have we really got?

When looking back over the past century of audio developments it is interesting to note that improvements in technique have not happened simultaneously in all branches of the ‘tree’ that makes up the music reproduction chain as we know it today. Rather, it ‘jumps’: one part suddenly moves forward, only to leave other parts of the chain badly exposed. I could mention several examples of this, but suffice to say that a very clear example was the move from 78rpm records to 33rpm microgroove LPs. The replay hardware left a lot to be desired, and eventually this inadequacy led to a severe reduction in the quality of the software.

Progress is not a straight line, and this situation has repeated itself numerous times over the history of recorded sound. As a result, music recording, and its associated recording and reproduction equipment, has fluctuated greatly, and peaked in several areas of performance, only for the improvements/developments to be reversed due to the limitations of another branch or branches of the system. It is, in my opinion, of the utmost importance to study the historical development of each branch of the music reproduction chain, in order to ‘glean off’ the best (most interesting?) developments, and then resurrect them to compare the overall result with what is considered the ‘state of the art’ today. It makes for an interesting comparison, and teaches important lessons in why methods, and technologies were discarded – mostly, I am sorry to say – due to economic considerations rather than qualitative ones.

To try to visualize the argument, I have constructed this chart to attempt to demonstrate how I see the peaks, dips, and troughs of development in the five main areas of the music reproduction chain: firstly, to demonstrate the relative lack of real progress in sound quality in absolute terms (so much is, and has been, made of the High-End Audio industry’s obsession with “State-of-the-Art” technology that this serves as a potent reminder of the numerous areas where current technology is anything but “State-of-the-Art”); and secondly, to analyze why, historically speaking, developments took a turn for the worse. Remember this is a very broad historical/empirical overview, not a detailed study. A book will be written on the subject if ever I find the time, so here are the five “branches” as I would define them.

To support, and understand the thesis it is important to stress that the position of each branch represents what should be considered the best effort available at the time, not necessarily what was well known or prominently marketed. A view has also been taken on the ideas, and principles behind the equipment in question, so that advances in technique – when applied to the older ideas – have also been considered.

The points system is applied to simplify the overview, and allow for comparison between the decades in absolute terms, and as can be seen we have not moved forward since the 1950s. In fact, we have reversed most of the absolute peaks of development relating to the position of each branch wave on the above scale. It is fairly easy to see that audio quality in its absolute form peaked around 1960, and that this was primarily due to the superior quality of the recording, and of the software. The decline since then has a number of explanations, some of which I will attempt to address in the rest of this essay.

With the benefit of hindsight, both recording and software technology reached a peak during the 1950s, so that by the early 1960s recording, and software quality had peaked. This goes some way towards explaining why records, and recordings from this period are so highly desired, and prized by today’s music lovers, collectors, and audiophiles (don’t you just hate that word).

The addictive Long Play microgroove.

Relative to the quality standards of recording, and software manufacture, the replay equipment was at a very crude stage in its development at the time of the introduction of the microgroove mono LP in 1948/49, and developed very slowly until the early 1970s. In my estimation, it really only reached its peak about 1985, with the introduction of the Pink Triangle, and Voyd three-motor turntables, the Helius tonearm, the van den Hul diamond stylus shape, and Mr. Kondo’s Io cartridge with its titanium cantilever. It is therefore fairly easy to understand why record companies could reduce the quality of the LP-software (in many cases this was actually an “improvement” in the sense that you could now play the record without severe mistracking) without noticeable quality loss.  Anyone who has tried to play a Decca or RCA opera record like the Decca recorded RCA release of “The Force of Destiny,” with the Rome Opera conducted by Previtali, and di Stefano/Milanov on RCA will seriously wonder how this could possibly have been tracked by an average 1959 tonearm/cartridge combination. No wonder Decca had such a high return rate of their LPs at the time.

Amplification reached its peak earliest of all, in the 1920s or early 1930s, and only by 1989/90 had it re-established or exceeded that quality level, with the re-introduction of the single-ended triode amplifier (SET). As a side note, it has always amazed me that no magazine has ever run a challenge of the decades, comparing what could be considered the best amplifier at the end of each decade, to see if we have indeed moved forward in absolute terms. I have done this comparison on several occasions, which is one of the reasons why I decided to write this article. I can tell you it is a more educational experience than any review.

A similar comparison should be made between older, and newer models of different manufacturers’ products, to establish whether they have actually moved forward, or whether (as might be suspected) their claims of continual progress ring hollow.

Loudspeaker technology was only invented in 1924, and is considered to have peaked in the late 1930s. It has to be remembered that loudspeaker technology is by far the most expensive audio technology to research, and develop, and that most of the really serious development took place in cinema sound, not in home music reproduction systems.  It is only with the benefit of hindsight that this becomes really obvious. To me, a sole loudspeaker product stood out in the 1980s: the Snell Type A/III. This is why speaker technology did not drag the result down even further.

Simple wooden boxes.

In reality, if you compare the very best products available during each decade from about 1930 on, very little progress has taken place. This is undoubtedly due to the widely disparate levels of development (or in some cases refinement?), in each of the branches of the “audio reproduction tree” at any given time, as well as increasingly commercial considerations regarding cost, finish, and appearance, as the audio industry started to aspire to commercialism in the late 1950s and 1960s.  Most of these later decisions have not benefited or furthered the goals of “Higher Fidelity.”

It is almost paradoxical that an improvement in one branch of the system has invariably been counteracted by a severe decline in another branch – or should I say “have allowed a reduction of quality to take place unnoticed in another branch” – thereby leaving a kind of “balance” which has meant that sound quality has not changed much over the past 30-40 years.

In order to try to understand why real development towards “High-Fidelity” has not progressed further than it has, the above historical diagram helps to visualise how I see the deterioration and improvement in quality that have happened since the introduction of recorded sound.

Our measurement technology has certainly contributed considerably to the slide in absolute quality. “The same for less” is the motto that has been applied time, and again when engineers have treated measurements as direct proof of sound quality. This approach invaded the industry in the 1940s, via David Theodore Nelson Williamson’s high-feedback amplifier design, and the use of measurements as proof of better sound has become increasingly dominant to this day.

Every new technology that has been introduced has generally started its life just as another branch on the audio system “tree” has reached its peak, but there has always been another trough somewhere else to put the blame on when new technology did not provide the necessary (claimed) improvements in overall performance. Our belief in the idea of progress has most certainly supported this development.

It is almost certain that, when the transition from triode to pentode was made, software quality can be at least partly blamed for the demise of the triode: the move from 78s to microgroove LPs helped conceal the real inferiority of the pentode. The pentode, in turn, became the victim when the transistor amplifier was introduced. This happened almost simultaneously with the introduction of stereo, a time when software, and recording quality was at its absolute peak, so the reduction in sonic quality that the transistor introduced was more than counterbalanced by the improvements in software quality.

As we approach modern times, the increasingly well-developed “objectivist” technological dogma, combined with the growing post-war belief that progress is indeed a straight line, and with the better, more clever marketing techniques originally used to “rally the troops” during the national emergencies of the 1930s, and 1940s, helped create a shift in the market aimed at higher proliferation, and profitability to the detriment of absolute sound quality. To me, it is perfectly clear that the resources being invested in improving absolute quality had largely evaporated by 1965, as marketing men, and accountants took over the main budget in most commercially successful companies, a fact that seems to have gone unnoticed by both the audio press, and the buying public.

The oft-maligned CD.

Remember the excuses for the Compact Disc in the early days? “It is too accurate for the rest of the audio chain, and is therefore showing it up…” or some such nonsense. How about the current speaker/room interface debate? The room is the “last frontier”: speakers would sound perfect if only the rooms they were used in were better. The fact that most loudspeakers (whether claimed to be of high-end quality or not) are designed under near-field anechoic conditions compounds the issue in my mind. What use is there in designing under these conditions when most people sit at three-to-six times the distance used to measure speaker response? On top of that, who lives in rooms with acoustics complementary to an anechoic chamber? These little truths do not seem to have impacted the designers of modern loudspeakers at all.

The list of creative excuses for the lack of real research, and thus for the questionable or downright incompetent performance of much of the so-called high-quality equipment sold, is endless, and was a strongly contributing factor in the early fragmentation of the audio industry into many companies, each specialising in one branch (amplifiers, speakers, cables, cartridges, etc.). This allowed a kind of collusion, where the fault for the poor performance of any piece of equipment could always be explained away by a lack of compatibility with the other equipment in whatever system was used to pass judgement. Synergy became the illusory goal of disillusioned music lovers through endless “upgrades,” until most gave up.

This is another item that has always made me wonder why reviewers, and hi-fi magazines have not adopted a review process that reduces this problem. The solution is simple: ask the manufacturer of whatever piece of equipment you want to review to submit a complete system within which he believes the specific piece will perform at its best, then review the overall result, together with a conclusion on how well the reviewer (or reviewers) feel the manufacturer has achieved his stated goals with the product in question. This would remove 90 per cent of manufacturer excuses, and give the reader (end consumer) a much better idea of what to choose.

If more time had been spent investigating the fundamental problems, and less time on constructing a great marketing story, and subsequent excuses (where necessary), perhaps we would have better, and more satisfying music reproduction equipment today.

Back once again: The Single-Ended Triode.

In 1948-49, the single most important negative development (after the abandonment of the single-ended triode, and the introduction of negative feedback, invented in 1928 by Telefunken) took hold: the 1947 Williamson article in Wireless World, which laid the foundations of the single most important theory now ruling audio design: specifications as a measure of sonic quality.

This theory was quickly picked up by great marketers like Harold Leak, Peter Walker (of Quad), A. Stewart Hegeman, David Hafler and countless others.

This was, in my opinion, the most damaging single theory ever imposed on audio design. The suggestion that measured quality (as exemplified by distortion, bandwidth, and noise measurements as we know them) is directly related to sound quality became the most compelling theory going. Why? Because it is very simple, and its very simplicity makes it the most powerful marketing tool ever handed to the audio industry. It provides the manufacturer with “proof” that his audio product is “better” than the competition. What more could you want?

It has single-handedly created the ideological basis for thousands of minute, and incremental quality reductions, in part because it has made it possible to make products that measure the same at increasingly lower prices, often using technologies poached from other branches of electronics, which, in the absence of any real research into music reproduction techniques, provide a powerful substitute, and excellent surrogate to feed an unsuspecting, and gullible public seeking a music system of quality. It is this public who can be easily fooled into believing that when two products measure the same then they ARE the same, and the buying decision then becomes a simple question of price. How brutally simple, and incredibly effective.

To oversample or not to oversample?

Digital audio is the latest example of how the high-fidelity industry has distorted the concept of research, and improvement. Since the introduction of the Compact Disc in late 1982, the technology has entered into the usual numbers game: 20-bit is better than 16-bit, 96kHz is better than 44.1kHz, 24-bit is better still, as is a 192kHz sampling rate, and so on. In addition to this, music is now becoming virtual, and very few customers actually know what level of quality they are downloading onto their computer. Claims from the file-providing services that their files are stored using the ultimate lossless, high-resolution codec, coupled with the convenience of a song at your very fingertips, make this path a rather compelling one for music consumers. And do all the hardware, and software upgrades necessary to play back these new file formats make manufacturers happy or what? Believe me when I say that a critical-listening session comparing high-resolution files to a well-recorded CD on a decent CD transport quickly exposes the claims for higher bit rates as just another marketing-based illusion.
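To put the numbers game in concrete terms, here is a minimal sketch (my illustration, not part of Qvortrup's essay) of the arithmetic behind those format claims, using the standard Nyquist, quantization, and bit-rate formulas; the function names and the example formats are chosen purely for clarity.

```python
def nyquist_bandwidth_khz(sample_rate_hz: float) -> float:
    """Highest representable frequency (the Nyquist limit) in kHz."""
    return sample_rate_hz / 2.0 / 1000.0

def dynamic_range_db(bit_depth: int) -> float:
    """Theoretical dynamic range of ideal uniform quantization: 6.02*N + 1.76 dB."""
    return 6.02 * bit_depth + 1.76

def pcm_bitrate_kbps(sample_rate_hz: float, bit_depth: int, channels: int = 2) -> float:
    """Raw (uncompressed) PCM bit rate in kilobits per second."""
    return sample_rate_hz * bit_depth * channels / 1000.0

# Compare the formats named in the text: CD audio versus two "high-resolution" variants.
for rate, bits in [(44_100, 16), (96_000, 24), (192_000, 24)]:
    print(f"{rate / 1000:g} kHz / {bits}-bit: "
          f"bandwidth {nyquist_bandwidth_khz(rate):g} kHz, "
          f"dynamic range {dynamic_range_db(bits):.1f} dB, "
          f"stereo bit rate {pcm_bitrate_kbps(rate, bits):.0f} kbps")
```

These are precisely the figures that format marketing quotes (more bandwidth, more dynamic range, more bits per second); the essay's point is that none of them says anything about whether the result actually sounds better.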

Peter Qvortrup at home, May 2017.

Conclusion, Or Some Such

I started writing this discussion piece in the first half of the 1990s, and it has stayed on my computer for years, being taken out, and “brushed off” from time to time. The piece you have just read was intended as a “taster” to a much longer series of articles discussing in much more detail each of the five branches of audio, and music recording, and reproduction. I still intend to write these articles delving into each of these technologies, so perhaps we will meet again.

Peter Qvortrup

September 17, 2017

This article was published in Part-Time Audiophile magazine.

Audio Note AN-E SPe HE Loudspeaker

One of the best performance-for-the-price, high-efficiency loudspeakers from Audio Note is the AN-E SPe HE model.

The crossover is simple, essentially first order (6 dB/octave), and in the time domain both drive units are connected with the same positive acoustic polarity (a rough worked example of what a first-order crossover involves follows below).
Impedance: 6 ohms
Sensitivity: 97.5 dB
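As an illustration of how simple a first-order (6 dB/octave) crossover is, here is a minimal sketch using the textbook formulas L = R / (2πf) and C = 1 / (2πfR), plus a rough sensitivity-to-SPL estimate. The crossover frequency, amplifier power, and listening distance below are hypothetical illustration values, not Audio Note's published design figures; only the 6 ohm impedance and 97.5 dB sensitivity come from the specification above.

```python
import math

def first_order_lowpass_inductor_mh(impedance_ohm: float, crossover_hz: float) -> float:
    """Series inductor (mH) for a first-order 6 dB/octave low-pass: L = R / (2*pi*f)."""
    return impedance_ohm / (2 * math.pi * crossover_hz) * 1000.0

def first_order_highpass_capacitor_uf(impedance_ohm: float, crossover_hz: float) -> float:
    """Series capacitor (uF) for a first-order 6 dB/octave high-pass: C = 1 / (2*pi*f*R)."""
    return 1.0 / (2 * math.pi * crossover_hz * impedance_ohm) * 1e6

def spl_at_listener_db(sensitivity_db: float, power_w: float, distance_m: float) -> float:
    """Rough on-axis SPL: sensitivity (dB @ 1 W / 1 m) + 10*log10(P) - 20*log10(d)."""
    return sensitivity_db + 10 * math.log10(power_w) - 20 * math.log10(distance_m)

# Hypothetical example values: 6 ohm nominal impedance (from the spec above),
# an assumed 2.5 kHz crossover point, 8 W of amplifier power, 3 m listening distance.
R_OHM, XOVER_HZ = 6.0, 2500.0
print(f"Low-pass inductor:   {first_order_lowpass_inductor_mh(R_OHM, XOVER_HZ):.2f} mH")
print(f"High-pass capacitor: {first_order_highpass_capacitor_uf(R_OHM, XOVER_HZ):.1f} uF")
print(f"SPL from 8 W at 3 m: {spl_at_listener_db(97.5, 8.0, 3.0):.1f} dB")
```

The last line shows why a 97.5 dB sensitivity figure matters for low-power valve amplifiers: even a handful of watts yields realistic listening levels at a normal seating distance.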

Amir Audio will demo the AN-E SPe HE in the showroom as soon as possible.

Eduardo de Lima (Audiopax Designer)

Eduardo de Lima started out in audio as a teenager, first assembling his own speakers and later building solid-state and push-pull valve amplifiers. Many years of experience in this field eventually led him to build his first single-ended amplifier using 300B valves, devices that had been designed around 1930 (in the early days of cinema sound) and had recently been rediscovered by the audio world. The sweeping musicality and naturalness of this strange amplifier surpassed any expectation. With his “orthodox” training as a designer, having graduated in Electronic Engineering from UFRJ (Federal University of Rio de Janeiro) and earned a Master’s degree in Electrical Engineering (M.S.E.E.) from Syracuse University in the United States, Eduardo de Lima had a hard time accepting what he heard. The treble was limited and the bass slow and poorly controlled (characteristics for which the anachronistic 300B was known), but none of that mattered to him: voices and some instruments sounded real and captivating, and the midrange was simply magic. Eduardo de Lima spent the following years completely dedicated to discovering the secret of this magic and how to obtain it without any loss at the end of the track.
In 1997 he founded AUDIOPAX, the company that would embody his research and dreams. In that same year he presented his first amplifier/speaker pairing at VSAC, an event dedicated to valve audio, in Silverdale (USA). Also in 1997 he created the LM3 (Low Mu Triode with Higher Raw Efficiency Emulator, LMTHREE), a circuit that emulates the behaviour of the 300B with simple replacement valves. This technology was used in the SE388 amplifiers, which, together with the new CX305B speakers, were presented at the Hi-Fi Show 98 in São Paulo (his first participation in a Brazilian show, a room later remembered as the “cellar room”) and also at Silverdale’s VSAC of 1998, with an enthusiastic response in both cases.
After several other launches, in 2001 he designed a piece of equipment that would become a legend around the world: the Model 88, one of the most innovative amplifiers of all time, with innumerable inventions in concepts, topologies and circuits, among them the concept of “Timbre Lock®,” the only adjustment that adapts the distortion spectrum of the amplifier to that of the loudspeaker connected to it. The Model 88 has won countless awards, enthusiastic reviews and magazine covers: Stereophile Class A, the Diamond category in Audio and Video, a Blue Moon Award at 6moons, a Hi-Fi News Industry Award, and even today it is still considered a reference by several reviewers worldwide.
Another absolute reference product was created in 2003: the Model 5 preamplifier, with numerous innovations, incorporating the same Timbre Lock® concept that had already been awarded in the Model 88.
He participated actively in national and international shows between 2002 and 2006, always presenting preamplifiers, amplifiers and speakers with innovative designs, earning worldwide recognition and numerous “Best of Show” mentions from the world’s leading publications. From 2007 to 2010 he focused on the Brazilian market, creating new products aimed at the entry level in partnership with Lando. He launched a new generation of his electronics and brought together the concepts of lutherie and engineering in his new series of speakers.
In 2011 he launched his new amplifier, the Maggiore M100: another revolutionary design and the first on the market to offer a single-ended, class A1 output with up to 130W of power, a dream for all music lovers. In his first review, Fernando Andrette of Audio Video Magazine stated: “I claim the Maggiore M100 to be the most musical amplifier I have listened to in my entire life.” In the same year he was also acclaimed at HiEnd 2011, the largest audio show in Brazil, and returned to international shows, participating in T.H.E. Show 2011 (an event running parallel to CES 2011) and in the largest audio show in the world, the 2011 Munich High End Show. The customary success was repeated: numerous citations as the best room at both shows.
Eduardo de Lima was 54 years old and in his most creative phase when a problem with his aorta took him prematurely. In the months before his death he had designed new preamplifiers, a whole series of high-power hybrid amplifiers and two new speaker series. He had also been working on revolutionary concepts, such as defining what really accounts for the recognized character of valves and vinyl, something that would soon have been applied to his next projects. Eduardo left a son, Lucas de Lima, who took over his share of the company and who, together with the entire current Audiopax team, will carry on his father’s dream: a company that brings back all the emotion that we feel with music.