Hi-Fi.ru Forum Archive
As of 23-5-2020
The Hi-Fi.ru portal no longer provides user discussion features and services



Jitter in optics

 
 
In the neighboring and friendly :)) thread I posted jitter plots for optical links.

http://hi-fi.ru/forum/forum52/topic67398/?PAGEN_12=2



It is easy to see that even in the best case the jitter level is around 1000 ps. That is roughly 85 dB of dynamic range. I have always enjoyed the talk about very fancy brand-name :)) optical cables. Along the lines of: I BOUGHT ONE, and suddenly the sound opened up and took off :)).

The moral of the fable is simple :)). A DAC with jitter suppression is not a luxury but a necessity. The Catafonic DAC suppresses jitter, and does yours?
 
 
For the laziest readers, I will duplicate the plot here as well.
 
 
That is how it looks to me too: this is the loss (in decibels, along the horizontal axis) of the digital signal level over 1 meter of optical cable (of two types) at different jitter levels (jitter is on the vertical axis). The more jitter, the greater the loss. But what does the dynamic range of the analog signal have to do with it?
 
 
:!:    APOGEE's take on "digital":

The Difference Between Good Old Analog And Digital Audio
Sound is transmitted through air as movement of individual air molecules. A microphone turns this movement of air into a changing voltage which represents the air movement. This changing voltage is called an analog of the air movement. Sound analogs can also be mechanical, such as a phonograph groove, electrical current, magnetic field, optical energy, or any continuously varying representation.

Digital audio uses numbers to represent sound. These numbers have to be big enough to capture the smallest and biggest details in sounds – accurately. The same numbers also need to be changed fast enough so our ear is not aware of them stepping by. You are probably aware that cartoons consist of a sequence of individual drawings changing fast enough to give the illusion of motion. If we slow the sequence of drawings down, the image starts to flicker like the old movies and motion becomes jerky. To fool our eyes into seeing fluid motion, the images need to change from one to the next at least 25 times per second. There are some motion picture systems – such as the one from Showscan in Culver City, CA – that increase the rate to 60 per second, resulting in an amazingly grain-less and fluid motion.

The frozen visual images of individual movie frames are analogous to the individual numbers of digital audio. Our ear doesn’t get fooled into thinking that these numbers sound real until they change at around 32,000 times a second. The individual numbers are called samples and represent audio in narrow slivers of time. The rate these frozen slices of audio change per second is called the sample rate.

You will often see sample rates represented as kHz or kiloHertz (k = one thousand; Hz = cycles/times per second). A sample rate of 32 kHz (32 thousand samples per second) is used in digital broadcasting applications. Compact Discs use a 44.1 kHz sample rate (44,100 samples per second). These individual samples are different to the musical instrument or vocal samples used in assembling music tracks. Sound samples are made up from strings of the individual “slices of time” samples much as a video clip is a sequence of individual video frames.

You can see it takes a lot of numbers in the digital world to represent an analog version of the same sound. An analog signal path may need a frequency response of 100 kHz to faithfully reproduce 20 kHz audio. A digital signal path for the same 20 kHz audio requires a frequency response of several million Hertz (Megahertz or MHz). Bandwidth is a measure of the lowest to the highest frequency a path can handle. The wide bandwidth required for digital audio is due to the way the individual numbers are transmitted across an interconnect. There are a number of different methods of making digital audio connections inside equipment and externally to other devices.
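
To put "several million Hertz" into numbers, here is a rough back-of-the-envelope sketch (Python; it assumes the 64-bit frame structure described in the AES/EBU section below, plus biphase-mark coding, which roughly doubles the transition rate on the wire):

# Rough bandwidth estimate for a two-channel AES/EBU-style stream.
# Assumptions: 64 bits per frame (two 32-bit subframes), biphase-mark
# coding roughly doubling the symbol rate on the wire.
def interconnect_rates(sample_rate_hz, bits_per_frame=64):
    bit_rate = sample_rate_hz * bits_per_frame   # payload bit rate, bit/s
    line_rate = 2 * bit_rate                     # biphase-mark cell rate
    return bit_rate, line_rate

for fs in (32_000, 44_100, 48_000):
    bits, cells = interconnect_rates(fs)
    print(f"{fs/1000:.1f} kHz: {bits/1e6:.2f} Mbit/s, ~{cells/1e6:.1f} MHz on the wire")

# At 44.1 kHz this gives about 2.8 Mbit/s and roughly 5.6 MHz of required
# bandwidth, versus the ~100 kHz analog path mentioned above.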

It’s All In The Timing
A drummer’s timing can make the difference between good music and a memorable hit. Digital audio, likewise, needs good timing to make it from one place to another with uncompromised sound quality. The timing in the interconnect is used to unscramble all the bits for accurate recovery of the exact samples transmitted. The timing also needs to be very regular.

Timing jitter is any irregularity in the timing passed across an interconnect. If the samples become messed up in the interconnect, the effects are usually very audible, varying from occasional clicks to a loud, harsh fuzz. Timing jitter can cause more subtle effects. In digital to analog converters for example, the location of instruments across the audio sound stage can become less focused. Note: A “sound stage” is the mental picture you form when you listen to a piece of music and localize the various instruments and vocals as if they were on stage in front of you (closing your eyes can help form the image). A well defined sound stage has width, depth, and focused locations all defined by subtle reflections, reverb tails and tonal quality in a stereo mix.

These Interconnects Sound Different!
You may have heard critical digital audio listeners complain “if digital audio is so perfect, then how come it sounds different when I use different interconnects?” Some experts will tell them it must be their imagination because if the numbers are sent correctly on each interconnect they both must sound the same. That makes sense, but it’s only part of the story…

When a digital to analog converter receives the samples from an interconnect, it must also extract the timing information and regenerate its own timing “clock”. A good analogy is a drummer playing to a click track. If the drummer is good, he can nail the basic tempo of the click and add in faster patterns of his own, such as a sixteenth-note high hat. When digital devices receive the clock from an interconnect, they lock up to the sample rate tempo and add faster multiples many times higher than the drummer’s sixteenth-note example. Now imagine what would happen to the drummer’s playing if we put slight, random variations in his click track reference. The drummer would try to follow the changing tempo but because the changes were unpredictable, he would overshoot the click tempo as it moved up and down. The random click track variations around a perfectly steady tempo could be called tempo jitter. The poor drummer ends up with worse jitter in his timing unless he can ignore the small changes and play to the average.
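
The "play to the average" advice is, in circuit terms, low-pass filtering of the recovered clock. A minimal sketch (Python, with made-up numbers; a plain exponential average stands in for a real PLL loop filter):

import random

# Hypothetical jittery word clock: nominal 48 kHz with 1 ns RMS random jitter.
period = 1 / 48_000
intervals = [period + random.gauss(0, 1e-9) for _ in range(100_000)]

# The "good drummer": follow the average tempo with a slow exponential
# average instead of chasing every individual edge.
alpha = 0.001                       # small alpha = slow loop = strong filtering
est = period
recovered = []
for t in intervals:
    est += alpha * (t - est)
    recovered.append(est)

def rms_dev(xs):
    return (sum((x - period) ** 2 for x in xs) / len(xs)) ** 0.5

print(f"raw edge jitter : {rms_dev(intervals) * 1e12:7.1f} ps RMS")
print(f"after averaging : {rms_dev(recovered) * 1e12:7.1f} ps RMS")

# The averaged clock wobbles far less than the raw edges, at the cost of
# following genuine (slow) tempo changes only gradually.
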
The problem of interconnects affecting the sound can be traced to jitter in the timing of the digital to analog playback. Each time digital audio timing is passed through additional circuits, it picks up slight variations around the original perfect timing. The amount of timing jitter added through successive stages depends on the type of circuits. Inside products, different computer logic families used for digital calculations add varying amounts of jitter. Noise on power supplies and grounds, nearby clocks with similar harmonics, AC power and external interference can all add jitter to perfect timing. Some of it is random and some has specific frequency content.

When the internal timing is passed to another device over an interconnect, different types of connections add more or less jitter. A short AES/EBU connection over high quality digital audio cable – such as Apogee’s Wyde Eye 110 Ω AES/EBU cable – will pick up less jitter than the same signal run through a length of microphone cable, XLR connectors and patch bays. A S/PDIF coaxial wire connection (especially one made with Wyde Eye 75 Ω cable) will be cleaner than the consumer “TOSLINK” optical version, at least partially because of the slower response time of the optical transmitter and receiver.

When the circuits in digital to analog converters (D/A’s) recover the timing, they are often negatively influenced by the jitter picked up along the way, much like our miserable drummer trying to follow the varying click track. When the recovered timing starts to wobble around as it tries to track the jittery input, it modulates the analog sound coming out of D/A’s, causing all sorts of subtle negative effects such as changes in the stereo image and tonal quality. An interesting source of jitter in AES/EBU digital interconnects is due to the changing samples and subcode information. A 1 kHz digital audio tone causes 1 kHz jitter.

Different interconnects do not sound different if the timing circuits of the reference D/A are designed to ignore any jitter and the samples are correctly transmitted. Manufacturers can claim low jitter circuitry – although it’s only a relative claim, as at the moment there are no accepted standards for jitter measurement for digital audio. Jitter also has a big influence on the quality of analog to digital converters with very similar side effects, which unfortunately are there forever after.

AES/EBU Interface
AES/EBU, AES3-1985, ANSI S4.40-1985, AES3-1992, EBU Tech. 3250-E, CCIR Rec. 647 (1986), CCIR Rec. 647 (1990). Confused? Well, don’t be. These different standards are lumped together and called AES/EBU, the connection designed to standardize plugging one digital box to another. AES is the Audio Engineering Society and EBU is the European Broadcasting Union. These organizations and others have worked very hard to bring us a standard method of sending professional digital audio across a single interconnect with maximum compatibility. Generally the approach works well as long as the potential weaknesses are kept in mind when stringing things together. A better understanding of how two channels of digital audio flow across a single connection helps highlight the pitfalls.

Electrically, the AES/EBU signal is tailored to use microphone-type cable, although in fact the bandwidth is a good deal wider than regular mic cable can handle successfully. Microphone cable normally carries analog audio on a twisted pair of wires enclosed in an outer metal shield. The shield is usually a continuous, flexible braided wire jacket or, in applications where flex is unnecessary, a metal foil wrap is often used (inside patch bays and consoles for example). The shield provides a ground connection and reduces the influence of outside electrical interference on the two wires carrying the audio. Two wires are used instead of one to further reduce the effects of outside interference. Because the two wires are twisted together, they follow almost exactly the same path. Any interference managing to make it through the tubular shaped shield tends to affect both wires equally. An example would be running the microphone cable alongside a power transformer. The magnetic energy radiated from the transformer causes the two wires to develop the same AC mains related hum voltage. If the two wires were driven into a transformer, this hum voltage would not come out the other side of the transformer because both wires have the same voltage at any moment due to the hum. For the transformer to give any output, there must be a voltage difference between the two wires. The transformer input is called differential because the analog audio is carried as the voltage difference between the two wires. The noise signals picked up along the way are called common mode inputs and the ability of the transformer to ignore them is rated as common mode rejection. In professional audio we call differential inputs and outputs balanced and, because transformers are bulky and expensive, they are outnumbered in modern equipment by their more economical electronic equivalent: electronically balanced inputs and outputs.

As compared to other digital formats which rely on multiple interconnects for clock, left and right data, AES/EBU simplifies the cable connections and uses readily available wire interconnects that are already in use at most professional and semi-professional facilities.
A single line connection of stereo digital audio must transfer a string of data packages containing left and right audio samples repeated at the sample rate. One package is referred to as a frame. The single line AES/EBU interconnect divides each package into 64 little pieces of binary bits with 32 for the left sample and 32 for the right. Each chunk of 32 bits is called a subframe. To make it easy to recover the data on the receiving end, each bit is further divided in two. Patterns of full bits and half bits are coded to indicate whether the bits represent one binary state or another, often referred to as zeros and ones. In some older multi-line interfaces, the location of the beginning of samples is marked with a separate word clock line. To find the beginning of the left and right samples in the AES/EBU format, each 32-bit subframe includes a unique pattern of half bits and at least one delay equal to one full and one half bit joined together. Receiver circuits can recognize the longer one and a half sync bit and use it to extract the left/right synchronizing information for sample decoding and word clock separation.
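
As an illustration of the 32-bit subframe layout just described, here is a small sketch (Python). The packing order and the field names are my own shorthand, not Apogee's: four sync/preamble bits, up to 24 audio bits, then four more single-bit fields (the text below names the user data and channel status bits; validity and parity are the other two in AES3).

from dataclasses import dataclass

@dataclass
class Subframe:
    preamble: int        # 4-bit sync pattern marking left/right/block start
    audio: int           # audio sample, up to 24 bits
    validity: int        # V: sample validity flag
    user: int            # U: one bit of the user data stream
    channel_status: int  # C: one bit of the 192-bit channel status block
    parity: int          # P: parity over the subframe

def unpack_subframe(word: int) -> Subframe:
    # Assumed packing: preamble in the low 4 bits, flag bits in the top 4 bits.
    return Subframe(
        preamble       = word & 0xF,
        audio          = (word >> 4) & 0xFFFFFF,
        validity       = (word >> 28) & 1,
        user           = (word >> 29) & 1,
        channel_status = (word >> 30) & 1,
        parity         = (word >> 31) & 1,
    )

# Two subframes (left + right) form one 64-bit frame, repeated at the sample rate.
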
The audio samples can be up to 24 bits long and the sync pattern uses four more bits. With 32 bits available, there are four extra bits left to send more information. Digital audio samples must change very quickly whereas other information can be updated at a slower rate. For example, emphasis is usually selected at the beginning of a session and remains on or off, so updating the emphasis status 44,100 times a second would be redundant. The AES/EBU interconnect takes two bits of each subframe and calls them user data and channel status bits. To pack more information into the one channel status bit location, 192 bits are sent sequentially, one bit at a time. These 192 bits can represent vast amounts of data at a slower rate than the one bit alone. The beginning of one of these sequences is marked with a special sync pattern in place of the normal sync pattern for a left sample. At the receiving end, the status bit is picked off at every frame and assembled one at a time into a string 192 bits long. The collection of 192 bits repeats about 230 times a second for a 44.1 kHz sampling rate. The status bits can represent controls for a variety of important data. Sample Rate, Emphasis and Copy protection are represented. Even control of redundancy checking is implemented. Bits for ‘indexing’ are supported. Identification of professional or consumer format is also indicated.
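
The arithmetic behind the roughly 230-per-second figure: one channel status bit arrives per frame, so a full 192-bit block completes every 192 frames. A quick check (Python):

# One channel status bit per frame, 192 bits per block.
for fs in (32_000, 44_100, 48_000):
    print(f"{fs/1000:.1f} kHz: {fs/192:.1f} channel status blocks per second")

# At 44.1 kHz that is about 230 blocks per second, i.e. one every ~4.35 ms.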
 
 
Frankly, it is completely unclear why anyone would consider the optical interface better than the electrical one with respect to jitter :o
After all, producing the optical signal and then receiving it adds at least two extra conversion stages: first from electrical to optical, then from optical back to electrical... Naturally, these extra conversions can only make the jitter situation worse, never better...
 
 
But nobody actually thinks that optical is better...  :)
 
 
No, no, colleagues, you are verrry wrong.

The thing is, when a digital circuit fails to read 100 or 1000 samples, that is perceived as a click and is called High Nonlinear Distortion.

But when the circuit, rather poorly (on the 10th attempt), somehow manages to synchronize to 1 or 2 jittering edges, that is already a loss of detail and air.
 
 
Gena, these are not quotes from a scientific paper, just bits of popular explanations about "digital" (mostly aimed at musicians) from the Apogee manual for the 1000 series.  :D
 
 
--------But what does the dynamic range of the analog signal have to do with it?

Jitter is a parasitic modulation of the useful signal. The level of the parasitic sidebands, i.e. of the interference, depends on the jitter level. For jitter at the 1000 ps level, everything below -85 dB is already down at the interference level. Accordingly, the real resolution corresponds to roughly 12 bits.
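
For reference, a common way to put numbers on this claim (a sketch of the standard jitter-limited SNR formula, not necessarily the poster's exact calculation): a full-scale sine at frequency f sampled with random timing jitter of RMS value t_j cannot do better than SNR = -20*log10(2*pi*f*t_j), and effective bits follow from (SNR - 1.76)/6.02.

import math

def jitter_limited_snr_db(f_signal_hz, t_jitter_rms_s):
    # Best-case SNR of a full-scale sine limited only by random sampling jitter.
    return -20 * math.log10(2 * math.pi * f_signal_hz * t_jitter_rms_s)

def effective_bits(snr_db):
    return (snr_db - 1.76) / 6.02

tj = 1e-9   # 1000 ps RMS, the level discussed above
for f in (1_000, 10_000, 20_000):
    snr = jitter_limited_snr_db(f, tj)
    print(f"{f/1000:5.1f} kHz signal: SNR ~ {snr:5.1f} dB, ~{effective_bits(snr):4.1f} effective bits")

# With 1 ns of jitter the ceiling is about 78-84 dB for 10-20 kHz signals,
# before any other noise or distortion is added.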
 
 
Quote
Nag writes:
No, no, colleagues, you are verrry wrong.

The thing is, when a digital circuit fails to read 100 or 1000 samples, that is perceived as a click and is called High Nonlinear Distortion.

But when the circuit, rather poorly (on the 10th attempt), somehow manages to synchronize to 1 or 2 jittering edges, that is already a loss of detail and air.

Your humor is appreciated, but for knowledge of the fundamentals I have to give you a failing grade :)).

A click usually appears after 8 consecutive interpolated samples. After 8 errors the conscience of the transport designers wakes up and they engage MUTE. Not after 100, let alone 1000, samples :)).
I could tell you a lot about transports. But that is a different topic.


To Gena

You see, Gena, the point here is not how the jitter level depends on the signal level (i.e. on the cable length), but that even at high signal levels there is a lot of jitter. A very large amount.
And despite that, there are still people who love to tell you how their VERY FANCY brand-name optical cable sings :)).

Gena, there are plenty of useful articles on audioprecision.com

http://www.audioprecision.com/download/whitepapers