
Sound Quality Explained By Expert

In reviews, we often meet sound-quality estimations in terms such as "transparent", "air", "sand", "detailed" and others that have no exact definition. In this article, read what sound quality is, in easily understood terms.


Author: Yuri Korzunov,
Audiophile Inventory's developer with 25+ years of experience in digital signal processing,
author of articles that make audio easy for beginners

 

Sound quality

 

Introduction

We all perceive music differently. And sound quality (audio quality) looks like an unmeasurable concept, like any other "quality", because, in the end, we estimate quality inside our heads according to our individuality and skills.
Reproduction of live performances demands a different quality concept.
In this article, we explain what sound quality is and how to estimate it, including live-performance quality.

If you want to clearly understand what sound quality actually is, read the article to the end.

 

What is sound quality

Sound quality is a measure of how good an audio hardware or software unit is.

In general, the quality estimates the accuracy and fidelity of sound at the output of an audio unit: a device or software.

When we compare two opinions about quality, we have no common reference point, because the mark is a feeling. And each person has their own inner feeling "ruler", which may differ from other people's.

So there should be methods of obtaining a quality estimate that do not depend on an individual's inner scale.

If we have the sound of a source, an audio system should bring it to our ears as is, without distortions. But distortions are there due to technical reasons.

Thus, we can simply estimate the distortions.

However, human hearing may react differently to different distributions of distortion by level and frequency. This is the subject of psychoacoustics.

For a more exact estimation, the distortions should be weighted according to the psychoacoustic features of human hearing.

 

Alternatively, we can ask a person how they like the sound. But we cannot be sure that their opinion matches ours or anyone else's.

So we can ask many people to be more confident.

 

Therefore, we have 2 ways to estimate sound quality:

  • objective and
  • subjective.

Read the article where these quality-assessment methods are compared...

 

How to measure sound quality

Objective

A number of parameters may be measured instrumentally to estimate sound quality. They estimate the distortion level of test signals that have passed through the studied audio system.

Test audio signals

Sinusoid (sine)

The sinusoid is the most popular test signal. It is simple to analyze and universal.

Sine(Time) = Amplitude×sin(2π×Frequency×Time+InitialPhase)
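
For illustration, here is a minimal sketch in Python with NumPy that generates such a sine test signal; the sample rate, amplitude and frequency are arbitrary assumptions:

    import numpy as np

    sample_rate = 48000           # Hz, assumed sample rate
    duration = 1.0                # seconds
    amplitude = 0.5               # linear scale (0 dBFS corresponds to 1.0)
    frequency = 1000.0            # Hz, assumed test-tone frequency
    initial_phase = 0.0           # radians

    time = np.arange(int(sample_rate * duration)) / sample_rate
    sine = amplitude * np.sin(2 * np.pi * frequency * time + initial_phase)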

 

Sine group

A group of sines is used for some kinds of analysis, when we want to study an audio system's response to a complex signal (dynamic-range abilities).

SineGroup(Time) = Sine1(Time)+Sine2(Time)+...+SineN(Time)

 

Sweep sine

A sweep sine is a sine whose frequency grows with time. It is one of the most popular signals for studying the frequency response of an audio system.

SweepSine(Time) = Amplitude×sin(2π×Frequency(Time)×Time+InitialPhase)

Its formula differs from the simple sine in that the constant Frequency is replaced by Frequency(Time).

As a rule, Frequency(Time) changes with time linearly or logarithmically.
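
As a sketch, such a sweep can be generated with SciPy's chirp function; the 20 Hz to 20 kHz range, the duration and the sample rate are assumptions:

    import numpy as np
    from scipy.signal import chirp

    sample_rate = 48000
    duration = 5.0
    time = np.arange(int(sample_rate * duration)) / sample_rate

    # Frequency(Time) rises logarithmically from 20 Hz to 20 kHz over the duration
    sweep = 0.5 * chirp(time, f0=20.0, t1=duration, f1=20000.0, method='logarithmic')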

 

White noise

White noise is noise that is evenly distributed over an infinite frequency range (it has the same magnitude at all frequencies).

It may be used to study the frequency response and dynamic abilities of an audio system.
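
A rough sketch of that use: pass white noise through a system under test and compare the output and input power spectra. The one-pole low-pass filter below only stands in for a real audio unit:

    import numpy as np
    from scipy.signal import lfilter, welch

    sample_rate = 48000
    noise = np.random.default_rng(0).standard_normal(10 * sample_rate)  # white noise

    # Hypothetical device under test: a simple one-pole low-pass filter
    output = lfilter([0.1], [1.0, -0.9], noise)

    # Estimate the magnitude frequency response as the output/input PSD ratio (in dB)
    freqs, psd_in = welch(noise, fs=sample_rate, nperseg=4096)
    _, psd_out = welch(output, fs=sample_rate, nperseg=4096)
    response_db = 10 * np.log10(psd_out / psd_in)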

 

Single Pulse (Dirac Delta Function)

In theory, it is a pulse with an infinitely high level and an infinitely short length. In digital audio, it is a pulse with a 0 dBFS level and 1 sample length.

This signal has an infinitely wide, flat spectrum.

For dynamic analysis, a "train" (sequence) of these pulses may be used.
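
A sketch of the digital case: a 1-sample pulse has a flat spectrum, and feeding it to a system gives the system's impulse response, whose FFT is its frequency response. The 4-tap averaging filter is only an assumed example system:

    import numpy as np

    n = 4096
    pulse = np.zeros(n)
    pulse[0] = 1.0                                         # 0 dBFS, 1-sample pulse

    # The pulse's own spectrum is flat: every magnitude equals 1
    print(np.allclose(np.abs(np.fft.rfft(pulse)), 1.0))    # True

    # Hypothetical system under test: a 4-tap moving average
    impulse_response = np.convolve(pulse, np.ones(4) / 4)[:n]
    freq_response_db = 20 * np.log10(np.abs(np.fft.rfft(impulse_response)) + 1e-12)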

 

Square waveform

It is used for analysis in a way similar to a pulse train.

 

Musical signal

Musical pieces are not the most suitable test signal for distortion measurement. A musical signal is complex and contains many components.

After distortion in an audio system, these components produce many products at the system output. This can make us unable to distinguish the test signal from the distortions.

EXAMPLE:

There are some mathematical ways to decouple the input signal and the distortions D(t).

For example, we can subtract the input signal In(t) from the output one Out(t). For exact subtraction, we must know the Gain of the tested audio system:

D(t) = Out(t) - In(t)×Gain

However, Gain is not a simple number. Gain may depend on frequency, level, etc. See more...
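
A minimal sketch of this subtraction, assuming a constant Gain and an output with an artificial small cubic term standing in for a real distorting unit:

    import numpy as np

    sample_rate = 48000
    time = np.arange(sample_rate) / sample_rate
    signal_in = 0.5 * np.sin(2 * np.pi * 1000 * time)

    gain = 2.0
    signal_out = gain * signal_in + 0.01 * signal_in ** 3    # assumed distorted output

    distortion = signal_out - gain * signal_in               # D(t) = Out(t) - In(t)*Gain

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    distortion_db = 20 * np.log10(rms(distortion) / rms(gain * signal_in))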

So, using a musical signal is too complex. We can get the desired information via simpler test signals.


 

Audio system features

Frequency response (magnitude)

Here magnitude is the modulus of the sine amplitude (in the mathematical meaning: amplitude without sign).

Sine oscillation is: Sine(Time) = Amplitude×sin(2π×Frequency×Time+InitialPhase)

Magnitude = |Amplitude|

Example: Magnitude = |1| = |-1| = 1

This feature shows the audio unit's gain at every frequency (in a frequency range).
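
A sketch of how the gain at individual frequencies can be measured from the RMS levels of an input sine and the resulting output; the first-order filter is only an assumed stand-in for a real audio unit:

    import numpy as np
    from scipy.signal import lfilter

    sample_rate = 48000
    time = np.arange(sample_rate) / sample_rate

    def gain_db_at(frequency):
        """Gain (dB) of the assumed unit at one test frequency."""
        x = np.sin(2 * np.pi * frequency * time)
        y = lfilter([0.1], [1.0, -0.9], x)                   # hypothetical audio unit
        x, y = x[sample_rate // 2:], y[sample_rate // 2:]    # skip the transient
        return 20 * np.log10(np.sqrt(np.mean(y ** 2)) / np.sqrt(np.mean(x ** 2)))

    magnitude_response = {f: gain_db_at(f) for f in (100, 1000, 10000)}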

 

Frequency response (phase)

Sine oscillation is: Sine(Time) = Amplitude×sin(2π×Frequency×Time+InitialPhase)

We can write it as: Sine(Time) = Amplitude×sin(Phase)

At a given frequency, if at the audio unit's input we have: Sine(Time) = Amplitude×sin(PhaseIn),

at the output we have: Sine(Time) = Amplitude×sin(PhaseOut).

The response shows [PhaseOut-PhaseIn] at each frequency (in a frequency range).
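
A sketch of measuring [PhaseOut-PhaseIn] at one frequency from FFT phases; the test frequency is chosen to sit exactly on an FFT bin, and a pure 12-sample delay stands in for a real audio unit:

    import numpy as np

    sample_rate = 48000
    n = 48000
    k = 1000                                    # FFT bin index -> exactly 1000 Hz
    frequency = k * sample_rate / n
    time = np.arange(n) / sample_rate
    x = np.sin(2 * np.pi * frequency * time)

    y = np.roll(x, 12)                          # hypothetical unit: 12-sample (0.25 ms) delay

    phase_in = np.angle(np.fft.rfft(x)[k])
    phase_out = np.angle(np.fft.rfft(y)[k])
    phase_shift_deg = np.degrees(phase_out - phase_in)   # about -90 degrees at 1 kHz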

 

Amplitude response (input/output magnitude ratio)

It is the core feature when we discuss the linearity of an audio unit.

Actually, it describes the unit's gain.

Ideal linear device's gain:

[Output level] = [Input level]×Gain

where Gain is constant.

In the picture, we can see a flat line. It is the constant gain of a linear audio unit.

 

 

A non-linear unit's gain is described as:

[Output level] = [Input level]×Gain([Input level])

Gain([Input level]) means that the Gain depends on [Input level].

 

Linear gain doesn't cause any distortion. It can only alter the amplitude of the audio signal at the output.
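
A sketch of the difference: for a linear unit the measured gain is the same at every input level, while for a non-linear unit (an assumed tanh soft-clipper here) the gain falls as the input level grows:

    import numpy as np

    time = np.arange(48000) / 48000.0

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    for level in (0.01, 0.1, 0.5, 1.0):
        x = level * np.sin(2 * np.pi * 1000 * time)
        linear_gain = rms(2.0 * x) / rms(x)                  # constant: always 2.0
        nonlinear_gain = rms(np.tanh(2.0 * x)) / rms(x)      # shrinks as the level rises
        print(level, round(linear_gain, 3), round(nonlinear_gain, 3))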

Read below about non-linear distortions...

 

Distortion types

Frequency distortions (magnitude)

An electronic device or audio software passes audio from input to output. We want to assess the device's/software's frequency distortions, that is, distortions of the frequency response at the output.

Example 1:

1. We send a 1 kHz tone to the input with an amplitude of 0 dB (zero decibels). At the output, we get -1 dB (minus one decibel).

2. We send a 2 kHz tone to the input with an amplitude of 0 dB. At the output, we get -1 dB.

For both tones, we have the same sound intensity (level in dB). There are no magnitude distortions of the frequency response.
 

Example 2:

1. We send a 1 kHz tone to the input with an amplitude of 0 dB (zero decibels). At the output, we get -1 dB (minus one decibel).

2. We send a 2 kHz tone to the input with an amplitude of 0 dB. At the output, we get -3 dB (minus three decibels).

We have different sound intensities for the 2 tones. This is magnitude distortion.

 

Frequency distortions (phase)

Together with the magnitude distortions described above, we should consider phase distortions. An audio device or software passes audio from input to output. We can estimate the phase distortions over a frequency range.

Phase is the time delay between input and output. We can convert the time to degrees for an input sine signal. More degrees means more delay.

But let's consider time in the example below.

Example 1:

1. We send a 1 kHz tone to the input with an amplitude of 0 dB (zero decibels). At the output, we get 0 dB too and a time delay of 100 milliseconds (ms).

2. We send a 2 kHz tone to the input with an amplitude of 0 dB. At the output, we get 0 dB too and a time delay of 100 ms.

3. We send a 3 kHz tone to the input with an amplitude of 0 dB. At the output, we get 0 dB too and a time delay of 100 ms.

For all three tones, we have the same time delay. There are no phase distortions over the different frequencies.

 

 

Example 2:

1. We send a 1 kHz tone to the input with an amplitude of 0 dB (zero decibels). At the output, we get 0 dB too and a time delay of 100 milliseconds (ms).

2. We send a 2 kHz tone to the input with an amplitude of 0 dB. At the output, we get 0 dB too and a time delay of 110 ms.

3. We send a 3 kHz tone to the input with an amplitude of 0 dB. At the output, we get 0 dB too and a time delay of 90 ms.

Different time delays at different frequencies mean phase distortions.
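
For reference, a delay converts to phase as Phase(degrees) = 360 × Frequency × Delay. A quick check with the delays of Example 2 (a sketch, using the numbers above):

    # Phase shift in degrees for the delays of Example 2
    for freq_hz, delay_s in ((1000, 0.100), (2000, 0.110), (3000, 0.090)):
        print(freq_hz, 360 * freq_hz * delay_s)   # 36000, 79200, 97200 degrees

Because the phase here does not grow proportionally with frequency, the delay is not constant, which is exactly the phase distortion described above.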

 

Non-linear distortions for 1 sine

When 1 sine (a simple tone) comes to an audio unit's input, at the output we get this sine plus several smaller sines. These smaller sines are called "non-linear distortions".

They occur due to the non-linearity of the input/output signal-level response of the audio unit.

In other words, if the input signal changes gradually at a constant rate, the output signal may change faster or slower, depending on the input level.
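
A sketch of how these extra sines can be seen: pass a 1 kHz sine through an assumed tanh non-linearity and inspect the harmonic bins of its spectrum:

    import numpy as np

    sample_rate = 48000
    n = 48000                                    # 1 Hz per FFT bin
    time = np.arange(n) / sample_rate
    x = 0.8 * np.sin(2 * np.pi * 1000 * time)

    y = np.tanh(x)                               # hypothetical non-linear audio unit

    spectrum = np.abs(np.fft.rfft(y))
    fundamental = spectrum[1000]
    harmonics = [spectrum[k * 1000] for k in range(2, 6)]        # 2, 3, 4, 5 kHz
    # an odd non-linearity like tanh produces mainly odd harmonics (3 and 5 kHz)
    thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental  # total harmonic distortion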

Non-linear distortions for several sines (intermodulations)

When 2 tones (sines) with frequencies F1 and F2 come to the input of an audio device, at the output arise:

these 2 sines with F1 and F2 frequencies,

their harmonics: F1*2, F1*3,...,F1*N, F2*2, F2*3,...,F2*N,

and intermodulations: F1*N + F2*M, and F1*N - F2*M.

For instance: we have 2 ultrasonic frequencies, 24 kHz and 26 kHz. Due to intermodulation, they produce an audible product: 2 kHz = 26 kHz - 24 kHz.
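
A sketch of that example: two ultrasonic tones passed through an assumed second-order non-linearity produce a clear difference product at 2 kHz (the 192 kHz sample rate is assumed so the ultrasonic tones can be represented):

    import numpy as np

    sample_rate = 192000
    n = 192000                                   # 1 Hz per FFT bin
    time = np.arange(n) / sample_rate
    x = 0.4 * np.sin(2 * np.pi * 24000 * time) + 0.4 * np.sin(2 * np.pi * 26000 * time)

    y = x + 0.1 * x ** 2                         # hypothetical non-linear audio unit

    spectrum = np.abs(np.fft.rfft(y)) / n
    print(spectrum[2000])                        # difference product: 26 kHz - 24 kHz = 2 kHz
    print(spectrum[50000])                       # sum product: 26 kHz + 24 kHz = 50 kHz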

Ringing audio

Read in details about ringing audio...

 

Subjective

Reasons of the subjectivity

Subjective personal perception is a degree of "I like it".

As a rule, the perception is defined by:

  • skill;
  • health;
  • mood;
  • listening conditions;
  • etc.

It is pretty hard to normalize the results to a standard universal frame, because we can't measure feelings inside other people. And even our own inner sense may differ for the same objective phenomenon.

EXAMPLE:

If we look at a single visual object, we each see it differently.

A painter might see multiple subtle shades of color on the object's surface.

A person with fine eye health or natural abilities can see tiny letters on the surface.

If we alter the light, we get other impressions.

Etc.

So, when we look at a single object, we are biased according to our personal factors and conditions.

 

How to measure the subjectivity

Currently, only statistical data accumulation and analysis can help to estimate subjective things in numbers.

Read here how to do it correctly...
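
As a sketch of the statistical idea, many listeners' ratings can be averaged into a score with a confidence interval, similar to a Mean Opinion Score; the ratings below are invented purely for illustration:

    import numpy as np

    # Hypothetical 1-to-5 ratings of one audio unit from 12 listeners
    scores = np.array([4, 5, 3, 4, 4, 5, 2, 4, 3, 4, 5, 4], dtype=float)

    mean = scores.mean()
    sem = scores.std(ddof=1) / np.sqrt(len(scores))      # standard error of the mean
    print(f"score = {mean:.2f} +/- {1.96 * sem:.2f} (approx. 95% confidence)")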

 

How measurements are bound with subjective perceptions

Why do we have a difference between objective and subjective sound quality?

Binding measurements to subjective perceptions is the field of psychoacoustics.

This binding may be achieved only via statistical data accumulation and processing. The binding is averaged, as a rule, because, unlike device measurements, human perception (especially across different persons) is more variable due to nature and to inner and outer conditions.

Also, adding some distortions can "improve" perceived sound quality.

"Sound improvement" may happen in analog equipment, which, in the author's view as an engineer, is quite imperfect compared with digital systems, which simply have none of the distortion sources that matter in analog systems.

Read details...

Let's consider pure sound quality.

Suppose we have an acoustic source. Its signal passes through an audio system and comes to our ears.

 

 

Ideally, our ears should get the same acoustic wave that we would receive from the acoustic source itself.

And the more the audio system adds, the worse the sound quality.

 

An analog system that contains analog recording media causes many more distortions than a digital audio system.

Remark: digital and analog systems have common parts (amplifiers, speakers). We consider systems that have the same common parts.

Yet more than one person says that vinyl or tape sounds better than digital with its lower distortion.

 

 

 

Sound engineers know some almost imperceptible tricks for "improving" sound quality:

  • loudness boosting (1-2 dB), overall or within a frequency band;
  • adding the lightest compression;
  • the lightest reverberation;
  • some non-linear distortion (tube or tape emulation);
  • etc.

It is also possible to add the noise and pops of an LP.

This may make the sound more expressive, even when we hear the noise.

If we switch back to the real sound, it will seem boring.

So mechanical media are capable of bringing emotion via specific distortions.

But if you prefer the "less bright" original sound, you prefer digital high-resolution audio.

Author's personal opinion:

Actually, hi-fi equipment plays sound well in almost any case. But if you are used to listening to more natural (closer to the original) sound, very probably you will prefer quality digital audio.

However, it must be remembered that all the distortions and features of both analog and digital audio were discussed in a single dimension only: how to transmit sound from the microphone to the ears.

But the next step, and the next dimension of audio-equipment development, is capturing and reproducing the spatial spreading of waves in a concert hall. Read details...

 

 

Conclusions

1. The concept of "sound quality", unmeasurable at first sight, may be estimated in two ways:

  • by distortion level, raw or weighted according to psychoacoustics;
  • by statistically averaged subjective opinions.

2. We can get better subjective comparison results even when the objective ones are worse. It happens due to "colorization" of the sound caused by some distortions.
However, the "colorization" is not natural sound. So, "unnatural" sound may be preferred over "natural" sound.

Thus, audio quality is located between objective numbers and subjective senses.

3. We can note that a live concert and its recording sound different.
It is so because the sound quality of audio apparatus has 2 aspects:

  • record distortions and
  • spatial distortions.

The record distortions are almost solved now, especially with digital technologies.

Spatial distortions still have unsolved issues of capturing and playback. They may be partially solved via headphones and special double-microphone recording (binaural recording).