Converting 32-bit float or 24-bit integer samples to 16-bit integer causes some harmonic distortion.
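The conversion step itself can be sketched in a few lines of Python/NumPy. This is only an illustration under an assumed setup (plain rounding to the nearest 16-bit step, no dither; the helper name float_to_int16 is hypothetical, not taken from any particular converter):

```python
import numpy as np

def float_to_int16(x: np.ndarray) -> np.ndarray:
    """Quantize float samples in [-1.0, 1.0] to 16-bit integers (no dither)."""
    scaled = x * 32767.0                              # map full scale to the int16 range
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)
```

The rounding error introduced here (at most half of one 16-bit step) is the source of the distortion discussed below.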
A pure source signal (44.1 kHz, 32-bit float or 24-bit integer; sine wave at 4410 or 4430 Hz):
The same signal saved to 16-bit integer format:
The 4410 Hz sine wave has no noise, but it has a 3rd harmonic at a significant level (-114 dB).
The 4430 Hz sine wave has noise at a level below -133 dB.
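These measurements can be reproduced with a short script. The sketch below assumes conditions not stated above (1 second of signal, amplitude 0.9, plain rounding without dither, a Hann-windowed FFT), so the absolute levels will differ somewhat from the plots, but the contrast between the two frequencies is the same:

```python
import numpy as np

fs = 44100                                    # sample rate, Hz
t = np.arange(fs) / fs                        # 1 second of samples
win = np.hanning(fs)

for f in (4410, 4430):
    x = 0.9 * np.sin(2 * np.pi * f * t)       # "32-bit float" source tone
    q = np.round(x * 32767) / 32767           # quantize to 16-bit steps (no dither)
    err = q - x                                # quantization error alone
    ref = np.abs(np.fft.rfft(x * win)).max()  # spectral peak of the tone itself
    err_spec = np.abs(np.fft.rfft(err * win))
    peak_db = 20 * np.log10(err_spec.max() / ref)
    print(f"{f} Hz: strongest error component ~{peak_db:.0f} dB below the tone")
```

For 4410 Hz (exactly 44100 / 10) the quantization error repeats every 10 samples, so its energy collects into a few discrete harmonics; for 4430 Hz the error pattern is far longer and its energy spreads across the spectrum like noise.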
This is the result of converting 4000 Hz and 4030 Hz sine waves (192 kHz sample rate) from 32-bit float to 16-bit integer:
Here we see an input signal at 4010 Hz (without conversion or other processing) in 16-bit / 44100 Hz:
A 3rd harmonic appears at a level of -118 dB.
Here we see an input signal at 4030 Hz in 16-bit / 44100 Hz:
There are no separate harmonics; instead, noise similar to white noise appears.
For 32-bit float to 16-bit integer conversion:
1) If the sampling rate is an integer multiple of the sine-wave frequency, the harmonic distortion appears as pure sine waves (discrete harmonics), while the noise floor remains as low as that of the HD formats (very low).
2) Otherwise, the non-linear distortion is spread uniformly across all frequencies, like white noise, at a low level (see the sketch after this list).
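The two cases can be turned into a rough rule-of-thumb check. This is only a sketch of the rule as stated; frequencies that are merely close to an exact submultiple of the sampling rate (like the 4010 Hz example above) can still show partly harmonic behavior:

```python
def distortion_kind(freq_hz: int, fs_hz: int) -> str:
    """Predict the character of undithered 16-bit quantization error for a sine tone."""
    # Exact submultiple of the sample rate -> periodic error -> discrete harmonics.
    return "discrete harmonics" if fs_hz % freq_hz == 0 else "noise-like"

print(distortion_kind(4410, 44100))    # discrete harmonics
print(distortion_kind(4430, 44100))    # noise-like
print(distortion_kind(4000, 192000))   # discrete harmonics
print(distortion_kind(4030, 192000))   # noise-like
```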