Hi
I'm teaching a course in electronic music, and one of the subjects I'd
like to cover is what lossy compression does to the music. I googled a bit and
found this:
http://www.youtube.com/watch?v=u5gdwpPrv_8
Basically it's a matter of loading the original and the mp3-encoded
version of the same track, inverting the phase of one of the two clips
and listening to the artifacts.
I did some tests, and the results are scary. Lame (128 kbps) and oggenc
(q=3) sound different, but both are horrible: the artifacts are very ugly
and distorted, and they peak as loud as -19 dB! My favorite mp3 encoder,
gogo, gives another strange result: the artifacts sound almost like the
mp3 itself, which should mean it changes the audio much more! However, I
don't really hear much difference between the lame- and gogo-encoded files...
This got me wondering whether this is even a realistic test. Assume for
instance that the encoder introduces a simple, constant delay in the
encoded audio. That delay will let a lot of sound slip through the
invert-the-phase-of-one-of-the-signals test, so one could say it alters
the audio dramatically when the files are compared sample for sample, yet
it has no impact at all on the perceived quality of the encoded audio. I
didn't fiddle with delaying the gogo-encoded file, though.
My question is: is this really a fair way to judge the artifacts
introduced by encoding?
--
Atte
http://atte.dk http://modlys.dk