Re: jpg

FYI: if you want to see the difference between your camera's jpeg and, say, a 16-bit tiff from raw, paste the jpeg as a layer over your raw conversion file in Photoshop and set the blend mode to Difference.  Pure black = no difference.
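The Difference blend mode is just a per-pixel absolute difference; a minimal sketch in Python of what Photoshop computes (synthetic pixel tuples standing in for real decoded images):

```python
# Difference blend: result = |top - bottom|, per channel.
# Wherever the two layers match exactly, the result is 0 - pure black.

def difference_blend(top, bottom):
    """Blend two images given as lists of (R, G, B) tuples."""
    return [tuple(abs(t - b) for t, b in zip(tp, bp))
            for tp, bp in zip(top, bottom)]

# Hypothetical two-pixel layers: an identical pixel, then a slightly
# shifted one (the kind of small error jpeg quantisation introduces).
raw_tiff = [(200, 180, 150), (64, 64, 64)]
cam_jpeg = [(200, 180, 150), (66, 63, 64)]

result = difference_blend(cam_jpeg, raw_tiff)
print(result[0])  # (0, 0, 0) - pure black, no difference
```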

Randy S. Little
http://www.rslittle.com/
http://www.imdb.com/name/nm2325729/




On Tue, Aug 27, 2013 at 10:13 AM, karl shah-jenner <shahjen@xxxxxxxxxxxx> wrote:
Emily wrote:

My understanding is that jpg compression achieves smaller filesize by discarding data.  I've never heard of exceptions to that at any level of jpegging.


this is true but it's not as drastic as it sounds - think of it this way: a jpeg is actually a bitmap in a special compression format, not that different from, say, a zip.  A zip, however, is quite constrained in what it can and cannot discard.. essentially it contains every scrap of data in the original file; it just has its own algorithm for repacking less efficient containers.  A .doc may well be instantly recognisable to a word processing package, but the way it was packed often contains redundant, repetitious junk that zip files handle better, storing the same data in less space.

Imagine you have a bitmap that's thousands of pixels square and is totally white - or any other single colour.  A bitmap file in .bmp format will store the colour data of each and every pixel as well as its location - i.e. it stores the data as I AM A BITMAP : 255,255,255 x=1,y=1 255,255,255 x=2,y=1 ... etc. for the total number of pixels - say 9,000,000 pixels (this is not precisely how it's stored, but it's a good way to explain it).  A jpeg will not; it will store the information as I AM A JPG : 255,255,255 (x=1-3000, y=1-3000) - and that will be the sum total stored by comparison, just one tiny formula describing all the redundancy.
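The redundancy argument is easy to demonstrate with any general-purpose compressor.  A sketch using Python's built-in zlib (standing in loosely for an image format's entropy coder - real jpeg uses DCT plus Huffman coding, not zlib):

```python
import zlib

# A hypothetical 3000 x 3000 all-white greyscale "bitmap":
# one byte per pixel, every byte identical.
width = height = 3000
raw = b"\xff" * (width * height)  # 9,000,000 bytes

packed = zlib.compress(raw, level=9)

print(len(raw))     # 9000000
print(len(packed))  # tiny by comparison - the redundancy collapses
```

The compressed output is a minute fraction of the raw size, because a run of identical pixels carries almost no information.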

So yes, it's discarding and compressing information, but at full quality barely any actual data is lost - it's just not as pedantic about storing redundant information.  Then there are the levels of quality, where the operator decides whether all that information is actually needed.  I may decide that 255 shades of cyan in the sky is excessive, and at 70% quality (compression), with a certain number of tones merged together, my eye can see no loss of information - so yes, that extra data is discarded.  Go too far and you see the dreaded banding - bad.
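Merging nearby tones is just coarser quantisation.  A rough sketch (plain value rounding, not the DCT-domain quantisation jpeg actually performs) of why a gentle step is invisible while a coarse one bands:

```python
def quantise(values, step):
    """Snap each 8-bit value to the nearest multiple of `step`."""
    return [min(255, round(v / step) * step) for v in values]

# A smooth sky gradient: 256 distinct shades.
sky = list(range(256))

gentle = quantise(sky, 2)   # most shades survive - loss is invisible
coarse = quantise(sky, 32)  # only a handful survive - visible banding

print(len(set(gentle)), len(set(coarse)))
```

With the gentle step the error per pixel is at most one level; with the coarse step whole bands of sky collapse onto the same value.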

Of course, when it comes to manipulating jpegs there's the other story - that all actions on jpegs are lossy.  This is *generally* true, but it's also not the whole picture.  When a jpeg is decompressed (into its bitmap) it can be resaved as a bitmap, worked on as a bitmap, and saved once as a jpeg at the end without repeated losses - however, people don't do that; they open them and start fiddling in Photoshop.  Rotate it right, rotate it back, re-save: you just lost data.  Invert it and switch it back again: you lost or added more.  Each edit-and-resave cycle changes the image and causes losses - but that's not the fault or function of the jpeg, it's PHOTOSHOP doing that.  Big difference.  Photoshop is incapable of manipulating jpegs losslessly (maybe newer versions can; I doubt it, as PS has always been a bit ancient and clunky under the hood).  But to blame the jpeg would be like blaming a Ferrari and calling it junk because you tried to drive it down a stony embankment.
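That edit-and-resave drift can be sketched with a toy "lossy save" (plain value quantisation standing in for jpeg's DCT quantisation): an edit and its exact inverse, with a lossy save in between, does not return the original pixel.

```python
def lossy_save(value, step=16):
    """Toy stand-in for a lossy re-encode: snap to the nearest multiple of step."""
    return min(255, round(value / step) * step)

pixel = 130                      # original value
saved = lossy_save(pixel)        # first save already shifts it to 128
brighter = min(255, saved + 10)  # edit: brighten
saved2 = lossy_save(brighter)    # re-save: quantised again -> 144
back = max(0, saved2 - 10)       # edit: exact inverse of the brighten
final = lossy_save(back)         # re-save once more

print(pixel, final)  # 130 128 - the round trip never comes back
```

Each re-encode snaps the edited values onto a new grid, so even a perfectly reversed edit accumulates error - exactly the rotate-right, rotate-back loss described above.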

Other programs, such as IrfanView, are capable of lossless jpeg operations (rotation by multiples of 90 degrees, cropping on block boundaries), so to that extent jpeg manipulations can be performed losslessly.



You can confirm that by examining a file at the pixel level, but why argue?  If you wish to retain all the data the camera collected, just don't jpg.  If you feel you don't need all the data, the issue is moot.

We do not really know what goes on inside the analogue/digital converters in cameras, nor do we know what algorithms are employed to record (or pad) image data, so it's actually hard to say what trickery or honesty is taking place in recording our image.  We know for a fact that algorithms are employed to rectify (fiddle with and hide) chromatic aberrations in lenses.. we know and can prove algorithms are employed in pattern recognition to work around the Nyquist limit, removing jaggies and producing resolving powers that are beyond the actual physical capabilities of the lenses and sensors.. we know the effects of the Bayer arrays and antialiasing filters are corrected in-camera by yet more clever algorithms - so who's to say how well the algorithms are performing at colour recognition?  I know cameras with the same sensors can produce quite different colour ranges from standard targets - this suggests not all algorithms are created equal.

But yes, Em, you're right - if jpeg doesn't work, use something else, if it does - more space on the card to shoot ;)



My pro shooter's opinion was that the publisher was going to be receiving a jpegged file from him anyway, and that magazine-quality print wouldn't be able to reproduce the minutiae of uncompressed data, so wotthehell.

same as the ol' E6 days - those deskjockeys like it simple ;)

I loved neg film for its latitude but had to swallow my annoyance and shoot E6 when it was demanded.  I love jpeg for its straight-from-the-camera printability.  But then I also still love shooting large format, so eh.. I'm weird.

