8-bit to 16-bit precision change errors

I noticed an error in the resulting values when changing image
precision from 8-bit integer to 16-bit integer.

For example, in a ten-block 8-bit RGB test image, the block with the
linear gamma sRGB 8-bit values of (1,1,1) should be (257,257,257) upon
changing the precision to 16-bit integer. But instead, the Gimp 16-bit
integer values are (258,258,258). Similar errors occur with 8-bit
values of (2,2,2), (4,4,4), (8,8,8), and so on up through (32,32,32);
(64,64,64) and above are accurate.
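
For reference, here is the arithmetic I'm assuming for an exact 8-bit
to 16-bit integer promotion (a minimal sketch of my own, not Gimp's
actual conversion code): multiply each channel value by 257, since
65535 / 255 = 257, so that 0 maps to 0 and 255 maps to 65535:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical illustration only -- not Gimp's actual conversion code.
 * Exact promotion of an 8-bit channel value to 16 bits: multiply by
 * 257, since 65535 / 255 = 257, so 0 -> 0, 1 -> 257, 255 -> 65535. */
static uint16_t
promote_8_to_16 (uint8_t v8)
{
  return (uint16_t) (v8 * 257);
}

int
main (void)
{
  uint8_t samples[] = { 1, 2, 4, 8, 16, 32, 64, 128, 255 };
  for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++)
    printf ("8-bit %3d -> 16-bit %5d\n",
            samples[i], promote_8_to_16 (samples[i]));
  return 0;
}

Compiled and run, that prints 1 -> 257, 2 -> 514, and so on, which
matches the expected values in the spreadsheet linked below.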

Gimp 32-bit floating point and 32-bit integer values are error-free
upon changing the precision from 8 bits and then exporting a 16-bit
png (so errors might be hidden by the reduction in bit depth upon
export). The (128,128,128) block is off by 3 when changing the
precision to 16-bit floating point.

If anyone is interested, the test image and a spreadsheet with the
correct values and formulas can be found here:

http://ninedegreesbelow.com/temp/gimp-lcms-4.html#precision

"Round-tripping" back to 8-bit values gets you back where you started.
But that is an artifact of collapsing 256 "steps" in the 16-bit image
back to 8 bits. The values could by off by as much as 127 in either
direction in the 16-bit image, and still "collapse" back to the
correct 8-bit value.
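
To make that concrete, here is another small sketch (again my own
illustration, assuming round-to-nearest for the 16-to-8-bit
conversion, which may not be exactly what Gimp does), showing that
both the correct 257 and Gimp's 258 collapse back to an 8-bit value
of 1:

#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Hypothetical illustration only, assuming round-to-nearest when
 * going back down to 8 bits (Gimp may do this differently). Any
 * 16-bit value from roughly 129 to 385 rounds back to an 8-bit value
 * of 1, so the off-by-one 258 is invisible after the round trip. */
static uint8_t
demote_16_to_8 (uint16_t v16)
{
  return (uint8_t) lround (v16 * 255.0 / 65535.0);
}

int
main (void)
{
  printf ("257 -> %d\n", demote_16_to_8 (257));  /* the correct value    */
  printf ("258 -> %d\n", demote_16_to_8 (258));  /* the value Gimp gives */
  printf ("130 -> %d\n", demote_16_to_8 (130));  /* low end of the range */
  printf ("384 -> %d\n", demote_16_to_8 (384));  /* high end of the range */
  return 0;
}

All four print 1, which is why the round trip back to 8 bits cannot
be used to check the 16-bit conversion.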

Elle
-- 
http://ninedegreesbelow.com
Articles and tutorials on open source digital imaging and photography