On 30.08.2012 13:55, Elle Stone wrote:
So if I understand what you are saying (I don't think I do):
First the lcms plugin converts the image to the actual monitor display profile.
Then "something" converts the image to sRGB and sends the image to Cairo?
And then Cairo sends the image to the screen?
If I'm not entirely mistaken, there should be only one real
conversion. In that step lcms would convert the linear color to a pseudo
sRGB, which is actually the monitor display profile. This is because
Cairo only supports sRGB and does no conversion to the monitor profile
on its own. So the colors have to be converted before passing them
to Cairo (regardless of whether sRGB or RGB30 is used).
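To make that concrete, something along these lines is what I picture for
the final output step (only a rough sketch, not actual GIMP code; the
profile filename, the buffer layout and the function name are made up):

/* Convert a linear-light float buffer into the monitor profile with
 * lcms2, then hand the result to Cairo, which treats whatever it
 * receives as "sRGB" and does no conversion of its own. */
#include <stdlib.h>
#include <lcms2.h>
#include <cairo.h>

cairo_surface_t *
render_for_screen (const float *linear_rgb,   /* TYPE_RGB_FLT, w*h pixels */
                   int          width,
                   int          height)
{
  /* Placeholder filenames; in reality these come from the image and
   * from the display configuration. */
  cmsHPROFILE   working = cmsOpenProfileFromFile ("linear-working-space.icc", "r");
  cmsHPROFILE   monitor = cmsOpenProfileFromFile ("monitor.icc", "r");

  /* Output straight into Cairo's RGB24 memory layout (BGRx on
   * little-endian machines; big-endian would need a different
   * lcms pixel format). */
  cmsHTRANSFORM xform   = cmsCreateTransform (working, TYPE_RGB_FLT,
                                              monitor, TYPE_BGRA_8,
                                              INTENT_RELATIVE_COLORIMETRIC, 0);

  int            stride = cairo_format_stride_for_width (CAIRO_FORMAT_RGB24, width);
  unsigned char *pixels = malloc ((size_t) stride * height);

  for (int y = 0; y < height; y++)
    cmsDoTransform (xform,
                    linear_rgb + (size_t) y * width * 3,
                    pixels + (size_t) y * stride,
                    width);

  cmsDeleteTransform (xform);
  cmsCloseProfile (monitor);
  cmsCloseProfile (working);

  /* Cairo never sees the monitor profile; the conversion already
   * happened before this point. */
  return cairo_image_surface_create_for_data (pixels, CAIRO_FORMAT_RGB24,
                                              width, height, stride);
}

The point being that Cairo just blits the already-converted buffer and
never needs to know anything about the monitor profile itself.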
Bit depth and ICC profile color gamut are two different things. Bit
depth determines how many steps there are from min to max. For example,
8 bits gives you 255 steps to get from solid green (0,255,0) to solid
yellow (255,255,0). 10 bits gives you 1023 steps to cross the same
distance. But the "meaning" of solid green and solid yellow is
determined by where the monitor profile (or any other ICC profile)
locates solid green and solid yellow in a reference space (profile
connection space) such as XYZ or Lab.
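A tiny self-contained illustration of that last point (using lcms2
directly; the printed numbers are simply whatever the built-in sRGB
profile yields, I'm not quoting them here):

/* The same "solid green" triple only acquires a colorimetric meaning
 * once a profile maps it into the profile connection space. Swapping
 * in a wide-gamut monitor profile would land the identical (0,255,0)
 * numbers on different XYZ coordinates. */
#include <stdio.h>
#include <lcms2.h>

int
main (void)
{
  cmsHPROFILE   srgb = cmsCreate_sRGBProfile ();
  cmsHPROFILE   xyz  = cmsCreateXYZProfile ();
  cmsHTRANSFORM t    = cmsCreateTransform (srgb, TYPE_RGB_8,
                                           xyz,  TYPE_XYZ_DBL,
                                           INTENT_RELATIVE_COLORIMETRIC, 0);
  unsigned char green[3] = { 0, 255, 0 };
  cmsCIEXYZ     out;

  cmsDoTransform (t, green, &out, 1);
  printf ("sRGB (0,255,0) -> XYZ %.4f %.4f %.4f\n", out.X, out.Y, out.Z);

  cmsDeleteTransform (t);
  cmsCloseProfile (srgb);
  cmsCloseProfile (xyz);
  return 0;
}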
Kind regards,
Elle
Right. I do not really understand why the conversions should be so
painful. Ideally all image data (except alpha) should be handled as
linear internally. This has effectively nothing to do with color
management (only depth conversion), except that we want to give every
channel/layer its own profile to avoid rounding errors at low bit depths.
Rounding errors would occur if we had an 8-bit sRGB image and converted
it to 8-bit linear RGB (see the sketch after option B below). So we have
a dilemma:
A) Storing everything as 32-bit float linear RGB would dramatically
decrease programming overhead and computation time (no color conversion,
except for final output), but it would consume a great amount of RAM for
images that really only need 8 or 16 bits per channel.
B) Leaving the values as they are and doing conversions every time a
pixel is accessed saves a lot of RAM. The downside is that every pixel
has to be converted from the "channel" profile and depth to another
profile and depth whenever an operation touches it. This could be sped
up drastically with specialized methods that take the short path for
common cases, but I doubt it would be beneficial to implement all
permutations.
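Here is the rounding problem mentioned above made concrete (plain
arithmetic with the standard sRGB decoding formula, nothing
GIMP-specific): converting 8-bit sRGB to 8-bit linear crushes the dark
end, because the first dozen or so sRGB codes all collapse onto the same
couple of linear codes.

#include <stdio.h>
#include <math.h>

/* Standard sRGB electro-optical transfer function, v in 0..1. */
static double
srgb_to_linear (double v)
{
  return (v <= 0.04045) ? v / 12.92
                        : pow ((v + 0.055) / 1.055, 2.4);
}

int
main (void)
{
  int used[256] = { 0 };

  for (int code = 0; code < 256; code++)
    {
      int lin8 = (int) floor (srgb_to_linear (code / 255.0) * 255.0 + 0.5);

      used[lin8] = 1;
      if (code <= 12)
        printf ("sRGB %3d -> linear 8-bit %3d\n", code, lin8);
    }

  int distinct = 0;
  for (int i = 0; i < 256; i++)
    distinct += used[i];

  printf ("only %d of 256 linear codes are ever hit\n", distinct);
  return 0;
}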
Personally I would favor scheme A) since performance is one of my
biggest concerns for GIMP right now. RAM is important, but it doesn't
really matter as much. These constant conversions from one color space
to another are a real performance killer.
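For a rough sense of scale (my own back-of-the-envelope figure, not
something measured in GIMP): a 12-megapixel RGBA layer stored as 32-bit
float is about 12,000,000 x 4 channels x 4 bytes ≈ 192 MB, versus about
48 MB at 8 bits per channel. So scheme A roughly quadruples layer memory
in exchange for dropping the per-pixel conversions.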
Option C) would be a cache on top of option B) that keeps the original
image data as it is (unconverted, original bit depth) but additionally
stores the pixel information as linear 32-bit RGB. It would not store
the whole layer, just what is "on screen", already resized, transformed,
etc., but not flattened. That way only one layer must be converted
(cache->layer) while drawing.
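Very roughly, I imagine something like the following per layer (all
names invented for the sake of the sketch, none of this exists in GIMP):

#include <stdlib.h>

typedef struct
{
  int    width, height;   /* on-screen size, not the layer's full size   */
  float *rgba_linear;     /* width * height * 4 floats, linear light     */
  int    valid;           /* cleared whenever the source layer is edited */
} ScreenCache;

static ScreenCache *
screen_cache_new (int width, int height)
{
  ScreenCache *cache = calloc (1, sizeof (ScreenCache));

  cache->width       = width;
  cache->height      = height;
  cache->rgba_linear = calloc ((size_t) width * height * 4, sizeof (float));
  return cache;
}

/* Return the cached linear pixels, rebuilding them from the original
 * (unconverted, original-bit-depth) layer data only when necessary. */
static const float *
screen_cache_get (ScreenCache *cache,
                  void       (*rebuild) (float *dst, int w, int h, void *layer),
                  void        *layer)
{
  if (! cache->valid)
    {
      rebuild (cache->rgba_linear, cache->width, cache->height, layer);
      cache->valid = 1;
    }
  return cache->rgba_linear;
}

/* Called from whatever edits the layer. */
static void
screen_cache_invalidate (ScreenCache *cache)
{
  cache->valid = 0;
}

The cache would be invalidated whenever the layer is edited, so only the
visible, display-sized pixels ever get converted to linear float.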
Kind regards,
Tobias Oelgarte