Simon Budig wrote:
> Sorry, but I don't believe that this distinction would make sense.
> From my point of view, "transparency/opacity" and "coverage" are two
> models to explain what happens when talking about alpha. I do know
> that the original Porter-Duff paper based its conclusions on the
> coverage model; however, the transparency analogy comes closer to
> what happens when gimp is building its projection of the image.
The distinction is only important when deciding what to do when color
information goes to zero. Coverage says it goes away; transparency says
it stays. Also, alpha is the model; transparency and coverage are the
real (as in reality) things. Though I suppose that depends on whether
you feel that art imitates life or that life imitates art (and I am not
poking fun here, it's an interesting philosophical debate).
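
A rough sketch of that difference (the names here are made up for
illustration; this is not existing gimp code):

typedef struct { float r, g, b, a; } Pixel;

/* Coverage reading: alpha = 0 means none of the pixel is covered, so
 * the color carries no information and may as well be discarded (which
 * is effectively what premultiplied storage does). */
static void
erase_coverage (Pixel *p)
{
  p->a = 0.0f;
  p->r = p->g = p->b = 0.0f;   /* the color goes away with the coverage */
}

/* Transparency reading: alpha = 0 means the surface is fully clear,
 * but it is still there, so the color stays and can be recovered by
 * raising alpha again later. */
static void
erase_transparency (Pixel *p)
{
  p->a = 0.0f;                 /* the color is left untouched */
}

That is exactly the "goes away" vs. "stays" split; everything else
about the two models agrees.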
For "proper" coverage handling you'd have to store the information
*what* part of the pixel has been covered. Better leave that to a vector
based implementation. The coverage model also fails to model a flat area
of e.g. 50% opacity (a surface with a small hole in each pixel...).
Yes indeed. Alpha as a measure of coverage is an approximation. The
core blending ops derive directly as an extension of this approximation.
Since alpha doesn't declare how a pixel is covered, when two pixels
overlap you can describe how they overlap in one of five ways, listed
in a chart on page 838 of "Computer Graphics" (2nd ed., Foley et al.).
But as I said above, I think the difference is only vital when you have
to decide what happens at zero.
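
For reference, the usual "over" operator falls straight out of that
approximation: treat alpha as the covered fraction, assume nothing
about where the coverage lies, and the foreground contributes all of
its coverage while the background contributes only what the foreground
leaves uncovered. A small sketch (premultiplied color, made-up names,
not gimp code):

typedef struct { float r, g, b, a; } PixelPre;   /* premultiplied */

static PixelPre
composite_over (PixelPre fg, PixelPre bg)
{
  float    k = 1.0f - fg.a;    /* background fraction left visible */
  PixelPre out;

  out.r = fg.r + bg.r * k;
  out.g = fg.g + bg.g * k;
  out.b = fg.b + bg.b * k;
  out.a = fg.a + bg.a * k;
  return out;
}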
> This would mean that instead of an alpha channel and a layer mask, we
> should have a coverage channel and a transparency channel (giving
> rise to RGBCT colorspaces).
In this sort of situation, the full measurement of the pixel includes
all five numbers, and any algorithm that affects pixels would have to
take all five into account (just as any operation now must account for
all four existing pixel measurement numbers). Incidentally, alpha, in
the way it has been used, would be C*T.
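
A sketch of what such a five-number pixel might look like (purely
hypothetical, not anything that exists in gimp or gegl; I am reading T
here as the opacity of the covered part, so that the C*T relation above
holds):

typedef struct
{
  float r, g, b;   /* color of the covered part of the pixel          */
  float c;         /* coverage: fraction of the pixel that is covered */
  float t;         /* opacity of that covered part                    */
} PixelRGBCT;

/* Collapsing back to the four-channel model: the alpha everyone uses
 * today is just the product of the two. */
static float
effective_alpha (const PixelRGBCT *p)
{
  return p->c * p->t;
}

Operations that only care about the combined result can keep working on
effective_alpha; the extra information matters only when coverage and
transparency need to be treated differently.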
> I fail to see what would be the win in this situation. All algorithms
> would have to be revised, and I really doubt that this separation
> would make the algorithms simpler. E.g. blur: it is OK to blur the
> opacity channel, but blurring the coverage channel does not make
> sense, because it does not fit the model of partially covered pixels.
> What should we do? And how would we present unexpected results to the
> user?
It is only a small change to the algorithms (if anyone wants, I can
work out what I think are reasonable models, do the math and stick 'em
on the list; I have already done some of it anyway). And I would think
that blur would apply to a partial pixel and ignore opacity (depending
on just how you modeled opacity). The impulse to the blur would be
smaller, as discussed earlier. Including the alpha is the correct way
to blur.
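
A rough sketch of that kind of alpha-weighted blur (a 1-D box blur over
straight-alpha pixels; illustrative only, not gimp code). Each
neighbour's color is weighted by its alpha, i.e. the averaging runs on
premultiplied values, so a barely covered pixel contributes a
correspondingly small impulse:

typedef struct { float r, g, b, a; } Px;   /* straight alpha */

static void
box_blur_row (const Px *src, Px *dst, int width, int radius)
{
  for (int x = 0; x < width; x++)
    {
      float r = 0, g = 0, b = 0, a = 0;
      int   n = 0;

      for (int k = -radius; k <= radius; k++)
        {
          int i = x + k;
          if (i < 0 || i >= width)
            continue;

          r += src[i].r * src[i].a;   /* alpha-weighted color */
          g += src[i].g * src[i].a;
          b += src[i].b * src[i].a;
          a += src[i].a;
          n++;
        }

      dst[x].a = (n > 0) ? a / n : 0.0f;   /* averaged alpha */
      if (a > 0.0f)
        {
          dst[x].r = r / a;              /* back to straight alpha */
          dst[x].g = g / a;
          dst[x].b = b / a;
        }
      else
        dst[x].r = dst[x].g = dst[x].b = 0.0f;
    }
}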
The win in this situation would be greater flexibility in deciding when
it is appropriate to discard color information; also, a more complex
model would allow for more complex results. AFAIK, this separation
between coverage and transparency has never been modeled before in a
real application, so I cannot provide any research or data about how
useful it would be. I can only go with what I have managed to work out
myself, and my gut feeling. My gut feeling tells me this might be
useful.
> And where would be the win for the added memory requirements, more
> complicated algorithms and users looking blankly at the screen and
> trying to figure out what is going on?
Users will figure it out. Since no one has ever tried to work with this
before, it is hard to say what uses people will come up with. (I mean,
really, what would anyone use a laser for?) Besides, I am suggesting
that if they don't want to work in RGBCT they can always work in RGBA.
The added memory requirements allow for more complex results.
> That said, I could see some use for additional channels in the image.
> Normal vectors or glossiness are an exciting idea, especially when
> using them for generating textures. It also would be cool to have
> spot-color channels in the image, so e.g. distort plugins would
> distort the image + spot-color information together and you don't
> have to apply the same plugin multiple times in the very same manner
> on multiple drawables. It would be nice if these things were possible.
Agreed. I will try to see about incorporating these extra channels into
gegl (not necessarily C and T from above, but the others certainly).
--
Dan