Re: [Gimp-developer] Re: GIMP preview widget (was tentative 2.2 feature list)

On Thu, 19 Feb 2004 16:45:45 +0100
Dave Neary <dneary@xxxxxxx> wrote:

> Hi,
> 
> Ernst Lippe wrote:
> > Dave Neary <dneary@xxxxxxx> wrote:
> >>1) if the original drawable is zoomed out, do a messy scale of the original to 
> >>give the new drawable (accuracy isn't *hugely* important in general in this 
> >>case), then take the tiles containing the part of that drawable that will be 
> >>visible in the viewport, and generate a drawable from them, and apply the 
> >>transform to that.
> >>
> >>2) If the original drawable is at 100% or greater, then calculate the pixels 
> >>that will be in the viewport, take the tiles containing that bounding box, 
> >>generate a drawable from them, apply the filter, then expand the result as 
> >>necessary to bring it up to the scale ratio.
> 
> >>How does that sound?
> > 
> > First of all you need a larger input area, otherwise the image near the
> > edges of the preview is incorrect.
> 
> That's a minor implementation issue - we can take the drawable used to 
> generate the preview to be the viewport +/- some arbitrary number of 
> pixels, or perhaps take 1 tile more than we need in the horizontal and 
> vertical direction.

For the current preview it is even a non-issue; it only becomes
relevant when you expect the preview to hand the plug-in an already
scaled image. Even then, I think you should think very carefully
about the margins. I don't like the idea of fixed margins, because
then you are taking a design decision in a place where it does not
belong: it obviously belongs in the plug-in and not in the preview.
How do you handle the case where part of the margin falls outside the
drawable? The "normal" solution would be to supply zeroes for these
areas, but several plug-in algorithms are convolutions, and those
usually behave badly when there are too many zeroes around. I think
that in any case the preview should always give the plug-in the
absolute image coordinates of the area that must be rendered; several
plug-ins need this information (most "warping" plug-ins do). Wouldn't
it be confusing to implementors when the area they are supposed to
render is different from the input area?
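
To make the margin problem concrete, here is a rough sketch in C of
what the clamping would look like (the Rect type and the function are
made up for illustration; "margin" is whatever the plug-in reported
it needs, e.g. a blur radius, because the preview itself cannot know
that value):

  #include <glib.h>

  typedef struct { gint x, y, width, height; } Rect;

  /* Clamp the requested input area, margin included, to the
   * drawable bounds instead of padding with zeroes, which
   * misbehaves for convolution kernels. */
  static Rect
  clamp_input_area (Rect area, gint margin,
                    gint drawable_w, gint drawable_h)
  {
    gint x1 = MAX (area.x - margin, 0);
    gint y1 = MAX (area.y - margin, 0);
    gint x2 = MIN (area.x + area.width  + margin, drawable_w);
    gint y2 = MIN (area.y + area.height + margin, drawable_h);
    Rect r  = { x1, y1, x2 - x1, y2 - y1 };

    return r;
  }

Note that even this only works when the plug-in can live with a
smaller margin near the drawable edges; the point is that only the
plug-in can make that call.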


> > part of the preview is "close to the edge". But the actual size of the
> > margin depends on the blur radius, so when you want the preview to
> > provide the scaled data, there should also be some mechanism to tell
> > the preview how large this extra margin should be.
> 
> This assumes that the preview should be precise. One of the merits of 
> the preview, though, is that it is an impression of the effect and 
> renders quickly - quick and dirty should be OK. Of course, there's a 
> compromise to be made in there. But I don't think plug-in previews need 
> to be 100% exact.

This is a decision for the plug-in maker, but I believe that the
preview should be as accurate as possible. That is probably a bias
from my background: my main plug-in does some pretty slow
computations and therefore badly needs a preview. I really hate
discovering, after a long period of waiting, that I chose the wrong
parameters because of a defect in the preview process.

In some cases it may be a valid decision; I am just arguing that it
should not be the "default" decision taken without further analysis,
because the implicit assumption that "users will never see the
difference" is in general wrong.

> > Yes, but that is something that the plug-in algorithm should do,
> > because it is the only place where you can determine what inputs are
> > needed to generate a specific output area.  Think for example of some
> > whirl plug-in: to compute a given output area it will only need a
> > subpart of the original image, but it can be very difficult to
> > determine what part is really needed. So it is the responsibility of
> > the plug-in algorithm to compute only a specific output area.
> 
> Good point. But shouldn't the preview widget cater to the most common 
> case, while allowing the plug-in to address the less common case? I 
> would prefer to see all convolution-based plug-ins (that are essentially 
> local) and render plug-ins (where the result is entirely predefined by a 
> seed) to have a nice easy way of generating a preview that consisted of 
> more or less 1 or 2 function calls, and have a more complicated API to 
> allow things like whorl and the like to calculate their effects using a 
> preview widget, with some more work.

Yes, but the most general solution is simply to let the plug-in work
on unscaled data and leave the scaling to the preview. This works for
all plug-ins. Looked at this way, a plug-in algorithm that is
scale-independent is only a special case.
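
In code, the division of labour I am arguing for would look roughly
like this (all of the preview_* and plugin_* names here are
hypothetical; this is not an existing API):

  #include <glib.h>

  typedef struct { gint x, y, width, height; } Rect;
  typedef struct _MyPreview MyPreview;   /* hypothetical preview */
  typedef struct _MyPlugin  MyPlugin;    /* hypothetical plug-in */

  void    preview_get_render_area (MyPreview *p, Rect *area);
  guchar *plugin_render_region    (MyPlugin *pl, const Rect *area);
  void    preview_draw_scaled     (MyPreview *p, const guchar *buf,
                                   const Rect *area);

  static void
  update_preview (MyPreview *preview, MyPlugin *plugin)
  {
    Rect    area;
    guchar *unscaled;

    /* Absolute image coordinates of the region to render. */
    preview_get_render_area (preview, &area);

    /* The plug-in always works on unscaled data ... */
    unscaled = plugin_render_region (plugin, &area);

    /* ... and scaling happens in exactly one place. */
    preview_draw_scaled (preview, unscaled, &area);
    g_free (unscaled);
  }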

Also, you are assuming here that convolutions are scale-independent.
That would only be true if we were dealing with continuous images;
convolutions are in general not scale-independent when the images
consist of discrete pixels. This may not matter much when you only
consider "blurring" convolutions, but there are also several
"sharpening" convolutions, and for those the difference is big.
Moreover, in many cases it is not at all obvious how you should scale
the convolution matrix (e.g. the sharpen plug-in), or it can be quite
expensive to compute the new convolution matrix (e.g. my own refocus
plug-in).
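
A small example may help here. Take an ordinary 3x3 sharpening
kernel (illustrative only; the real sharpen plug-in derives its
matrix from a user parameter):

  #include <glib.h>

  static const gdouble kernel[3][3] = {
    { -1.0, -1.0, -1.0 },
    { -1.0,  9.0, -1.0 },
    { -1.0, -1.0, -1.0 },
  };

  /* Convolve one interior pixel of a single-channel image.
   * The neighbours are fixed grid offsets, so the kernel is
   * tied to the pixel grid: at 50% zoom one kernel step spans
   * what used to be two image pixels, and the result is not a
   * scaled-down version of the result at 100%. */
  static gdouble
  convolve_pixel (const gdouble *img, gint stride, gint x, gint y)
  {
    gdouble sum = 0.0;
    gint    i, j;

    for (j = -1; j <= 1; j++)
      for (i = -1; i <= 1; i++)
        sum += kernel[j + 1][i + 1] * img[(y + j) * stride + (x + i)];

    return sum;
  }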

> 
> > Anyhow, a good plug-in should already have the functionality to
> > compute the outputs for a given area (gimp_drawable_mask_bounds), so
> > it should not be too difficult to modify this to use the area that was
> > determined by the preview.
> 
> It would be nice to move some of this common code into the preview 
> widget itself, so that the common case doesn't have to worry about it.

Honestly, I don't see any advantage with respect to code complexity.
What you are proposing will introduce unnecessary dependencies
between the preview and the plug-in. When the plug-in requirements
are simple, my solution is also simple: in most cases it will only
require a single function call to obtain the scaled area. When the
plug-in requirements are not simple, it is not at all certain that
the plug-in developer will be satisfied with any standard solution.
So letting the preview generate a scaled image will only increase the
complexity of the whole system, because it creates new dependencies
between the components.
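
For the simple case, the change to an existing plug-in is small. A
typical plug-in already looks something like this, and only the
source of the bounds changes (preview_get_area() is a hypothetical
call; gimp_drawable_mask_bounds() is the existing libgimp function):

  #include <libgimp/gimp.h>

  void preview_get_area (gint *x1, gint *y1, gint *x2, gint *y2);
  void process_region   (gint32 drawable_ID,
                         gint x1, gint y1, gint x2, gint y2);

  static void
  run_filter (gint32 drawable_ID, gboolean previewing)
  {
    gint x1, y1, x2, y2;

    if (previewing)
      preview_get_area (&x1, &y1, &x2, &y2);   /* hypothetical */
    else
      gimp_drawable_mask_bounds (drawable_ID, &x1, &y1, &x2, &y2);

    /* The plug-in's own rendering code, unchanged. */
    process_region (drawable_ID, x1, y1, x2, y2);
  }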

When I look at any design decision, there are two major aspects:
correctness and simplicity. In most cases these are clearly related;
it is in general pretty difficult to show that a decision is correct
when it is not simple. Feeding scaled data to a plug-in appears at
first sight to fail on both points: in general the outputs will be
less correct than simply scaling the unscaled outputs, and it is more
complex because in general you will have to adapt the plug-in
algorithm to a specific scale.

So when you look at the whole idea, the whole reason behind it is the
belief that it is faster. Now, in virtually all cases, performance is
far less important than correctness and simplicity. I have seen a lot
of systems over the years, and it is pretty rare to encounter a case
in practice where performance is really a serious issue; most
problems are related to correctness (bugs) and simplicity
(maintenance). Most optimizations only improve performance by some
constant factor, and in many cases you can achieve the same
performance increase by buying a new computer. There is no equally
simple solution to problems with correctness and simplicity.

Now when you look at the performance argument for feeding scaled
data, it only becomes relevant when the preview is zoomed out.
Actually, I have serious doubts whether that will happen often. For
most plug-ins users are mainly interested in small size reductions;
it is, for example, very difficult to judge the effects of a plug-in
when the preview shows a reduced version of the entire image. I would
expect that in most cases users are most interested in seeing the
result at 100% for the most relevant parts of their image.

The only category of plug-ins for which I can see that it would be
useful to view a scaled version of the entire image is the
warping/whirling plug-ins that perform non-local transformations on
the image, but unfortunately for these transformations using scaled
inputs can give highly misleading results. In most cases these
algorithms will magnify certain parts of the image. Now think of what
happens when the plug-in magnifies a certain area by a certain factor
and you view the output reduced by the same factor. When you use
unscaled inputs to the algorithm, you will see outputs that are
highly similar to the original input area, but when you supply the
plug-in with scaled data you will mainly see the artifacts of the
scaling algorithm.
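
In pseudo-C the two pipelines are (warp() and scale_down() are
stand-ins for real implementations, and f is the preview zoom
factor):

  #include <glib.h>

  typedef struct _Image Image;

  Image *warp       (const Image *img, gdouble f); /* magnifies by f */
  Image *scale_down (const Image *img, gdouble f); /* reduces by f   */

  /* Unscaled input: the magnification and the reduction roughly
   * cancel, so the user sees pixels close to the original. */
  static Image *
  preview_unscaled (const Image *original, gdouble f)
  {
    return scale_down (warp (original, f), f);
  }

  /* Pre-scaled input: the warp magnifies already-resampled
   * pixels, so the user mostly sees the resampling artifacts. */
  static Image *
  preview_prescaled (const Image *original, gdouble f)
  {
    return warp (scale_down (original, f), f);
  }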

The bottom line is that I really think that the default decision
should be to use unscaled data and that you should only use scaled
data after a careful analysis.

Ernst Lippe
