On Tue, 2006-10-17 at 09:27 +0200, Øyvind Kolås wrote:
> On 10/17/06, geert.jordaens@xxxxxxxxxx <geert.jordaens@xxxxxxxxxx> wrote:
> > >If you scale a image to 10% of the original size using cubic, you have
> > >a situation where the data for each destination pixel is taken from a
> > >region of 4x4pixel, whilst it should at least be taken from a region
> > >of 10x10pixels, 84% of the image data is thrown away.
> >
> > OK, the handling of scaling down is not yet in the proposition
> >
> > Could we not just add the scale factor to the API ?
>
> The scale factor is not enough, if we scale it to 10% horizontally and
> 70% vertically (or add some kind of rotation as well). A fixed scale
> factor would no longer be correct. Doing a reverse transform of the
> "corners" of the destination pixel would give us all the information
> we need, and work for perspective transforms as well, hence the method
> I suggested.

I think you are mixing up interpolation and resampling. They are similar (abstractly, anyway), but separate operations.

Interpolation is designed to add points between two or more nearby points using linear or cubic interpolating curves. Translating by fractional coordinates uses interpolation; rotation (without scaling) also uses interpolation.

Resampling is designed to remove high-frequency components while downsampling an image. Without resampling, downscaling produces "jaggies" (aliasing) throughout the image. Examples of resampling are taking the average of the input area (box filter), a Gaussian-weighted average, or applying a sinc filter. Resampling is a convolution.

These operations are best implemented separately: first interpolate to get a set of pixels that can be fed into a resampling function to complete your transformation. For example, suppose your x scale is 0.1 (matrix X), your y scale is 2 (matrix Y), and your rotation angle is 45 degrees (matrix R). They are combined like so: YXR.
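To make the interpolation/resampling distinction concrete, here is a minimal 1-D sketch (hypothetical code, not the GEGL API): point-sampling an image down by 10x reads only a few source values per output pixel and aliases, while a box-filter resampler averages the whole 10-sample footprint.

```python
# Hypothetical sketch: downscaling a 1-D signal by a factor of 10.
# Point sampling discards most of the source data; box-filter
# resampling averages the full footprint and so anti-aliases.

def downscale_point(src, factor):
    """Point sampling: reads one source value per output sample."""
    return [src[i * factor] for i in range(len(src) // factor)]

def downscale_box(src, factor):
    """Box-filter resampling: average every `factor` source samples."""
    return [sum(src[i * factor:(i + 1) * factor]) / factor
            for i in range(len(src) // factor)]

# A worst-case high-frequency signal: alternating 0/1.
signal = [i % 2 for i in range(100)]
print(downscale_point(signal, 10))  # aliased: every sample lands on a 0
print(downscale_box(signal, 10))    # anti-aliased: each output is 0.5
```

The same effect in 2-D is what produces the "jaggies" described above: a fixed 4x4 cubic kernel behaves like the point sampler once the scale factor exceeds its footprint.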
First, do the rotation by 45 degrees by interpolating from nearby points. Then do the X scale by resampling 10 or more of the interpolated pixels (say, with a sinc function, which is the best filter). Then complete the transformation by interpolating one extra Y point between every existing Y point, using an interpolation function, to finish the Y scale.

Geert's interpolation API is sufficient: interpolation rarely takes or needs more than 16 pixels. A rescaling API needs to be able to declare the size of the neighborhood around the pixel (an interpolation API could probably use this too). They probably both need another object for extending the region at the edges (copy, reflect, zeros, wrap, etc.).

Both of these operations are similar in that they need a neighborhood and produce a new value based on that neighborhood, but mathematically they are accomplishing two very different goals. Interpolation doesn't affect resolution and is used for shifting and scaling up. Resampling is for downsampling (a.k.a. decimating) a signal and is a type of anti-aliasing. They are used in very different cases, and should probably have different interfaces to reflect that.

--
Daniel

_______________________________________________
Gegl-developer mailing list
Gegl-developer@xxxxxxxxxxxxxxxxxxxxxx
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gegl-developer
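Øyvind's "reverse transform of the corners" idea, which a rescaling API's neighborhood declaration would serve, can be sketched as follows. This is illustrative code, not GEGL; the pure-scale inverse transform is an assumption chosen to match the 10%/70% example from the thread.

```python
# Hypothetical sketch: find the source-space footprint of one destination
# pixel by inverse-transforming its four corners. For a 10% x 70% scale
# the footprint is 10 x ~1.43 source pixels, so a fixed 4x4 cubic kernel
# would miss most of the horizontal data.

def inverse_scale(sx, sy):
    """Inverse of a pure scale transform (assumed here for simplicity;
    any invertible affine or perspective transform works the same way)."""
    return lambda x, y: (x / sx, y / sy)

def source_footprint(inv, dx, dy):
    """Bounding box, in source pixels, covered by destination pixel (dx, dy)."""
    corners = [inv(dx + cx, dy + cy) for cx in (0, 1) for cy in (0, 1)]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (max(xs) - min(xs), max(ys) - min(ys))

inv = inverse_scale(0.1, 0.7)       # scale to 10% horizontally, 70% vertically
print(source_footprint(inv, 0, 0))  # width 10.0, height ~1.43
```

The footprint size is exactly the neighborhood a resampler would need to declare, and because it is computed per destination pixel it remains correct under anisotropic scaling, rotation, and perspective.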