* Daniel Rogers <daniel@xxxxxxxxxxxxxxxxx> [040115 20:58]:
> There is no reason that an op has to be associated with a particular
> ROI. In fact, multiple users may want different ROIs computed from the
> op simultaneously! Clearly, the ROI should be associated with the
> resultant image.
>
>   GeglImage* gegl_graph_get_image(roi_rect, other_rendering_parameters
>                                   (antialiasing hints, etc)).
>
> And a GeglImage object is immediately returned. This GeglImage object
> doesn't actually contain tiles; it just behaves like a BufferImage, and
> retrieves tiles from the underlying op as soon as you request a tile
> using:
>
>   gegl_image_get_tile(image, tileX, tileY, hints)
>
> -Syntactic equivalence (Differences between ops and images)
> First off, this method means there is no syntactic difference between
> files, network sources, and in-memory tiles. It makes the node
> construction API simpler.
>
> Further, an op can (though it doesn't have to) do away with things like
> bounds. An image, when connected to an op, has bounds and a specific
> context in which it was rendered (say, with antialiasing, a particular
> resampling algorithm, or at a particular resolution), about which the
> op need know nothing.
>
> -Ease of passing hints
> There is a well-defined way of passing hints to the rendering system.
> Hints can be passed exactly when they are needed (and thus, when they
> are known).
>
> -Multiple simultaneous renderings
> Using this method, a user could request several images from the graph
> (essentially queueing up multiple renderings), and the rendering system
> can take care of minimizing the amount of recalculation needed. This is
> especially useful because it is trivial to extend this API to support
> resolution independence (simply allow one to specify the rendering
> context in the get_image call).
>
> because a layer is not always a collection of pixels. Sometimes it is a
> text layer, or a vector layer, or an adjustment layer, or an effect
> layer. Ignore that thing about gegl-buffered-image. It was confusing
> and not what I meant.

Effect and adjustment layers don't actually belong in this category of
"faucet" or provider nodes: nodes that somehow conjure up the information
about what pixels are contained without having image data as inputs
themselves; all the pixel values are decided by the parameters (filename,
coordinates, color, etc.):

 - v4l
 - movie frame provider
 - vector layer (svg?)
 - text layer
 - image provider
 - plasma
 - noise
 - in-memory buffer of pixel data (which IMO should be read/written in a
   similar manner to GeglImage, to keep the API as simple and consistent
   as possible)

> > There certainly is a need for there to be a certain kind of
> > image that is capable of participating in operator caches, yes,
> > but won't you want to have many different ops writing to a
> > drawable?
>
> Right, and the best way to handle that is with an op that can produce
> multiple, writable images, or that has procedural drawing calls.
> Requesting an image from an op is a way of specifying that you are
> interested in manipulating that region. The op can handle all the magic
> of locking minimal regions.

The example I think we should think about when discussing this is
something as essential to gimp as the paintbrush. Ideally, in a totally
GEGL'ified world, how does the paintbrush work? Here is what I think.
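[Editorial note: before the graph sketches below, here is a minimal,
compilable illustration of the get_image/get_tile flow proposed in the
quoted message. Only the two function names come from the proposal; the
signatures, the GeglRect/GeglRenderHints/GeglTile types, and the stub
bodies are assumptions for illustration and are not the real GEGL API.]

/* Sketch of the proposed pull-based API: the ROI and rendering context
 * belong to the requested image, not to the op, and tiles are only
 * rendered when they are asked for.                                    */
#include <glib.h>

typedef struct { gint x, y, width, height; }            GeglRect;
typedef struct { gboolean antialias; gdouble scale; }   GeglRenderHints;

/* The proposed GeglImage: remembers the ROI and hints it was requested
 * with, but holds no tiles until gegl_image_get_tile() is called.      */
typedef struct { GeglRect roi; GeglRenderHints hints; } GeglImage;
typedef struct { gint x, y; gfloat *pixels; }           GeglTile;

/* gegl_graph_get_image(): returns immediately; nothing is rendered yet. */
static GeglImage *
gegl_graph_get_image (gpointer graph, GeglRect roi, GeglRenderHints hints)
{
  GeglImage *image = g_new0 (GeglImage, 1);
  image->roi   = roi;
  image->hints = hints;
  return image;
}

/* gegl_image_get_tile(): only here would the underlying op compute pixels. */
static GeglTile *
gegl_image_get_tile (GeglImage *image, gint tile_x, gint tile_y,
                     GeglRenderHints hints)
{
  GeglTile *tile = g_new0 (GeglTile, 1);
  tile->x = tile_x;
  tile->y = tile_y;
  tile->pixels = g_new0 (gfloat, 64 * 64 * 4);  /* pretend 64x64 RGBA tile */
  return tile;
}

int
main (void)
{
  /* Two users can hold different ROIs from the same graph at once; each
   * GeglImage carries its own rendering context, and the op knows
   * nothing about either of them.                                       */
  GeglRect        screen = { 0, 0, 1024, 768 };
  GeglRect        thumb  = { 0, 0, 64, 64 };
  GeglRenderHints full   = { TRUE,  1.0  };
  GeglRenderHints tiny   = { FALSE, 0.05 };

  GeglImage *view    = gegl_graph_get_image (NULL, screen, full);
  GeglImage *preview = gegl_graph_get_image (NULL, thumb,  tiny);

  /* Tiles are pulled (and rendered) on demand. */
  GeglTile *tile = gegl_image_get_tile (view, 0, 0, full);
  g_print ("tile (%d,%d) of the %dx%d view rendered on demand\n",
           tile->x, tile->y, view->roi.width, view->roi.height);

  (void) preview;
  return 0;
}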
Here is the global compositing graph of an image in gimp containing just
one layer; I've thrown in a thumbnail for the layer to be rendered as
well, for good measure. (And yes, I think a processing graph should be
allowed to have more than one output. Since images can be reused, this is
a very natural extension of the API: when used for video, it allows
simultaneously displaying an image in a window on screen and encoding an
mpeg4 stream to broadcast on the net, or save to file.)

   thumbnail
       \
      scale( 0.05)       display
           \               /
            \_____________/
                   |
           mem_buf( GeglImage *buf)

When a stroke is started, I envision the graph temporarily being changed
into:

   thumbnail
       \
      scale( 0.05)       display
           \               /
            \_____________/
                   |
           stroke( M 20 20, L 20 40)
                   |
           mem_buf( GeglImage *buf)

essentially a rendering op, where the Lineto coordinates grow with
interaction, and that part of the op is redrawn continuously as the user
moves the mouse. The resulting image output from stroke, after the user
releases the mouse, should replace the buf contained inside the mem_buf
node. An alternative is to accumulate all the calls and trust the caching
to work out in such a way that we won't run out of memory, but at some
point merging the strokes into pixel values is needed.

/Øyvind K.

-- 
  .^.      Øyvind Kolås, Gjøvik University College, Norway
  /V\      <oeyvindk@xxxxxx>, <pippin@xxxxxxxxxxxxxxxxxxxxx>
 /(_)\
  ^ ^
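[Editorial note: the following toy program is not from the thread. It is
a minimal sketch of the graph rewiring Øyvind describes above, using a
made-up Node structure and made-up node names; it only illustrates how a
stroke op could be spliced in above mem_buf while painting and removed
again once the stroke has been merged into the buffer.]

#include <stdio.h>
#include <stdlib.h>

/* Single-input nodes are enough to model this particular graph. */
typedef struct Node {
  char         name[48];
  struct Node *input;
} Node;

static Node *
node_new (const char *name, Node *input)
{
  Node *n = calloc (1, sizeof (Node));
  snprintf (n->name, sizeof (n->name), "%s", name);
  n->input = input;
  return n;
}

static void
print_chain (const char *label, Node *sink)
{
  printf ("%s: ", label);
  for (Node *n = sink; n; n = n->input)
    printf ("%s%s", n->name, n->input ? " <- " : "\n");
}

int
main (void)
{
  /* Steady state: both outputs ultimately read from the in-memory buffer. */
  Node *mem_buf   = node_new ("mem_buf", NULL);
  Node *display   = node_new ("display", mem_buf);
  Node *scale     = node_new ("scale(0.05)", mem_buf);
  Node *thumbnail = node_new ("thumbnail", scale);

  /* Stroke begins: splice a stroke op between mem_buf and its consumers.
   * Its path data ("M 20 20, L 20 40, ...") would grow as the mouse moves,
   * and only the affected region would be re-rendered.                    */
  Node *stroke = node_new ("stroke(M 20 20, L 20 40)", mem_buf);
  display->input = stroke;
  scale->input   = stroke;

  print_chain ("while painting  (display)  ", display);
  print_chain ("while painting  (thumbnail)", thumbnail);

  /* Stroke ends: the rendered result replaces the buf inside mem_buf, and
   * the temporary stroke op is removed, bounding memory use.              */
  display->input = mem_buf;
  scale->input   = mem_buf;
  free (stroke);

  print_chain ("after flattening (display)  ", display);
  return 0;
}

The design choice mirrored here is the one the mail argues for: keeping
the stroke as a temporary op gives cheap partial redraws during
interaction, but merging it into pixel data afterwards is what keeps an
arbitrarily long painting session from exhausting memory.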