On 14/01/04, Daniel Rogers wrote:

> Yes, but there is no reason not to have an image type that one doesn't
> need a node to use. In which case, node-result images and in-memory
> images have very different requirements.

Certainly. You don't need a node to use an image. That is not what I am
saying. I think I just don't understand your notion of "image". Let me
show you what I think they are with some specific examples. Then maybe
you can say if the way you are thinking is different.

I'll just show how you might set up a projection stack for gimp using
images and ops and gobject properties. My viewpoint goes like this:

          combine3
          /      \
        L3     combine2
               /      \
             L2     combine1
                    /      \
                  L1        P

Each combine is a GeglOp; everything else is "data" (not a GeglOp).
Other "data" besides the above image data (eg mask image data) is not
shown, but each combine op will have other kinds of data that feeds
into it as well (opacities, modes, or whatever).

So combine ops will have two "image" input properties: these are two
regular gobject-style properties called "source1" and "source2".
(Think source2 over source1 here, so source2 is foreground and source1
is background.) Also each combine op has one "image" output property.
Again this is a regular gobject-style property and is called "dest".

Now output properties are read-only gobject properties, and input
properties are read-write gobject properties. The reason for this is
that output properties are only to be set by the corresponding op as a
result of the "evaluate" method. Inputs, however, are obviously
settable properties of the op objects (and settable by g_object_set).

Now you set up your ops like this:

  GeglNode *combine1 = g_object_new (GEGL_TYPE_COMBINE_OP,
                                     "source1", P,
                                     "source2", L1,
                                     NULL);

  GeglNode *combine2 = g_object_new (GEGL_TYPE_COMBINE_OP,
                                     "source2", L2,
                                     NULL);

  GeglNode *combine3 = g_object_new (GEGL_TYPE_COMBINE_OP,
                                     "source2", L3,
                                     NULL);

So far all the "source2" properties are hooked up.
Now hook up the output properties for each node to the appropriate
input properties of the combines like this:

  gegl_node_connect (combine2, "source1", combine1, "dest");
  gegl_node_connect (combine3, "source1", combine2, "dest");

This tells the combine2 and combine3 ops that when they do their
"evaluate" method, they should get the input property called "source1"
from the output property of the corresponding node they are connected
to along the source1 line (in this case they are connected to the
"dest" output property of another combine op).

Now you call

  gegl_node_apply (combine3, "dest");

and pass it the name of the output property you want it to compute.
(You should also set a ROI on this output property before doing this;
that is left out here too.) Now your result is sitting in the op's
"dest" output property when you are done. In this case the output
property "dest" just happens to be a GeglImage object like we have
said before, and we have specified the particular area of the image we
want by setting that need_rect on that property before we make the
call to "apply".

So in this view images are just another kind of data moving through
the graph's properties.

Now we can certainly arrange for the set of combine ops either to
operate in place (so each dest is actually just the projection
GeglImage P passed at the beginning), or to have each op create dest
images as it goes along and have those results cached, so that if a
layer changes, the input property that corresponds to that data is
marked dirty, this propagates up the graph to the root, and all the
properties above that one are set dirty.

Anyway, underneath, the ops are asking images for their tiles, reading
and writing to them, placing pieces of the images in caches, and
listening for changes to that same data so that they can mark
properties dirty if it changes in some way.

Is this very different from the way you are thinking?

> No, I think I meant what I said.
> GimpImage is a stack of GimpLayers, conceptually, and a GimpLayer is
> a combination of a GimpDrawable and some associated blending info.
> For all the reasons above, nothing should keep around a GeglImage
> unless it is actually processing that region (as this locks up some
> resources). Thus, GimpImage maps to a tree of ops. GimpLayer maps to
> a blend op and a leaf op. A GimpDrawable maps to a leaf node, which
> probably just contains a bunch of tiles (a la a gegl-buffered-image,
> but with all the op stuff connected to it).

I don't see why there is a need for leaf nodes. And how is the
drawable the output of a paint hit if it is a gegl-buffered-image,
unless you additionally copy to and from it to get your paint hit onto
the drawable? There certainly is a need for a certain kind of image
that is capable of participating in operator caches, yes, but won't
you want to have many different ops writing to a drawable?

> | I think most of the work in bringing in a new image type is
> | related to replacing TileManagers and PixelRegions code all over
> | gimp to use GeglImages and GeglOps instead.
>
> Why do you think that?

I just mean that the typical place where an op in gimp is set up
usually looks like this:

  PixelRegion src1PR;
  PixelRegion src2PR;
  PixelRegion destPR;

  pixel_region_init (&src1PR, tile_manager1, x, y, w, h);
  pixel_region_init (&src2PR, tile_manager2, x, y, w, h);
  pixel_region_init (&destPR, dest_tile_manager, x, y, w, h);

  blah_region (&src1PR, &src2PR, &destPR, ...);

Then you have the result in the dest tile manager when you are done.
All these places in the code have to be changed to set up an op and
then apply the op instead of what they do now.

Calvin