Introduction to GEGL's Architecture & GPU-support Architecture #2

Hi all,


It's me again.  Over the past weeks, I have been quietly digging into
GEGL's source code, trying to understand how everything works, while
also reading up on OpenGL and GObject.  Though I am quite familiar
with C, I didn't know how GObject layers on top of C to provide OOP
features.  To make things worse, my previous experience with OpenGL
was minimal.  When I say 'experience', I mean that I only knew OpenGL
through playing games.  And playing games doesn't really help with
programming, right?  :)  Let's just hope that I resist the urge to
play for the whole of GSoC's duration.  :p

Of course, I don't claim to be a GEGL expert just because I am now
talking about GEGL's architecture.  Everything written in these
articles reflects what I have read so far.  If you have anything to
correct, please do.  Even with all the time in the world to read, I
am still human.  So pardon the mistakes and keep the criticisms
constructive.  :)

From the GEGL website[1]: GEGL is a graph-based image processing
framework.  Okay.  So what does graph-based mean, and how does it
relate to graphics manipulation?  I explained in the previous article
that GEGL uses operations to transform pixel values.  In GEGL, an
operation is attached to a GEGL node.

Nodes are connected to other nodes through edges (represented by GEGL
pads[2]) that either enter or leave the node.  Pixel data comes in
through a node's input pads and goes out through the same node's
output pads.  Pixel data must first be processed by a node's
operation before it becomes available on the output pad, and pixel
data must be available from all of a node's input pads before that
node can manipulate the pixels.  A node's input pads are connected to
other nodes' output pads and vice versa.  Picture this for a moment
and you'll see that it forms a graph of nodes that depend on one
another.  It may also help your imagination to know that there are
source nodes and sink nodes that do nothing but provide and accept
pixel data, respectively.  :p  Also, note that nodes may not be
connected in a way that introduces a cycle.  This is why GEGL is also
DAG (Directed Acyclic Graph) based.
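For concreteness, here is a small sketch of how such a graph is
typically assembled with GEGL's public API (the file names are
hypothetical, and this assumes a GEGL installation to compile
against): a source node, a filter node, and a sink node linked in a
line.

```c
#include <gegl.h>

int main (int argc, char **argv)
{
  gegl_init (&argc, &argv);

  /* A parent node that holds the graph. */
  GeglNode *graph = gegl_node_new ();

  /* Source node: does nothing but provide pixel data. */
  GeglNode *load = gegl_node_new_child (graph,
                                        "operation", "gegl:load",
                                        "path", "input.png",  /* hypothetical file */
                                        NULL);

  /* Filter node: its operation transforms the pixels. */
  GeglNode *invert = gegl_node_new_child (graph,
                                          "operation", "gegl:invert",
                                          NULL);

  /* Sink node: does nothing but accept pixel data. */
  GeglNode *save = gegl_node_new_child (graph,
                                        "operation", "gegl:save",
                                        "path", "output.png",  /* hypothetical file */
                                        NULL);

  /* Connect output pads to input pads: load -> invert -> save. */
  gegl_node_link_many (load, invert, save, NULL);

  /* Pulling from the sink drives the whole (acyclic) graph. */
  gegl_node_process (save);

  g_object_unref (graph);
  gegl_exit ();
  return 0;
}
```

Processing is demand-driven: asking the sink for its result pulls
pixel data through every node it depends on.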

Each GEGL node makes use of GEGL buffers[3] to cache image data from
the previous nodes in the composition.  It is an instance of this
buffer that the node modifies with its operation[4].  The node
provides the relevant pixels to its operation by using a buffer
iterator that iterates over chunks of data within the input and output
buffers.

The second architecture is a scheme that lets our GEGL nodes take
care of transferring the image from main memory to GPU memory (i.e.
from image to OpenGL texture).  Operations will implement a new
processor function[4] that expects texture parameters rather than
pixel data arrays[5].  GEGL will decide whether to transfer the image
to GPU memory and use the GPU processor function depending on whether
OpenGL is supported on the system GEGL is running on.

Image transfer from main memory to GPU memory will be accomplished by
implementing a buffer iterator that returns textures instead of pixel
arrays.  These GPU buffer iterators will keep a pool of textures that
is reused as textures are requested and committed to the buffers.  As
with the GPU processor function, a GPU buffer iterator will only be
used when OpenGL is actually supported on the current system and the
current operation has GPU support.

The advantage over the previous architecture is that GEGL now takes
full control over the OpenGL state machine[6].  We can initialize and
destroy OpenGL contexts in gegl_init and gegl_exit, respectively.
This removes unnecessary code repetition, and since OpenGL context
initialization is CPU-expensive, creating the context once up front
also avoids paying that cost repeatedly.

Though we now have a pretty complex architecture, we still aren't
making full use of the GPU.  Why?  You'll see when I explain GEGL
buffers.  That's next.  Also in the next installment, we will look at
ways of integrating GPU support into GEGL buffers to really extract
the GPU's (evil) power.

Until next time!  :)


Kind regards,
Daerd

P.S.  Sorry for the really lengthy explanation.  I know that most of
you already know all this.  These are just notes to explain things
better.  Or just consider this to be representative of what I
currently know.  (Really, I'm just making excuses.  Hehe..  :)  Please
correct me where I am wrong.


[1] http://www.gegl.org/

[2] From the GEGL website: a pad is the part of a node that exchanges
image content; it is the place where image "pipes" are used to
connect the various operations in the composition.

[3] Look forward to the next article, as I explain GEGL buffer's architecture.

[4] Not to be confused with GEGL processors.  GEGL uses processor
functions to implement alternative operation executions.  For example,
when a machine supports SIMD CPU extensions, GEGL will use the
processor function for SIMD when it is available from the current
operation.

[5] Though I really don't know if it's possible to create a processor
function whose signature is slightly different from the default
processor function.  I need your wisdom here, GEGL gods.  ;)

[6] I didn't tell you that OpenGL is a state machine.  But, really, it
is.  See OpenGL's Wikipedia entry, if you're interested.
_______________________________________________
Gegl-developer mailing list
Gegl-developer@xxxxxxxxxxxxxxxxxxxxxx
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gegl-developer
