Introduction to GPU concepts & GEGL GPU-support Architecture #1

Hi all,


Since I still don't have an internet connection at home, I decided to
stop waiting and just post a series of articles here.  These articles
will serve three purposes (among others, of course): (1) to let you in
on the details of what I am proposing, (2) to solicit suggestions from
the GIMP community on what directions I should take, and (3) to
introduce you to the GPU, OpenGL and (for starters) GEGL (or, if
you're just a casual mailing list reader, like I used to be, scrap all
of the above and read them as entertainment :).  So yeah, here
goes...

GPUs have SIMD (Single Instruction, Multiple Data) processors.  For
our purposes, this means that a GPU can apply the same operation to
many pixels at the same time.

To run a typical image operation on the GPU, we implement a small
piece of code called a pixel shader[1] that computes a single output
pixel.  We then load the image into GPU memory (i.e. into OpenGL
textures) and tell OpenGL to execute the shader over those textures.
It is important to note that image data must reside in GPU memory
before a shader can access it.
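
To make this concrete, here is what such a shader might look like.
This is a minimal sketch and not code from GEGL: a GLSL fragment
shader that brightens every pixel, written out as a C string constant
(GEGL being a C library).  The uniform names 'image' and 'factor' are
made up for illustration.

    /* A minimal pixel (fragment) shader, embedded as a C string.
     * It reads one texel from the input texture and writes one
     * brightened pixel.  OpenGL runs this once per output pixel,
     * for many pixels in parallel. */
    static const char *brighten_shader_source =
      "uniform sampler2D image;   /* the input texture     */\n"
      "uniform float     factor;  /* brightness multiplier */\n"
      "void main (void)\n"
      "{\n"
      "  vec4 pixel   = texture2D (image, gl_TexCoord[0].st);\n"
      "  gl_FragColor = vec4 (pixel.rgb * factor, pixel.a);\n"
      "}\n";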

Because each shader invocation has only limited access to the image
(in OpenGL, a fragment shader can write to only its own output
pixel), the GPU is free to distribute the image pixels across
different processing units that operate on them in parallel.  In
general computing, this is called data parallelism[2].  Data
parallelism promises massive performance boosts for the right kind of
program[3].

Image transformations in GEGL are implemented through GEGL
operations.  A typical GEGL operation receives a rectangular region
of memory containing the pixels to be processed, transforms those
pixels appropriately and hands them back to the caller (usually a
GEGL node[4]).
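
For a feel of the CPU side, here is a simplified sketch of a point
operation's inner loop.  The function below is modeled loosely on
GEGL's point operations but is not the exact GEGL API; the name and
signature are made up for illustration.

    #include <glib.h>

    /* A simplified point operation: brighten each RGBA float pixel.
     * This mirrors what the shader above does, except that here one
     * CPU walks the pixels one at a time. */
    static gboolean
    brighten_process (const gfloat *in_buf,
                      gfloat       *out_buf,
                      glong         n_pixels,
                      gfloat        factor)
    {
      glong i;

      for (i = 0; i < n_pixels; i++)
        {
          out_buf[4 * i + 0] = in_buf[4 * i + 0] * factor; /* R */
          out_buf[4 * i + 1] = in_buf[4 * i + 1] * factor; /* G */
          out_buf[4 * i + 2] = in_buf[4 * i + 2] * factor; /* B */
          out_buf[4 * i + 3] = in_buf[4 * i + 3];          /* A */
        }

      return TRUE;
    }

The body of that loop is exactly the computation the shader above
expresses per pixel; the GPU simply runs it for many pixels at once.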

It is easy to see that GEGL operations map naturally to pixel
shaders.  In fact, more likely than not, pixel shaders will reside
inside GEGL operations.  I say "more likely than not" because the
GEGL core could also use shaders to accelerate tasks other than
operations.  More about this later.

All this brings us to our first and most basic architecture.  To
introduce basic GPU support into GEGL, we only have to let each
operation handle all the necessary OpenGL pleasantries itself.  That
is, an operation will implement a shader, initialize an OpenGL
context, create textures for the pixels, tell OpenGL to execute the
shader over the textures, transfer the result back to main memory,
destroy the OpenGL context and return the output pixels.
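
In rough code, every operation would then carry something like the
following.  This is a compressed sketch under some assumptions: a GL
context already exists (e.g. created via GLUT) with a viewport
matching the image, GLEW provides the extension entry points, all
error checking is omitted, and 'brighten_shader_source' is the shader
string from earlier.

    #include <GL/glew.h>

    /* One operation's OpenGL 'pleasantries': upload the pixels,
     * compile and run the shader, then read the result back.
     * Assumes a current GL context; error checking omitted. */
    static void
    run_shader_on_pixels (const GLfloat *in,  /* RGBA float pixels */
                          GLfloat       *out,
                          int            width,
                          int            height)
    {
      GLuint texture, shader, program;

      /* 1. Upload the pixels into a texture. */
      glGenTextures (1, &texture);
      glBindTexture (GL_TEXTURE_2D, texture);
      glTexImage2D  (GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
                     0, GL_RGBA, GL_FLOAT, in);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

      /* 2. Compile the shader and set its parameters. */
      shader = glCreateShader (GL_FRAGMENT_SHADER);
      glShaderSource  (shader, 1, &brighten_shader_source, NULL);
      glCompileShader (shader);
      program = glCreateProgram ();
      glAttachShader (program, shader);
      glLinkProgram  (program);
      glUseProgram   (program);
      glUniform1i (glGetUniformLocation (program, "image"),  0);
      glUniform1f (glGetUniformLocation (program, "factor"), 1.2f);

      /* 3. Execute the shader by drawing a quad that covers the
       * image: one shader invocation per output pixel. */
      glEnable (GL_TEXTURE_2D);
      glBegin (GL_QUADS);
      glTexCoord2f (0, 0);  glVertex2f (-1, -1);
      glTexCoord2f (1, 0);  glVertex2f ( 1, -1);
      glTexCoord2f (1, 1);  glVertex2f ( 1,  1);
      glTexCoord2f (0, 1);  glVertex2f (-1,  1);
      glEnd ();

      /* 4. Transfer the result back to main memory. */
      glReadPixels (0, 0, width, height, GL_RGBA, GL_FLOAT, out);

      /* 5. Clean up. */
      glDeleteProgram  (program);
      glDeleteShader   (shader);
      glDeleteTextures (1, &texture);
    }

(For simplicity this renders into the window's framebuffer; a real
implementation would render into a framebuffer object instead.)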

Needless to say (but I'm saying it anyway :), this not only
introduces bloat into every single operation, it also fails to make
full use of the GPU.  Before I explain why, an introduction to GEGL's
architecture is in order...

This is getting lengthy (and personally, I don't like lengthy mailing
list posts :).  So, hold your breath and please stay tuned for the
next installment[5].


Kind regards,
Daerd


[1] A fragment shader, in OpenGL jargon.  Note that we stick to the
term 'shader' even though 'kernel' is the more accurate GPGPU term
for a computation over parallel data; GEGL is, after all, still a
library that operates on pixels.

[2] https://en.wikipedia.org/wiki/Data_parallelism

[3] Of course, not all programs are data-parallelizable (if there is
such a word).  Luckily for us, GEGL's pixel operations seem to fit
data parallelism perfectly.

[4] More details on this in the next article.

[5] Expect two more articles.  Thank you.
_______________________________________________
Gegl-developer mailing list
Gegl-developer@xxxxxxxxxxxxxxxxxxxxxx
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gegl-developer
