Re: Introduction to GEGL Buffers

In GEGL, image data that needs processing is subdivided into
rectangles (I'm not sure how these rectangles map onto GeglTiles,
though).  Currently, to process a portion of the image data, a
GeglNode iterates over the rectangles and sequentially instructs the
relevant operation code to execute over each one.
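
In pseudo-C, the current scheme looks roughly like this (Rect,
ProcessFunc and process_region are hypothetical names of mine, not
GEGL API):

  typedef struct { int x, y, width, height; } Rect;

  typedef void (*ProcessFunc) (const Rect *rect, void *user_data);

  /* Walk a region tile by tile and hand each rectangle to the
   * operation, one after the other. */
  static void
  process_region (const Rect  *region,
                  int          tile_size,
                  ProcessFunc  process,
                  void        *user_data)
  {
    for (int y = region->y; y < region->y + region->height; y += tile_size)
      for (int x = region->x; x < region->x + region->width; x += tile_size)
        {
          Rect rect = { x, y, tile_size, tile_size };

          /* clamp the last column/row to the region bounds */
          if (x + tile_size > region->x + region->width)
            rect.width = region->x + region->width - x;
          if (y + tile_size > region->y + region->height)
            rect.height = region->y + region->height - y;

          process (&rect, user_data);
        }
  }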

There are two forms of parallelization we can tackle here.  One is
to let an operation process all pixels of a rectangle in parallel,
as in my project's case.  The other is to get rid of the sequential
processing of rectangles: different rectangles are assigned to
different CPU threads that all call the same operation.  Moreover, a
thread can be instructed to process its rectangle on the GPU or on
the CPU, enabling us to use different computation devices in
parallel.
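
Here is a rough sketch of that rectangle-level scheme using POSIX
threads; again, every name in it is hypothetical, not existing GEGL
code:

  #include <pthread.h>

  #define N_THREADS 4

  typedef struct { int x, y, width, height; } Rect;
  typedef void (*ProcessFunc) (const Rect *rect);

  typedef struct
  {
    const Rect  *rects;    /* rectangles assigned to this thread */
    int          n_rects;
    ProcessFunc  process;  /* same operation code in every thread */
  } WorkerArgs;

  static void *
  worker (void *data)
  {
    WorkerArgs *args = data;

    for (int i = 0; i < args->n_rects; i++)
      args->process (&args->rects[i]);

    return NULL;
  }

  static void
  process_rects_parallel (const Rect  *rects,
                          int          n_rects,
                          ProcessFunc  process)
  {
    pthread_t  threads[N_THREADS];
    WorkerArgs args[N_THREADS];

    /* split the rectangle list into contiguous chunks, one chunk
     * per thread */
    for (int t = 0; t < N_THREADS; t++)
      {
        int start = t * n_rects / N_THREADS;
        int end   = (t + 1) * n_rects / N_THREADS;

        args[t] = (WorkerArgs) { rects + start, end - start, process };
        pthread_create (&threads[t], NULL, worker, &args[t]);
      }

    for (int t = 0; t < N_THREADS; t++)
      pthread_join (threads[t], NULL);
  }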

I could be wrong, but my intuition tells me that the latter form of
parallelization will be difficult to tackle within my GSoC timeframe,
especially because locking issues (as Øyvind mentioned before) are
time-consuming to debug and fix.

The reason why a certain level of communication is required to
implement both projects is that we need to agree on where tiles and
rectangular pixel data are stored/cached (GPU memory vs. main memory)
and when.

As utkarsh expressed previously, we should be more concerned about how
and when pixel data are transferred from main memory to the GPU.
Transfer operations from one computation device to another are
expensive.
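
To make that cost concrete, here is what a single host-to-GPU upload
looks like in plain OpenGL.  I'm assuming an OpenGL-based GPU path
here; the actual API and pixel format we end up with may differ:

  #include <GL/gl.h>

  /* Upload one tile's worth of float pixels from main memory into a
   * new GPU texture.  The glTexImage2D call allocates GPU storage
   * and copies the whole pixel block across the bus, which is the
   * expensive part. */
  static GLuint
  upload_tile (const float *pixels,
               int          width,
               int          height)
  {
    GLuint texture;

    glGenTextures (1, &texture);
    glBindTexture (GL_TEXTURE_2D, texture);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, width, height,
                  0, GL_RGBA, GL_FLOAT, pixels);

    return texture;
  }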

My current plans (as outlined in a different post) assume that a
single computation device will be used for each GeglNode and its
GeglOperation.  A node whose associated operation publishes GPU
support will create GPU-based buffers only, and CPU/RAM-based buffers
otherwise.  It doesn't matter what computation device is used by the
neighboring nodes, as each node is concerned only with its own
computations, and transfers can still be taken care of by the
relevant accessor methods.
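
As a sketch, the buffer-creation rule amounts to the following
(hypothetical names, not GEGL API):

  #include <stdbool.h>
  #include <stdlib.h>

  typedef enum { BUFFER_CPU, BUFFER_GPU } BufferBackend;

  typedef struct
  {
    bool supports_gpu;  /* true if the op publishes GPU support */
  } Operation;

  typedef struct
  {
    BufferBackend backend;
    void         *storage;  /* malloc'd block or opaque GPU handle */
  } Buffer;

  /* A node picks the backend for its buffers from its own operation
   * only; neighboring nodes are free to pick differently, and the
   * accessor methods take care of any transfers between them. */
  static Buffer *
  buffer_new_for (const Operation *op)
  {
    Buffer *buffer = malloc (sizeof (Buffer));

    buffer->backend = op->supports_gpu ? BUFFER_GPU : BUFFER_CPU;
    buffer->storage = NULL;  /* allocated lazily by the backend */

    return buffer;
  }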

In my current plans, cached GeglTiles are always stored in GPU memory
for GPU-based buffers, hence the need for a GeglTileFactory to
decouple tile creation from tile access.  I am sure this mechanism
will disappear once we move to rectangle-level parallelization, but
even then, how and when do we store tiles in GPU memory?
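
Roughly, what I have in mind for the factory is an interface like
this (hypothetical names again, nothing here is existing GEGL API):

  typedef struct _Tile        Tile;
  typedef struct _TileFactory TileFactory;

  struct _TileFactory
  {
    /* create a tile whose pixel storage lives wherever the factory
     * decides: GPU memory for GPU-based buffers, main memory
     * otherwise */
    Tile * (*create_tile)  (TileFactory *factory, int x, int y);
    void   (*destroy_tile) (TileFactory *factory, Tile *tile);
  };

  /* Callers only ever go through the factory, so tile access stays
   * the same no matter where the tile's storage ended up. */
  static Tile *
  get_tile (TileFactory *factory, int x, int y)
  {
    return factory->create_tile (factory, x, y);
  }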

Øyvind, I'd appreciate more of your thoughts on the unified memory
architecture you were talking about.  For example, do your thoughts
cover how and where pixel data are stored internally?  Or is it just
the access you are talking about?


Kind regards,
Daerd

