Re: GPU support

> Thanks for your answer.

No problem.  :)

> Yes, I hope that all needed operations can be done via OpenGL too, because
> I have experienced that CUDA sometimes causes problems even on supported
> platforms if you don't use exactly the same compiler options as the
> samples, etc.  On the other hand, OpenGL is an additional, essentially
> unnecessary layer: its computations use the same hardware you can access
> directly via CUDA or equivalent SDKs (if I have interpreted the
> information I read about CUDA correctly).  That layer is optimized for
> rendering rather than for computing, so - from a purely simplistic view -
> CUDA would be easier and more performant than OpenGL.
>
> I had never heard of OpenCL, but it seems to be very interesting (and not
> proprietary).  I hope that it will be available for the most important
> platforms soon.

Really, my only concern with CUDA (and Stream SDK) is that they are
too vendor-specific (I'd love to be proven wrong, though).  So,
implementing GPU support through them is a no-go, afaic.  I'm not
willing to implement two GPU-enabled GEGL branches, one for NVIDIA and
one for ATI video cards.  That would be too much hassle and would not
fit sensibly into GSoC's three-month timeline.  I am willing to keep
developing for GIMP and GEGL after GSoC, but I would prefer to keep my
targets realistic.  I do agree that coding against CUDA and similar
APIs is easier than coding against OpenGL; I'm just not convinced that
the latter is an 'unnecessary' investment in our particular case.
The only API that I would really approve of is OpenCL, but as I have
previously said, OpenCL implementations are still in their infancy.

Nevertheless, thank you for your concern.  We are of the same spirit.
I too wish we had a mature, non-proprietary, vendor-neutral,
platform-agnostic solution here and now, but I'm forced to use OpenGL
because it is what is currently available.  Rest assured, I will
commit myself to porting my OpenGL implementation to OpenCL as soon as
implementations of the latter become available.  I also hope that I'll
have help by then.  :)
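
To give a rough idea of what "computing through OpenGL" means in
practice: a point operation is written as a fragment shader, and the
GPU runs that shader once per output pixel, in parallel.  The sketch
below is purely illustrative; the names are made up and this is not
GEGL code:

    /* Illustrative only: a brightness point operation as a GLSL
     * fragment shader, embedded as a C string the way OpenGL
     * programs usually carry their shaders.  All names here are
     * made up.  main() runs once per output pixel, on the GPU. */
    static const char *brightness_shader =
      "uniform sampler2D input_tex;  /* source image as a texture */\n"
      "uniform float     amount;     /* value added per channel   */\n"
      "void main ()\n"
      "{\n"
      "  vec4 p = texture2D (input_tex, gl_TexCoord[0].st);\n"
      "  gl_FragColor = vec4 (p.rgb + vec3 (amount), p.a);\n"
      "}\n";

Drawing a canvas-sized quad with that shader bound effectively
computes the whole buffer in one pass.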

> Basically it's just the same as normal CPU threads, which would IMO be
> important to support, too.  Many systems (especially Linux ones) don't have
> graphics acceleration available, but they do have more than one CPU core.
> Today you can already get 8 cores, i.e. filters would be ~8 times faster if
> they utilised all of them (given at least 8 available rectangles).

Implementing multi-threading in GEGL is out of my scope, and I'm not
even sure it's in GEGL's scope.  GEGL is pretty low-level, and
threading can be implemented on top of GEGL by the relevant client
(i.e. GIMP).  Furthermore, be aware that threads don't necessarily
map onto separate cores, and not using threads doesn't mean the code
won't be parallelized[1].
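
That said, the quoted idea is straightforward for a client to do on
top of GEGL: hand different rectangles to different CPU threads.
Here is a purely illustrative sketch with made-up helpers (this is
not GEGL or GIMP code):

    #include <pthread.h>
    #include <stdio.h>

    #define N_THREADS 8

    typedef struct { int x, y, width, height; } Rect;

    /* stand-in for whatever per-rectangle work the client performs */
    static void
    process_rectangle (const Rect *r)
    {
      printf ("processing %dx%d at (%d, %d)\n",
              r->width, r->height, r->x, r->y);
    }

    static void *
    worker (void *data)
    {
      process_rectangle (data);
      return NULL;
    }

    int
    main (void)
    {
      pthread_t threads[N_THREADS];
      Rect      rects[N_THREADS];
      int       i;

      /* split an 800x800 canvas into 8 horizontal bands,
       * one thread (ideally one core) per band */
      for (i = 0; i < N_THREADS; i++)
        {
          rects[i] = (Rect) { 0, i * 100, 800, 100 };
          pthread_create (&threads[i], NULL, worker, &rects[i]);
        }

      for (i = 0; i < N_THREADS; i++)
        pthread_join (threads[i], NULL);

      return 0;
    }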

Currently, the changes I propose for introducing GPU support will be
added alongside GEGL's existing code paths, so that when OpenGL or
hardware acceleration isn't available, we can fall back to plain
software rendering on the CPU.
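
In shape, the dispatch I have in mind is no more than the following.
Every name below is hypothetical (nothing here exists in GEGL today);
the stubs only exist to make the fallback decision visible:

    #include <stdio.h>

    typedef struct { int x, y; } Tile;

    /* hypothetical probes and paths, for illustration only */
    static int  gpu_is_usable    (void)       { return 0; /* e.g. probe GL */ }
    static void process_tile_gpu (Tile *tile) { printf ("GPU path\n"); }
    static void process_tile_cpu (Tile *tile) { printf ("CPU path\n"); }

    static void
    process_tile (Tile *tile)
    {
      if (gpu_is_usable ())        /* GL context and extensions present? */
        process_tile_gpu (tile);   /* upload, run shader, read back */
      else
        process_tile_cpu (tile);   /* existing software code path   */
    }

    int
    main (void)
    {
      Tile tile = { 0, 0 };
      process_tile (&tile);
      return 0;
    }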

> Yes, I have found the relevant code sections in the GEGL source (at least I
> think I have ;) ), and they seem to have the linear structure I mentioned in
> my first email.  I mean that gegl_processor_work() returns a boolean telling
> you whether there is more work to do, in which case you have to call it
> again.  I think this is not usable for parallelism, because you always have
> to wait until the current packet (processed by gegl_processor_work()) is
> finished before you know whether you have to call it again.  For
> parallelism, a better approach would be for gegl_processor_work() to do
> everything at once, for instance by moving the outer while loop over the
> rectangles into an inner for loop.

I'm not really an expert on how GEGL uses threading frameworks (if at
all) to parallelize tile/rectangle-level operations.  My work will be
concerned with accelerating pixel-level operations by running each
pixel operation in parallel on the GPU.  The key point is that all
pixel operations should (theoretically) execute in parallel.  I'm not
in a position to think about parallelizing tiles/rectangles, though I
suspect that parallelization in that area is limited by GEGL's complex
graph dependencies.  I'd appreciate it if a GEGL expert stepped up and
clarified these issues.
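
For reference, the serial structure you describe matches GEGL's
documented GeglProcessor usage as I understand it; each call to
gegl_processor_work() finishes one chunk and returns FALSE once the
whole region has been computed:

    #include <gegl.h>

    /* Render "node" over "roi" with the documented driving loop. */
    static void
    render_region (GeglNode *node, const GeglRectangle *roi)
    {
      GeglProcessor *processor = gegl_node_new_processor (node, roi);
      gdouble        progress  = 0.0;

      while (gegl_processor_work (processor, &progress))
        ;  /* must wait for each chunk before asking for more */

      g_object_unref (processor);
    }

As you say, the caller only learns whether more work remains after the
current chunk completes, so this loop is inherently serial.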

[1] GCC, for example, is expected to gain automatic parallelization
through the Graphite framework.  Please see:
http://blog.internetnews.com/skerner/2009/04/gcc-44-improves-open-source-co.html
and http://www.phoronix.com/scan.php?page=news_item&px=NzIyMQ.
