Re: GPU support

I believe you are way off track and haven't understood what you are supposed to do.  It is not that we are forced to use OpenGL; it is that OpenGL is the best option.  You are saying that OpenCL or CUDA can be used, but that is simply wrong.  Why complicate the procedure with the OpenCL abstraction layer when there is no need, since the same things can easily be done using OpenGL?  In fact, I am sure you haven't understood the problem at all.  You feel that since an API called OpenCL exists, you can use it for GPU support, but it doesn't work that way.  You don't understand what OpenCL or CUDA is, and I am sure you haven't worked much with them.  They are compute platforms; they provide APIs quite different from what GEGL needs.  In fact, as I said before, they are in no way an option.  I am sure you have misunderstood the thing from the basics.  You can create a minimap and use it to distribute the workload of the threads either to the CPU using software rendering, to the GPU using hardware rendering, or even out to a network.  I don't know why you are running off track.

Since you were officially accepted for this project and I was officially rejected, I expected you would have a well-thought-out plan, but your idea of using OpenCL or CUDA seems extremely illogical to me.  It simply would not give the speed increase that can be obtained with a basic implementation.  This project is not a big or difficult task.


> Really, my only concern with CUDA (and Stream SDK) is that they are
> too vendor-specific (I'd love to be proven wrong, though).  So,
> implementing GPU support through them is a no-go, afaic.  I'm not
> willing to implement two GPU-enabled GEGL branches, one for NVIDIA and
> one for ATI video cards.  That would be too much hassle and would not
> fit sensibly into GSoC's timeline of 3 months.  I am willing to
> develop for GIMP and GEGL after GSoC, but I would prefer my targets
> to be realistic.  I really do agree that coding against CUDA and
> similar APIs is easier than coding against OpenGL; I'm just not
> convinced that the latter is an 'unnecessary' investment in our
> particular case.  The only API that I would really approve of is
> OpenCL, but as I have previously said, OpenCL implementations are
> still in their infancy.

I really feel that you have never been exposed to shaders or ever used GLSL or HLSL, because it seems to me that you don't have an understanding of the topic, and it would really be too difficult for you to complete this.
I have been working on GPUs for the last 2.5 years, and on things like cloud computing since last year.
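To make the point concrete, here is a minimal sketch (not taken from GEGL; every name in it is illustrative) of how a per-pixel operation maps onto a GLSL fragment shader driven from C:

  /* Minimal sketch, not from GEGL: a per-pixel brightness operation
   * expressed as a GLSL fragment shader and compiled from C.  All
   * names here are illustrative.  Extension loading (e.g. via GLEW)
   * may be required on some platforms. */
  #include <GL/gl.h>

  /* The GPU runs this shader body once per pixel, in parallel. */
  static const char *brightness_shader_src =
    "uniform sampler2D input_tex;\n"
    "uniform float     brightness;\n"
    "void main (void)\n"
    "{\n"
    "  vec4 pixel = texture2D (input_tex, gl_TexCoord[0].st);\n"
    "  pixel.rgb += brightness;\n"
    "  gl_FragColor = pixel;\n"
    "}\n";

  static GLuint
  compile_brightness_shader (void)
  {
    GLuint shader = glCreateShader (GL_FRAGMENT_SHADER);

    glShaderSource  (shader, 1, &brightness_shader_src, NULL);
    glCompileShader (shader);
    /* Real code would check GL_COMPILE_STATUS and the info log here. */

    return shader;
  }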



On Sun, May 10, 2009 at 4:32 PM, Jerson Michael Perpetua <jersonperpetua@xxxxxxxxx> wrote:
> Thanks for your answer.

No problem.  :)

> Yes, I hope that all needed operations can be done via OpenGL, too,
> because I have already experienced that CUDA sometimes causes problems
> even on supported platforms if you don't have exactly the same compiler
> options as the samples, etc.  On the other hand, it's an additional,
> unnecessary layer, because the OpenGL computations use the same things
> you can directly access via CUDA or equivalent SDKs (if I have
> interpreted the information I read about CUDA correctly).  This layer is
> optimized for rendering and not for computing, so, from a purely
> simplistic view, CUDA would be easier and more performant than OpenGL.
>
> I have never heard of OpenCL, but it seems to be very interesting (and
> not proprietary).  I hope that it will be available for the most
> important platforms soon.

Really, my only concern with CUDA (and Stream SDK) is that they are
too vendor-specific (I'd love to be proven wrong, though).  So,
implementing GPU support through them is a no-go, afaic.  I'm not
willing to implement two GPU-enabled GEGL branches, one for NVIDIA and
one for ATI video cards.  That would be too much hassle and would not
fit sensibly into GSoC's timeline of 3 months.  I am willing to
develop for GIMP and GEGL after GSoC, but I would prefer my targets
to be realistic.  I really do agree that coding against CUDA and
similar APIs is easier than coding against OpenGL; I'm just not
convinced that the latter is an 'unnecessary' investment in our
particular case.  The only API that I would really approve of is
OpenCL, but as I have previously said, OpenCL implementations are
still in their infancy.


Nevertheless, thank you for your concern.  We are of the same spirit.
I just wish we had a mature, non-proprietary, vendor-neutral,
platform-agnostic solution here and now.  But I'm forced to use
OpenGL, as it is what is currently available, imho.  Rest assured, I
will commit myself to porting my OpenGL implementation to OpenCL as
soon as implementations of the latter become available.  I also hope
that I'll have help by that time.  :)

> Basically it's just the same as normal CPU threads, which would IMO be
> important to support, too.  Many (especially Linux) systems don't have
> graphics acceleration available, but do have more than one CPU.  Today
> you already get 8 cores, i.e. filters would be ~8 times faster if they
> utilised all cores (given at least 8 available rectangles).

Implementing multi-threading in GEGL is out of my scope, and I'm not
even sure it's in GEGL's scope.  GEGL is pretty low-level, and
threading can be implemented on top of GEGL by the relevant client
(i.e. GIMP).  Furthermore, be aware that threads don't necessarily
map onto multiple cores, and not using threads doesn't necessarily
mean that the code won't be parallelized[1].

Currently, the changes I propose to introduce GPU support to GEGL
will be added to GEGL's existing code paths, so that when OpenGL
and/or hardware acceleration isn't available, we will be able to
fall back to plain software rendering (using the CPU).
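
A rough sketch of the idea (none of these function names exist in
GEGL; they are hypothetical and only illustrate the dispatch):

  /* Hypothetical sketch: a GPU code path with a CPU fallback.  None
   * of these functions exist in GEGL; they only illustrate the idea. */
  #include <glib.h>

  gboolean gegl_gpu_is_available (void);          /* hypothetical */
  void     gegl_gpu_process_tile (gpointer tile); /* hypothetical */
  void     gegl_cpu_process_tile (gpointer tile); /* hypothetical */

  static void
  process_tile (gpointer tile)
  {
    /* Use the GPU when OpenGL/hardware acceleration is present;
     * otherwise, fall back to plain software rendering on the CPU. */
    if (gegl_gpu_is_available ())
      gegl_gpu_process_tile (tile);
    else
      gegl_cpu_process_tile (tile);
  }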

> Yes, I have found the relevant code sections in the GEGL source (at
> least I think that I have ;) ), and they seem to have the linear
> structure I mentioned in my first email.  I mean that
> gegl_processor_work() returns a boolean telling you whether there is
> more work to do, in which case you have to call it again.  I think this
> is not usable for parallelism, because you always have to wait until the
> current packet (processed by gegl_processor_work()) is finished before
> you know whether you have to call it again.  For parallelism, a better
> approach would be for gegl_processor_work() to do everything at once,
> for instance by moving the outer while loop over the rectangles into an
> inner for loop.

I'm not really an expert with regard to how GEGL uses threading
frameworks (if at all) to parallelize tile/rectangle-level operations.
My work will be concerned with accelerating pixel-level operations by
parallelizing each pixel operation on the GPU.  The key point is that
all pixel operations should (theoretically) run in parallel.  I'm not
in a position to think about parallelizing tiles/rectangles, though I
suspect that parallelization in that area is limited because of GEGL's
complex graph dependencies.  I'd appreciate it if a GEGL expert
stepped up and clarified these issues.
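
For reference, the serial loop discussed above looks roughly like
this (based on GEGL's processor API as I understand it; exact
signatures may differ between versions):

  /* Sketch of the driving loop under discussion, using GEGL's
   * processor API; exact signatures may differ between versions. */
  #include <gegl.h>

  static void
  process_node (GeglNode *node, const GeglRectangle *roi)
  {
    GeglProcessor *processor = gegl_node_new_processor (node, roi);
    gdouble        progress  = 0.0;

    /* gegl_processor_work() processes one chunk and returns TRUE
     * while more work remains, so the caller must iterate serially. */
    while (gegl_processor_work (processor, &progress))
      ;

    g_object_unref (processor);
  }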

[1] GCC, for example, will later provide automatic parallelization
through the Graphite framework.  Please see:
http://blog.internetnews.com/skerner/2009/04/gcc-44-improves-open-source-co.html
and http://www.phoronix.com/scan.php?page=news_item&px=NzIyMQ.



--
Utkarsh Shukla,
Btech, Material Science and Metallurgy,
IIT KANPUR.
Mob: 9936339580
_______________________________________________
Gegl-developer mailing list
Gegl-developer@xxxxxxxxxxxxxxxxxxxxxx
https://lists.XCF.Berkeley.EDU/mailman/listinfo/gegl-developer
