Thanks Nicolas. I watched the video just this morning. As expected, it fits my general definition of "awesomeness". He discussed quite a few pitfalls, some of which apply to GEGL and some of which don't. Particularly striking is pitfall #4, where rendering the image took ~500 ms but transferring it from the GPU back to the CPU took ~3 seconds! That's just crazy! Hopefully, we will be able to dodge that issue by uploading textures while the GPU is computing (rough sketch in the P.S. below). This needs the rectangle-level CPU parallelization we are discussing at the moment. I am now convinced that this parallelization is needed to extract the GPU's (evil) power.

Another issue the talk didn't cover is OpenGL texture formats. I suspect (though I'm not quite sure, as always :) that Nona-GPU doesn't deal with different color formats. OpenGL shaders can only operate on vectors of 1 to 4 components, so I don't see how we can support Babl formats with more than 4 channels, short of splitting them across several textures (see the P.P.S.). Also, current-generation GPUs don't support double-precision floating-point data, so supporting "RGBA double", for example, would be difficult.

Kind regards,
Daerd
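
P.S. To make the "upload while the GPU computes" idea concrete, here is a rough sketch of how I imagine it could work (not GEGL code, and upload_tile() plus the fixed RGBA-float format are placeholders of mine). It stages the tile in a pixel buffer object so the driver can stream the data to the GPU asynchronously instead of blocking in glTexSubImage2D:

/* Sketch only: assumes a GL 2.1 context (or ARB_pixel_buffer_object)
 * and GLEW for extension loading. */

#include <GL/glew.h>
#include <string.h>

void
upload_tile (GLuint       texture,
             const float *pixels,
             int          width,
             int          height)
{
  GLuint pbo;
  size_t size = (size_t) width * height * 4 * sizeof (float);

  /* In real code we would reuse a pool of PBOs instead of
   * creating one per tile. */
  glGenBuffers (1, &pbo);
  glBindBuffer (GL_PIXEL_UNPACK_BUFFER, pbo);
  glBufferData (GL_PIXEL_UNPACK_BUFFER, size, NULL, GL_STREAM_DRAW);

  /* Copy the tile into the mapped buffer; the driver can DMA it
   * to the GPU while shaders keep running. */
  {
    void *mapped = glMapBuffer (GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy (mapped, pixels, size);
    glUnmapBuffer (GL_PIXEL_UNPACK_BUFFER);
  }

  /* With a PBO bound, the "pixels" argument is an offset into the
   * buffer, so this call returns without a synchronous CPU->GPU copy. */
  glBindTexture (GL_TEXTURE_2D, texture);
  glTexSubImage2D (GL_TEXTURE_2D, 0, 0, 0, width, height,
                   GL_RGBA, GL_FLOAT, (const GLvoid *) 0);

  glBindBuffer (GL_PIXEL_UNPACK_BUFFER, 0);
  glDeleteBuffers (1, &pbo);
}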
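
P.P.S. For Babl formats with more than 4 channels, the only scheme I can think of is de-interleaving the pixels into ceil(n/4) RGBA textures and sampling all of them in the shader. Another hypothetical sketch (split_channels() is a made-up helper; it assumes ARB_texture_float for the GL_RGBA32F_ARB internal format):

#include <GL/glew.h>
#include <stdlib.h>

GLuint *
split_channels (const float *pixels,  /* interleaved, n_channels per pixel */
                int          n_channels,
                int          width,
                int          height,
                int         *n_textures)
{
  int     n_tex    = (n_channels + 3) / 4;
  GLuint *textures = malloc (n_tex * sizeof (GLuint));
  float  *plane    = malloc ((size_t) width * height * 4 * sizeof (float));
  int     t, i, c;

  glGenTextures (n_tex, textures);

  for (t = 0; t < n_tex; t++)
    {
      /* De-interleave up to 4 channels into an RGBA plane, padding
       * the unused components of the last texture with zeros. */
      for (i = 0; i < width * height; i++)
        for (c = 0; c < 4; c++)
          {
            int channel = t * 4 + c;
            plane[i * 4 + c] = (channel < n_channels)
                               ? pixels[i * n_channels + channel]
                               : 0.0f;
          }

      glBindTexture (GL_TEXTURE_2D, textures[t]);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA32F_ARB, width, height,
                    0, GL_RGBA, GL_FLOAT, plane);
    }

  free (plane);
  *n_textures = n_tex;
  return textures;
}

The shader would then have to sample every one of those textures and reassemble the pixel, which burns one texture unit per 4 channels, so I'm not sure how far this scales.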