Re: [Gegl-developer] how well suited is gegl for realtime graphics

* Jelle Herold <jelle@xxxxxxxxx> [031127 22:12]:
> Hi,
> 
> I've been following this list for a while, and experimenting some
> with gegl. Now, my question is how well suited is (or rather, will be)
> Gegl for realtime 2D graphics. I need to build processing graphs such as
> 
> some ffmpeg decoders -> transformations, masks, convolutions -> sdl display
> 
> the graphs may end up quite large and the filter parameters will be
> changed almost every step, the resolution is low/normal (e.g. 640x480).
> 
> Or will I be better off using something like the OpenGL Imaging path?

short version:

for advanced processing, with today's technology, probably not.

long version:

This will of course depend on both the complexity of the graph and the
hardware available. Hardware acceleration of gegl is an issue that
hasn't been discussed much, but it would be nice if the architecture
allowed using OpenGL in the special cases where it would be feasible.

The use of high-precision sample models, e.g. 16-bit or floating
point, is unneeded for realtime display. I am myself experimenting
with an architecture that I see as a testing ground for things I want
to implement in gegl: realtime processing where the nodes in the graph
are ffmpeg decoding, filters, text generation, SDL display etc. With
such a system you either have to limit yourself to simple effects, or
extend it to allow parallel processing or hardware acceleration.

(For reference, the systems I've been testing on are an 800 MHz
Transmeta Crusoe and a 1.6 GHz Pentium IV, both with 512 MB of memory;
at realtime sizes, memory has never been an issue.)

The main idea I have for parallel processing is to split the processing
graph amongst the clients. If your graph is something similar to:

<?xml version='1.0' encoding='utf-8'?>
<gegl>
	<node name='empty_studio' filter='load_png'      >
		<att name='file' value='empty_studio.png'   />
	</node>
	<node name='video_feed' filter='load_video4linux'>
		<att name='device' value='/dev/video0'      />
		<att name='width'  value='640'              />
		<att name='height' value='480'              />
		<att name='fps'    value='25.0'             />
	</node>
	<node name='backdrop_movie' filter='load_ffmpeg' >
		<att name='file'   value='backdrop.mp4'     />
		<att name='frame'  value='4005112'          />
	</node>
	<node name='output' filter='display'             >
		<input source='final_image'                 />
	</node>
	<node name='chroma_matte' filter='matte_chroma'  >
		<input pad='0' source='video_feed'          />
		<input pad='1' source='empty_studio'        />
		<att name='alpha_color' value='0.02 1.0 0.1'/>
		<att name='tolerance'   value='0.04'        />
	</node>
	<node name='mixed_version' filter='place'        >
	    <input pad='0' source='backdrop_movie'      />
		<input pad='1' source='chroma_matte'        />
		<att name='x' value='0.5'                   />
		<att name='y' value='0.5'                   />
	</node>
	<node name='text_overlay' filter='freetype'      >
	    <input source='mixed_version'               />
		<att name='font'   value='freesans.ttf'     />
		<att name='x'      value='0.5'              />
		<att name='y'      value='0.8'              />
		<att name='size'   value='0.01'             />
		<att name='string' value='realtime compositing - gegl' />
		<att name='valign' value='middle'           />
		<att name='halign' value='center'           />
		<att name='color'  value='1.0 1.0 1.0'      />
	</node>
	<node name='output_image' filter='display'       >
	    <input source='text_overlay'                />
	</node>
</gegl>

Or, using ASCII art:

                  video_feed  empty_studio
                   |   ________|
                   |  /      
  backdrop_movie  chroma_matte    
   |   ___________|
   |  /          
  mixed_version
   |
   |
  text_overlay
   |
   |
  output_image

The processing could be split across machines if the network latency is
low enough to transmit raw images in real time.

One way the processing could be split is:


                 .
 [machine 2]     .   video_feed  empty_studio
                 .   |   ________|
                 .   |  /      
  backdrop_movie .  chroma_matte    
   |   __________.__|
   |  /          .          [machine 1]
  mixed_version  .       
   |             .
................................................  
   |               
  text_overlay       [machine 3]
   |
   |
  output_image


The load on each machine would be:

machine 1:
  get data from framegrabber 
  make and integrate matte
  push data to network
machine 2:
  get data from disk
  decode mpeg4
  get data from network
  "over" composition
  push data to network
machine 3:
  get data from network
  render text overlay
  push data to screen

All the extra operations, mainly data pushing, amount to:
	push data to network  * 2
	get data from network * 2
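
The "push data to network"/"get data from network" steps above could be
sketched roughly as follows. This is only an illustrative sketch, not
gegl code; the function names (push_frame, get_frame, recv_exact) are
hypothetical, and it assumes raw length-prefixed frames over an
already-connected TCP socket:

```python
import socket
import struct

# Hypothetical helpers for shipping raw frames between machines.

def push_frame(sock, frame):
    """Send one raw frame, prefixed with its length as a 4-byte big-endian int."""
    sock.sendall(struct.pack('!I', len(frame)) + frame)

def recv_exact(sock, n):
    """Read exactly n bytes from the socket, or fail if the peer closes early."""
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('peer closed mid-frame')
        buf += chunk
    return buf

def get_frame(sock):
    """Receive one length-prefixed raw frame."""
    (length,) = struct.unpack('!I', recv_exact(sock, 4))
    return recv_exact(sock, length)
```

Each network hop in the split graph would pair one push_frame on the
sending machine with one get_frame on the receiving machine, once per
frame.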

Is this a really large workload? Computing the raw data rate, assuming
a full non-interlaced PAL transmission, 720x576 RGBA 8-bit unsigned at 25 fps:

a raw frame stream is 720 * 576 * 4 * 25 = 41472000  bytes/sec
                                         = 40500     kbytes/sec
                                         = 39.551    mbytes/sec
                                         = 316.41    mbit/sec
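
The arithmetic above can be checked in a few lines (using 1024-based
units, as in the figures above):

```python
# Raw bandwidth for full non-interlaced PAL, RGBA 8-bit unsigned, 25 fps.
width, height, bytes_per_pixel, fps = 720, 576, 4, 25

bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second)                           # 41472000 bytes/sec
print(bytes_per_second / 1024)                    # 40500.0 kbytes/sec
print(round(bytes_per_second / 1024**2, 3))       # 39.551 mbytes/sec
print(round(bytes_per_second * 8 / 1024**2, 2))   # 316.41 mbit/sec
```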

At that rate I realize that clustering machines on a 100 Mbit network
would not work :(, unless you implement compression on the data, which
is also unlikely to keep up. This leads me to think that clustering
would probably be most useful for non-realtime rendering, where the
graphs are really complex.

/Øyvind K.

-- 
  .^.
  /V\    Øyvind Kolås,  Gjøvik University College, Norway 
 /(_)\   <oeyvindk@xxxxxx>,<pippin@xxxxxxxxxxxxxxxxxxxxx>
  ^ ^    
