Re: [FilmGimp] Re: [Gimp-developer] Film Gimp and GIMP

On 09-Dec-2002, Stephen J Baker wrote: 
> I'm not suggesting that this would be useful to GIMP - but that other
> developers who are working in 3D using modern rendering hardware will
> soon need support for 32 bit floating point texture maps.
> 
> So, I was pointing out that floating point imagery is soon going to
> be important to many other user communities outside of the film industry
> and it follows that floating point images ought to be loadable, editable
> and save-able from within mainstream GIMP.
> 
> IMHO, that's a better route to take than going to 16 bit or even integer
> 32 bit.

I'm not proposing Gimp3D here: I'm just proposing using GL as an advanced
framebuffer. Unless X (or win32) itself supports single-precision floating
point (spfp) bit depths, our only recourse is to use GL textures as a
framebuffer to display the image.
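
To make that concrete, here's a rough sketch of the idea -- not GIMP code,
just an illustration assuming GLUT and plain OpenGL 1.1. GL accepts GL_FLOAT
pixel data on upload and quantizes it to whatever the visual supports, so the
display path stops caring what depth the image really is:

/* Sketch: display an spfp image by uploading it as a GL texture and
 * drawing a window-sized quad.  Assumes GLUT and classic OpenGL 1.1. */
#include <GL/glut.h>

#define W 256
#define H 256

static float pixels[W * H * 3];   /* spfp RGB image; values may exceed 1.0 */

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    int i;
    for (i = 0; i < W * H * 3; i++)           /* dummy gradient data */
        pixels[i] = (float)(i % (W * 3)) / (float)(W * 3);

    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(W, H);
    glutCreateWindow("spfp preview");

    /* GL takes GL_FLOAT pixel data even on an 8-bit visual; the driver
     * clamps/quantizes on upload. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, W, H, 0,
                 GL_RGB, GL_FLOAT, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}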

> > I've been asking for spfp per-channel rendering for a totally different
> > reason: not only can you have numbers above pure white (> 1.0) and below
> > pure black (< 0.0), but you can properly use SSE to accelerate FP
> > calculations (using gcc 3.2.x and up with -msse and -mfpmath=sse,387).
> > On my Intel P3, apps that heavily used spfp math saw a speed increase of
> > 2x-4x, all due to the extra execution units chugging along.
> 
> You could use a modern graphics pipeline for that too - but it's a lot less
> friendly to code for - and it won't port to all graphics cards - so it's
> probably not likely to be a thing that GIMP would want to make use of.
> 
> On something like an ATI Radeon 9700 or the upcoming nVidia GeForceFX,
> you can create floating point texture maps - and use the incredibly
> fast 'fragment shader' processor to composite, scale, rotate, perspect,
> tile or otherwise process them into the floating point frame buffer,
> then read that back into the CPU at the end.  Whether that's faster
> than doing it in the CPU alone depends on the complexity of the
> per-pixel processing - for complex per-pixel operations, I'd expect
> the graphics card to be able to beat the CPU - but for simple operations
> the data transfer overheads into and out of the graphics card would
> kill you.
> 
> The nVidia card also supports a 16 bit 'half float' format which would
> be interesting for HDR.

All of that's basically worthless to us _except_ for "preview" modes, where
it doesn't matter whether the image looks exactly right, because it's only an
approximation. We can't use it for real rendering, because an xcf has to look
the same on _all_ machines that view it: no matter what video card I have, it
has to look the same on someone else's box, no matter what video card he/she
has.
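
As an aside on the SSE point I quoted above: unlike the GPU path, it _is_
deterministic across boxes, since it's plain IEEE spfp math on the CPU, and
it needs nothing exotic in the source -- it's purely a compiler-flag thing.
A toy sketch of the kind of per-pixel loop that benefits (with gcc,
-mfpmath=sse routes the float math through scalar SSE instead of x87):

/* Build along the lines of: gcc -O2 -msse -mfpmath=sse,387 -o mix mix.c */
#include <stdio.h>

/* Blend two spfp scanlines; values may fall outside [0,1], which is
 * exactly the HDR headroom point. */
static void mix(const float *a, const float *b, float *out, int n, float t)
{
    int i;
    for (i = 0; i < n; i++)
        out[i] = a[i] + t * (b[i] - a[i]);
}

int main(void)
{
    float a[4] = { 0.0f, 0.5f, 1.0f, 2.0f };  /* 2.0 = brighter than white */
    float b[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
    float o[4];
    mix(a, b, o, 4, 0.25f);
    printf("%f %f %f %f\n", o[0], o[1], o[2], o[3]);
    return 0;
}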

The half-float mode might be slightly useful for imitating a
16-bit-per-channel display, but that goes back to using GL textures as a
framebuffer.
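
For anyone who hasn't seen it, half float is just 1 sign bit, 5 exponent
bits and 10 mantissa bits, so going down from spfp is a few shifts. A quick
truncating sketch (it skips rounding, NaN and denormal subtleties -- the
point is only how little precision survives):

#include <stdio.h>

typedef unsigned short half;

static half float_to_half(float f)
{
    union { float f; unsigned int u; } v;
    unsigned int sign, exp, mant;

    v.f = f;
    sign = (v.u >> 16) & 0x8000;
    exp  = (v.u >> 23) & 0xff;
    mant = v.u & 0x7fffff;

    if (exp <= 112)          /* too small for a half normal: flush to zero */
        return (half)sign;
    if (exp >= 143)          /* too big: clamp to half infinity            */
        return (half)(sign | 0x7c00);

    /* rebias the exponent (127 -> 15), keep the top 10 mantissa bits */
    return (half)(sign | ((exp - 112) << 10) | (mant >> 13));
}

int main(void)
{
    printf("1.0     -> 0x%04x\n", float_to_half(1.0f));     /* 0x3c00   */
    printf("65504.0 -> 0x%04x\n", float_to_half(65504.0f)); /* half max */
    printf("1e6     -> 0x%04x\n", float_to_half(1e6f));     /* clamps   */
    return 0;
}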
 
> There were a bunch of papers at SigGraph last year about rendering
> HDR images on a standard display without losing important visual information.
> 
> All interesting stuff.
> 

That wouldn't make a lot of sense. HDR isn't meant for "displaying"; it's
meant for holding extra data that would otherwise be lost. For a final end
target (e.g. png, jpg, dvd) the extra HDR data is thrown out because it is
no longer needed.
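
The throwing-out step itself is trivial. Roughly this -- a straight clamp
plus gamma, purely illustrative, where real tone mapping would be smarter:

/* Collapse spfp HDR samples to the 8-bit-per-channel range a final
 * target like png/jpg can hold.  Build with -lm for pow(). */
#include <math.h>
#include <stdio.h>

static unsigned char to_8bit(float v, float gamma)
{
    if (v < 0.0f) v = 0.0f;          /* below pure black: thrown out */
    if (v > 1.0f) v = 1.0f;          /* above pure white: thrown out */
    return (unsigned char)(255.0 * pow(v, 1.0 / gamma) + 0.5);
}

int main(void)
{
    float samples[4] = { -0.2f, 0.18f, 1.0f, 3.5f };
    int i;
    for (i = 0; i < 4; i++)
        printf("%+.2f -> %3d\n", samples[i], to_8bit(samples[i], 2.2f));
    return 0;
}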

-- 
Patrick "Diablo-D3" McFarland || unknown@xxxxxxxxx
"Computer games don't affect kids; I mean if Pac-Man affected us as kids, we'd 
all be running around in darkened rooms, munching magic pills and listening to
repetitive electronic music." --Kristian Wilson, Nintendo, Inc, 1989


