Re: Replacing aging VDR for DVB-S2

On 2011-01-15 22:36 +0000, Tony Houghton wrote:

> I wonder whether it might be possible to use a more economical card
> which is only powerful enough to decode 1080i without deinterlacing it
> and take advantage of the abundant CPU power most people have nowadays
> to perform software deinterlacing. It may not be possible to have
> something as sophisticated as NVidia's temporal + spatial, but some of
> the existing software filters should scale up to HD without overloading
> the CPU, seeing as it wouldn't be doing the decoding too.

It's possible, but realtime GPU deinterlacing is more energy-efficient:

- For CPU deinterlacing you'd need something like Greedy2Frame or
TomsMoComp. They should give about the same quality as Nvidia's temporal
deinterlacer, but the code would need to be threaded to make good use of
lower-clocked multicore CPUs (a rough sketch of that slicing approach is
below).

Yadif almost matches temporal+spatial in quality, but it will also be
about 50% slower than Greedy2Frame.

- Hardware-decoded video is already in GPU memory, and moving
1920x1080-pixel frames around is not free: a 1920x1080 YV12 frame is
roughly 3 MB, so at 50 frames per second that is already about 150 MB/s
in each direction.

- Simple motion-adaptive, edge-interpolating deinterlacing parallelizes
easily on GPU architectures, so it is more efficient there than on a
serial processor. For example, a GT 220 can do 1080i deinterlacing at
more than 150 fps (output), so normal 50 fps deinterlacing only loads the
chip partially and draws correspondingly less power. The GT 430 is
currently worse because of an unoptimized filter implementation:
http://nvnews.net/vbulletin/showthread.php?p=2377750#post2377750

Still, only the latest CPU generation can reach that kind of performance
with a highly optimized software deinterlacer.
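
To make the threading point from the first bullet a bit more concrete,
here is roughly what the slicing could look like. This is only a sketch:
the per-line filter is a trivial line average standing in for a real
kernel like Greedy2Frame, and WIDTH/HEIGHT/THREADS and all the names are
made up for illustration.

/* Minimal slicing sketch: split each output frame into horizontal bands
 * and let one worker thread fill in each band.  The per-line "filter"
 * below is only a trivial line average standing in for a real kernel
 * like Greedy2Frame; buffer layout and names are purely illustrative. */
#include <pthread.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define WIDTH   1920
#define HEIGHT  1080
#define THREADS 4

struct slice_job {
    const uint8_t *in;   /* woven interlaced frame, luma plane only */
    uint8_t *out;        /* progressive output frame */
    int y0, y1;          /* rows this worker is responsible for */
    int keep_parity;     /* 0 = keep even lines, 1 = keep odd lines */
};

static void *deinterlace_slice(void *arg)
{
    struct slice_job *job = arg;

    for (int y = job->y0; y < job->y1; y++) {
        const uint8_t *src = job->in + (size_t)y * WIDTH;
        uint8_t *dst = job->out + (size_t)y * WIDTH;

        if ((y & 1) == job->keep_parity || y == 0 || y == HEIGHT - 1) {
            memcpy(dst, src, WIDTH);     /* kept line (or frame edge) */
        } else {
            const uint8_t *above = src - WIDTH;   /* lines of the kept field */
            const uint8_t *below = src + WIDTH;
            for (int x = 0; x < WIDTH; x++)
                dst[x] = (uint8_t)((above[x] + below[x] + 1) / 2);
        }
    }
    return NULL;
}

int main(void)
{
    uint8_t *in  = calloc((size_t)WIDTH * HEIGHT, 1);
    uint8_t *out = malloc((size_t)WIDTH * HEIGHT);
    pthread_t tid[THREADS];
    struct slice_job job[THREADS];
    int rows = HEIGHT / THREADS;

    if (!in || !out)
        return 1;

    for (int i = 0; i < THREADS; i++) {
        job[i] = (struct slice_job){ in, out, i * rows,
                                     i == THREADS - 1 ? HEIGHT : (i + 1) * rows,
                                     0 };
        pthread_create(&tid[i], NULL, deinterlace_slice, &job[i]);
    }
    for (int i = 0; i < THREADS; i++)
        pthread_join(tid[i], NULL);

    free(in);
    free(out);
    return 0;
}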

> 
> Alternatively, use software decoding, and hardware deinterlacing.

GPU video decoding is very efficient thanks to dedicated hardware. I'd
guess that current chips only use about 3 Watts for high-bitrate
1080i25.

Also, decoding and filtering aren't executed on the same parts of the
GPU chip. They are almost perfectly parallel processes, so combined
throughput will be that of the slower process.
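
Just to put numbers on that (made-up rates, toy model): with perfect
overlap the pair delivers the rate of the slower stage, whereas squeezing
both onto the same unit would add up the per-frame costs.

/* Toy model of the point above: decode and deinterlacing run on separate
 * GPU units, so when they are pipelined the output rate is limited by
 * the slower stage.  The rates below are placeholders, not measurements. */
#include <stdio.h>

static double pipelined_fps(double decode_fps, double filter_fps)
{
    return decode_fps < filter_fps ? decode_fps : filter_fps;   /* min() */
}

static double serialized_fps(double decode_fps, double filter_fps)
{
    /* If both stages had to share one unit, the per-frame times add up. */
    return 1.0 / (1.0 / decode_fps + 1.0 / filter_fps);
}

int main(void)
{
    double dec = 200.0, filt = 150.0;   /* hypothetical rates */
    printf("pipelined:  %.0f fps\n", pipelined_fps(dec, filt));   /* 150 */
    printf("serialized: %.0f fps\n", serialized_fps(dec, filt));  /* ~86 */
    return 0;
}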


> Somewhere on linuxtv.org there's an article about using fairly simple
> OpenGL to mimic what happens to interlaced video on a CRT, but I don't
> know how good the results would look.

Sounds like normal bobbing with interpolation. Even if it simulates a
phosphor delay, it probably won't look much better than MPlayer's -vf
tfields or the bobber in VDPAU.

Sharp interlaced (and progressive) video is quite flickery on a CRT too.


> BTW, speaking of temporal and spatial deinterlacing: AFAICT one means
> combining fields to provide maximum resolution with half the frame rate
> of the interlaced fields, and the other maximises the frame rate while
> discarding resolution; but which is which? And does NVidia's temporal +
> spatial try to give the best of both worlds through some sort of
> interpolation?

Temporal = motion-adaptive deinterlacing at either half or full field
rate; some programs refer to the latter as "2x". "Motion adaptive" means
that the filter detects the interlaced (combed) parts of each frame and
adjusts the deinterlacing accordingly, which gives better quality in
stationary areas.

Temporal-spatial = Temporal with edge-directed interpolation to smooth
jagged edges of moving objects.

Both methods give about the same spatial and temporal resolution but
temporal-spatial will look nicer.
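
If it helps, here is the gist of the motion-adaptive decision for a
single missing luma sample. This is only an illustration of the idea, not
Nvidia's actual filter: the threshold and the names are invented, and a
real implementation looks at more neighbours and does proper
edge-directed interpolation in the spatial step.

/* One missing output sample: prev/next are the samples at this position
 * in the opposite-parity fields before and after the current field;
 * above/below are its spatial neighbours in the current field.
 * Threshold and names are made-up demo values. */
#include <stdint.h>
#include <stdlib.h>

#define MOTION_THRESHOLD 10   /* arbitrary demo value */

static uint8_t deinterlace_pixel(uint8_t prev, uint8_t next,
                                 uint8_t above, uint8_t below)
{
    if (abs((int)prev - (int)next) < MOTION_THRESHOLD) {
        /* No motion here: keep the detail from the neighbouring fields
         * ("weave"), preserving full vertical resolution. */
        return (uint8_t)(((int)prev + (int)next + 1) / 2);
    }
    /* Motion: fall back to spatial interpolation.  Plain temporal mode
     * stops at this vertical average; temporal-spatial replaces it with
     * edge-directed interpolation so diagonal edges of moving objects
     * don't turn into staircases. */
    return (uint8_t)(((int)above + (int)below + 1) / 2);
}

int main(void)
{
    /* Stationary area keeps field detail; moving area uses the fallback. */
    uint8_t stationary = deinterlace_pixel(120, 122, 80, 160);
    uint8_t moving     = deinterlace_pixel(40, 200, 80, 160);
    return (stationary == 121 && moving == 120) ? 0 : 1;
}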

--Niko



_______________________________________________
vdr mailing list
vdr@xxxxxxxxxxx
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr

