On 2011-01-18 14:49 +0000, Tony Houghton wrote:

> I still can't translate that explanation into simple mechanics. Is
> temporal like weave and spatial like bob or the other way round? Or
> something a little more sophisticated, interpolating parts of the
> picture belonging to the "wrong" field from previous and/or next frames?

"Temporal 1x" weaves the parts of the frame that aren't combed (stationary objects) and interpolates one of the fields to fill the combed parts. I don't think it uses temporal information from other fields while interpolating; that would result in blurry video without motion compensation, which is too heavy at least for low-end GPUs. The output rate for 50 Hz interlaced video is 25 fps.

"Temporal 2x" does the same but outputs one frame for each input field, keeping full temporal and spatial resolution. The output rate is 50 fps.

"Temporal spatial 1x" does the same as "temporal 1x" but also smooths the rough diagonal edges in the interpolated parts of the frame. The output rate is 25 fps.

"Temporal spatial 2x" does the same as "temporal 2x" but smooths the edges. The output rate is 50 fps.

So the "temporal" part refers to motion adaptiveness, i.e. some kind of combing detection in a weaved frame. I haven't written a deinterlacer myself, so I can't say exactly which methods are used. If you want to know more about the "spatial" part of these filters, search for Edge-Directed Interpolation (EDI). Yadif uses a similar technique.

--Niko

_______________________________________________
vdr mailing list
vdr@xxxxxxxxxxx
http://www.linuxtv.org/cgi-bin/mailman/listinfo/vdr
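[Editor's note: the motion-adaptive weave-or-interpolate idea described above ("temporal 1x") can be sketched in a few lines. This is a hypothetical illustration, not VDPAU's actual implementation; the function name, the per-pixel motion threshold, and the use of a simple line average instead of edge-directed interpolation are all assumptions for clarity.]

```python
def deinterlace_temporal_1x(prev_frame, cur_frame, threshold=10):
    """Sketch of motion-adaptive "temporal 1x" deinterlacing.

    Frames are grayscale images given as lists of rows of pixel values.
    Static pixels are weaved (copied unchanged from the full frame);
    pixels that changed between consecutive frames are treated as combed,
    and on one field's lines the value is replaced by the average of the
    lines above and below, which belong to the other field.
    """
    h = len(cur_frame)
    w = len(cur_frame[0])
    out = [row[:] for row in cur_frame]     # start from the weaved frame
    for y in range(1, h - 1, 2):            # scan lines of one field
        for x in range(w):
            moved = abs(cur_frame[y][x] - prev_frame[y][x]) > threshold
            if moved:
                # Combed here: fill in spatially from the other field.
                out[y][x] = (cur_frame[y - 1][x] + cur_frame[y + 1][x]) // 2
    return out
```

A real deinterlacer would detect combing more robustly and, for the "temporal spatial" variants, replace the plain line average with edge-directed interpolation; this sketch only shows the weave-vs-interpolate decision that makes the filter motion-adaptive.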