> Oliver Seitz <info <at> vtnd.de> writes:
>
>> One problem that might occur is that most graphic cards can only
>> accelerate one video stream at a time, so all but the first video has to
>> be decoded, scaled and displayed entirely by the CPU.
>
> I cannot comment on OS X, but at least for NVIDIA's implementation of xv,
> gl and VDPAU, this is not correct.
>

Thank you for correcting me. I have to admit that I'm not up to date with
recent graphics cards, nor with NVIDIA, for that matter. But there are a
number of low-powered or old graphics cards (to name a few: Matrox G550,
ATI FireMV2200, VIA CN700, Intel G945) which don't do much besides YUV
conversion and scaling, and even that on no more than one video at a time.
(Some of them can do a bit more, but it depends heavily on the video codec
and the like. Nothing as versatile as VDPAU right now.)

So when writing software that is to run on arbitrary machines, it might be
a good idea to at least avoid runtime scaling on more than one video
simultaneously. I think we can agree on that, no?

Greets,
Kiste
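P.S. One way to sidestep runtime scaling entirely is to pre-scale the
material once at encode time, so playback only has to blit. A rough sketch
with ffmpeg (the 640x360 target and the file names are just assumptions;
pick whatever size your player actually displays):

```shell
# Generate a short synthetic 720p clip so the example is self-contained
# (in practice you would start from your real source file instead).
ffmpeg -y -f lavfi -i testsrc=duration=1:size=1280x720:rate=25 input.mp4

# Pre-scale once to the display size; playback then needs no CPU scaling.
# -an drops audio here only because the synthetic clip has none.
ffmpeg -y -i input.mp4 -vf scale=640:360 -an prescaled.mp4
```

With one stream pre-scaled like this, even a card that can only accelerate
a single video overlay leaves the CPU with far less to do for the others.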