>> Raspberry Pi displaying video with subtitles or other controls. I was
>> thinking of the fullscreen case, but if zero-copy video can be made to
>> work on the main desktop then that would be even better.
>>
>> If displaying 4k video the Pi does not have enough bandwidth left for a
>> single frame copy, convert or merge, so I need hardware scaling,
>> composition & display taking the raw video frame (it's in a dmabuf). The
>> raw video is in a somewhat unusual format; I'd expect the other layers to
>> be ARGB. The Pi h/w can do this, and I believe I can make it work via
>> DRM if I own the screen, so that was where I started.
>>
>> >Why not use an xdg_toplevel and wl_subsurface?
>>
>> Probably because I am woefully underinformed about how I should be doing
>> stuff properly. Please feel free to point me in the correct direction -
>> any example that takes NV12 video (it isn't NV12, but if NV12 works then
>> SAND can probably be made to work too) would be a great start. Also,
>> Wayland hasn't yet come to the Pi, though it will shortly, using mutter.
>
>By SAND do you mean one of these vc4-specific buffer tilings [1]? e.g.
>BROADCOM_SAND64, SAND128 or SAND256?
>
>[1]: https://drmdb.emersion.fr/formats?driver=vc4

Yes - for SAND8 (or SAND128 in your terms) DRM output we have the required
types as NV12 plus a Broadcom modifier. Then there is SAND30 for 10-bit
output, which fits the same column tiling but packs three 10-bit quantities
into 32 bits with 2 junk (zero) bits. Again we have a DRM definition for
that, which I think may have made it upstream.

>The fullscreen case may work already on all major Wayland compositors,
>assuming the video size matches exactly the current mode. You'll need to
>use the linux-dmabuf Wayland extension to pass NV12 buffers to the
>compositor.
>
>If you want to add scaling into the mix, you'll need to use the viewporter
>extension as well. Most compositors aren't yet rigged up for direct
>scan-out; they'll fall back to composition.
>Weston is your best bet if you want to try this: it supports direct
>scan-out to multiple KMS planes with scaling and cropping. There is some
>active work in wlroots to support this. I'm not aware of any effort in
>this direction for mutter or kwin at the time of writing.
>
>If you want to also use KMS planes with other layers (RGBA or something
>else), then you'll need to set up wl_subsurfaces with the rest of the
>content. As said above, Weston will do its best to offload the composition
>work to KMS planes. You'll need to make sure each buffer you submit can be
>scanned out by the display engine -- there's not yet a generic way of
>doing it, but the upcoming linux-dmabuf hints protocol will fix that.
>
>If you want to get started, maybe have a look at clients/simple-dmabuf-gbm
>in Weston.
>
>Hope this helps!

Very many thanks for the pointers - to a large extent my problem is that I
don't know what should work in order to build something around it and then
work out why it doesn't. I've got video decode down pat, but modern display
still eludes me - I grew up on STBs and the like, where you could just use
the h/w directly; now it's a lot more controlled.

Ta again

John Cox
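
PS: for anyone following along, the SAND30 packing described above (three
10-bit samples per 32-bit word, with the top two bits left as zero padding)
can be sketched roughly like this. The LSB-first sample order within the
word is an assumption on my part - check the vc4/DRM format definitions for
the actual layout:

```python
def pack_sand30(s0, s1, s2):
    """Pack three 10-bit samples into one 32-bit word.

    Assumed layout: s0 in bits 0-9, s1 in bits 10-19, s2 in bits 20-29;
    the top two bits stay zero ("junk" padding).
    """
    for s in (s0, s1, s2):
        if not 0 <= s < (1 << 10):
            raise ValueError("each sample must fit in 10 bits")
    return s0 | (s1 << 10) | (s2 << 20)


def unpack_sand30(word):
    """Recover the three 10-bit samples from a packed 32-bit word."""
    return word & 0x3FF, (word >> 10) & 0x3FF, (word >> 20) & 0x3FF
```

Round-tripping a few values (e.g. pack then unpack (1023, 0, 512)) is a
quick way to sanity-check whichever bit order the hardware actually uses.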