On Thu, Oct 17, 2019 at 4:19 PM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
>   Hi,
>
> > That said, Chrome OS would use a similar model, except that we don't
> > use ION. We would likely use minigbm backed by virtio-gpu to allocate
> > appropriate secure buffers for us and then import them to the V4L2
> > driver.
>
> What exactly is a "secure buffer"? I guess a gem object where read
> access is not allowed, only scanout to display? Who enforces this?
> The hardware? Or the kernel driver?

In general, it's a buffer which can be accessed only by a specific set
of entities. The set depends on the use case and the level of security
you want to achieve.

In Chrome OS we at least want to make such buffers completely
inaccessible to the guest, enforced by the VMM, for example by not
mapping the corresponding memory into the guest address space (and not
allowing transfers if the virtio-gpu shadow buffer model is used).

Beyond that, the host memory itself could be further protected by some
hardware mechanism or another hypervisor running above the host OS,
as in the ARM TrustZone model. That shouldn't matter for a VM guest,
though.

> It might make sense for virtio-gpu to know that concept, to allow guests
> ask for secure buffers.
>
> And of course we'll need some way to pass around identifiers for these
> (and maybe other) buffers (from virtio-gpu device via guest drivers to
> virtio-vdec device). virtio-gpu guest driver could generate a uuid for
> that, attach it to the dma-buf and also notify the host so qemu can
> maintain a uuid -> buffer lookup table.

That could still be a guest physical address. As on a bare-metal system
with TrustZone, there could be physical memory that is not accessible
to the CPU.

Best regards,
Tomasz