Re: [virtio-dev] [PATCH] [RFC RESEND] vdec: Add virtio video decode device specification

On Thu, Oct 17, 2019 at 4:44 PM Gerd Hoffmann <kraxel@xxxxxxxxxx> wrote:
>
>   Hi,
>
> > > Also note that the guest manages the address space, so the host can't
> > > simply allocate guest page addresses.
> >
> > Is this really true? I'm not an expert in this area, but on a bare
> > metal system it's the hardware or firmware that sets up the various
> > physical address allocations on a hardware level and most of the time
> > most of the addresses are already pre-assigned in hardware (like the
> > DRAM base, various IOMEM spaces, etc.).
>
> Yes, the firmware does it.  Same in a VM, ovmf or seabios (which runs
> inside the guest) typically does it.  And sometimes the linux kernel
> too.
>
> > I think that means that we could have a reserved region that could be
> > used by the host for dynamic memory hot-plug-like operation. The
> > reference to memory hot-plug here is fully intentional, we could even
> > use this feature of Linux to get struct pages for such memory if we
> > really wanted.
>
> We try to avoid such quirks whenever possible.  Negotiating such things
> between qemu and firmware can be done if really needed (and actually is
> done for memory hotplug support), but it's an extra interface which
> needs maintenance.
>
> > > Mapping host virtio-gpu resources
> > > into guest address space is planned, it'll most likely use a pci memory
> > > bar to reserve some address space.  The host can map resources into that
> > > pci bar, on guest request.
> >
> > Sounds like a viable option too. Do you have a pointer to some
> > description on how this would work on both host and guest side?
>
> Some early code:
>   https://git.kraxel.org/cgit/qemu/log/?h=sirius/virtio-gpu-memory-v2
>   https://git.kraxel.org/cgit/linux/log/?h=drm-virtio-memory-v2
>
> Branches have other stuff too, look for "hostmem" commits.
>
> Not much code yet beyond creating a pci bar on the host and detecting
> presence in the guest.
>
> On the host side qemu would create subregions inside the hostmem memory
> region for the resources.
>
> On the guest side we can ioremap stuff, like vram.
>
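
Just to check my understanding of the host side: is that roughly the
sketch below? (Hypothetical names, written against the generic QEMU
memory API rather than your actual branch.)

#include "qemu/osdep.h"
#include "exec/memory.h"

/* Hypothetical helper, not taken from the sirius/virtio-gpu-memory-v2
 * branch: expose one host-side resource inside the "hostmem" bar
 * region. */
static void hostmem_map_resource(MemoryRegion *hostmem, uint64_t bar_offset,
                                 void *res_hva, uint64_t size)
{
    MemoryRegion *sub = g_new0(MemoryRegion, 1);

    /* Wrap the resource's host memory in a RAM-backed memory region... */
    memory_region_init_ram_ptr(sub, NULL, "virtio-gpu-hostmem-res",
                               size, res_hva);
    /* ...and make it visible to the guest at the requested offset inside
     * the PCI bar. */
    memory_region_add_subregion(hostmem, bar_offset, sub);
}
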
> > > Hmm, well, pci memory bars are *not* backed by pages.  Maybe we can use
> > > Documentation/driver-api/pci/p2pdma.rst though.  With that we might be
> > > able to lookup buffers using device and dma address, without explicitly
> > > creating some identifier.  Not investigated yet in detail.
> >
> > Not backed by pages as in "struct page", but those are still regular
> > pages of the physical address space.
>
> Well, maybe not.  Host gem objects could live in device memory, and if we
> map them into the guest ...

That's an interesting scenario, but in that case would we still want
to map it into the guest? I think in such a case we may need to have a
shadow buffer in regular RAM, and that's already implemented in
virtio-gpu.
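
To be clear, by the shadow buffer path I mean the existing flow where
the resource stays backed by guest pages and is synced with transfer
commands, roughly like this hypothetical, simplified guest-driver
snippet:

#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/virtio_gpu.h>

/* Hypothetical helper: fill a transfer command that copies the guest
 * shadow pages of a resource to the host-side (device memory) copy. */
static void fill_transfer_to_host(struct virtio_gpu_transfer_to_host_2d *cmd,
                                  u32 resource_id, u32 width, u32 height)
{
    memset(cmd, 0, sizeof(*cmd));
    cmd->hdr.type = cpu_to_le32(VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D);
    cmd->resource_id = cpu_to_le32(resource_id);
    cmd->r.width = cpu_to_le32(width);
    cmd->r.height = cpu_to_le32(height);
}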

>
> > That said, currently the sg_table interface is only able to describe
> > physical memory using struct page pointers.  It's been a long standing
> > limitation affecting even bare metal systems, so perhaps it's just the
> > right time to make them possible to use some other identifiers, like
> > PFNs?
>
> I doubt you can handle pci memory bars like regular ram when it comes to
> dma and iommu support.  There is a reason we have p2pdma in the first
> place ...

The thing is that such bars would actually be backed by regular host
RAM. Do we really need the complexity of real PCI bar handling for
that?
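
And if it really is plain RAM, the guest side could presumably just map
it as ordinary cacheable memory, something like this hypothetical
sketch:

#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical guest-side sketch: if the hostmem bar is backed by
 * ordinary host RAM, a plain write-back mapping should be enough and no
 * p2pdma-style special casing is needed. */
static void *map_hostmem_bar(resource_size_t bar_base, size_t size)
{
    /* MEMREMAP_WB requests a normal cacheable mapping. */
    return memremap(bar_base, size, MEMREMAP_WB);
}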

Best regards,
Tomasz


