Re: issues with emulated PCI MMIO backed by host memory under KVM

Hi,

I'm going to ask some stupid questions here...

On Fri, Jun 24, 2016 at 04:04:45PM +0200, Ard Biesheuvel wrote:
> Hi all,
> 
> This old subject came up again in a discussion related to PCIe support
> for QEMU/KVM under Tianocore. The fact that we need to map PCI MMIO
> regions as cacheable is preventing us from reusing a significant slice
> of the PCIe support infrastructure, and so I'd like to bring this up
> again, perhaps just to reiterate why we're simply out of luck.
> 
> To refresh your memories, the issue is that on ARM, PCI MMIO regions
> for emulated devices may be backed by memory that is mapped cacheable
> by the host. Note that this has nothing to do with the device being
> DMA coherent or not: in this case, we are dealing with regions that
> are not memory from the POV of the guest, and it is reasonable for the
> guest to assume that accesses to such a region are not visible to the
> device before they hit the actual PCI MMIO window and are translated
> into cycles on the PCI bus. 

For the sake of completeness, why is this reasonable?

Is this how any real ARM system implementing PCI would actually work?

> That means that mapping such a region
> cacheable is a strange thing to do, in fact, and it is unlikely that
> patches implementing this against the generic PCI stack in Tianocore
> will be accepted by the maintainers.
> 
> Note that this issue not only affects framebuffers on PCI cards, it
> also affects emulated USB host controllers (perhaps Alex can remind us
> which one exactly?) and likely other emulated generic PCI devices as
> well.
> 
> Since the issue exists only for emulated PCI devices whose MMIO
> regions are backed by host memory, is there any way we can already
> distinguish such memslots from ordinary ones? If we can, is there
> anything we could do to treat these specially? Perhaps something like
> using read-only memslots so we can at least trap guest writes instead
> of having main memory going out of sync with the caches unnoticed? I
> am just brainstorming here ...
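To make the read-only memslot idea concrete, here is a minimal sketch of how userspace would register such a slot with the real KVM API (KVM_SET_USER_MEMORY_REGION with the KVM_MEM_READONLY flag); the slot number and guest-physical window below are hypothetical, and the ioctl itself is shown only in a comment since it needs a live VM fd:

```c
#include <string.h>
#include <linux/kvm.h>

/* Sketch: describe a host-memory-backed MMIO window as a read-only
 * memslot. The slot id, GPA and size are made-up example values; the
 * backing pointer would normally come from mmap(). */
static struct kvm_userspace_memory_region
mmio_backing_slot(void *host_addr, __u64 gpa, __u64 size)
{
    struct kvm_userspace_memory_region region;

    memset(&region, 0, sizeof(region));
    region.slot = 1;                     /* hypothetical slot id */
    region.flags = KVM_MEM_READONLY;     /* guest writes exit to userspace */
    region.guest_phys_addr = gpa;
    region.memory_size = size;
    region.userspace_addr = (__u64)(unsigned long)host_addr;

    /* The caller would then issue
     *   ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
     * after which guest stores to [gpa, gpa + size) cause exits
     * instead of silently landing in host memory that may be stale
     * with respect to the host's cacheable view. */
    return region;
}
```

Guest reads would still be satisfied from the backing memory directly, so this only helps with the write half of the coherency problem, which is presumably why it is framed above as brainstorming rather than a full fix.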

I think the only sensible solution is to make sure that the guest and
emulation mappings use the same memory type, either cached or
non-cached, and we 'simply' have to find the best way to implement this.

As Drew suggested, forcing some S2 mappings to be non-cacheable is one
way.

The other way is something like the patch you once wrote that rewrites
stage-1 mappings to be cacheable. Does that apply here?

Do we have a clear picture of why we'd prefer one way over the other?
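Both options lean on the same ARMv8 rule: when stage-1 and stage-2 memory attributes are combined, the stronger (less cacheable) one wins. A toy model of that rule, with an enum ordering of our own invention rather than the architectural encoding:

```c
/* Toy model of ARMv8 stage-1/stage-2 attribute combining: the
 * resulting memory type is the stronger (less cacheable) of the two.
 * The numeric ordering here is ours, not an architectural encoding. */
enum memtype {
    MT_DEVICE    = 0,   /* Device memory: strongest */
    MT_NORMAL_NC = 1,   /* Normal Non-cacheable */
    MT_NORMAL_WT = 2,   /* Normal Write-Through */
    MT_NORMAL_WB = 3,   /* Normal Write-Back: weakest */
};

static enum memtype combine_s1_s2(enum memtype s1, enum memtype s2)
{
    return s1 < s2 ? s1 : s2;   /* stronger attribute wins */
}
```

So a guest that maps an emulated BAR as Device gets Device semantics even when KVM's stage-2 entry is Normal Write-Back, which is exactly how the guest's uncached writes end up invisible to QEMU reading through a cacheable mapping; conversely, forcing the stage-2 entry to Non-cacheable caps the combined type regardless of what the guest does, while the stage-1-rewrite approach attacks the guest side of the same equation.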

> 
> In any case, it would be good to put this to bed one way or the other
> (assuming it hasn't been put to bed already)
> 

Agreed.

Thanks for the mail!

-Christoffer
_______________________________________________
kvmarm mailing list
kvmarm@xxxxxxxxxxxxxxxxxxxxx
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


