Re: [RFC/RFT PATCH 0/3] arm64: KVM: work around incoherency with uncached guest mappings

On Wed, Mar 04, 2015 at 01:43:02PM +0100, Ard Biesheuvel wrote:
> On 4 March 2015 at 13:29, Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> > On Wed, Mar 04, 2015 at 12:50:57PM +0100, Ard Biesheuvel wrote:
> >> On 4 March 2015 at 12:35, Catalin Marinas <catalin.marinas@xxxxxxx> wrote:
> >> > On Mon, Mar 02, 2015 at 06:20:19PM -0800, Mario Smarduch wrote:
> >> >> On 03/02/2015 08:31 AM, Christoffer Dall wrote:
> >> >> > However, my concern with these patches are on two points:
> >> >> >
> >> >> > 1. It's not a fix-all.  We still have the case where the guest expects
> >> >> > the behavior of device memory (for strong ordering for example) on a RAM
> >> >> > region, which we now break.  Similarly, this doesn't support the
> >> >> > non-coherent DMA to RAM region case.
> >> >> >
> >> >> > 2. While the code is probably as nice as this kind of stuff gets, it
> >> >> > is non-trivial and extremely difficult to debug.  The counter-point here
> >> >> > is that we may end up handling other stuff at EL2 for performanc reasons
> >> >> > in the future.
> >> >> >
> >> >> > Mainly because of point 1 above, I am leaning towards thinking userspace
> >> >> > should do the invalidation when it knows it needs to, either through KVM
> >> >> > via a memslot flag or through some other syscall mechanism.
> >> >
> >> > I expressed my concerns as well, I'm definitely against merging this
> >> > series.
> >>
> >> Don't worry, that was never the intention, at least not as-is :-)
> >
> > I wasn't worried, just wanted to make my position clearer ;).
> >
> >> I think we have established that the performance hit is not the
> >> problem but the correctness is.
> >
> > I haven't looked at the performance figures but has anyone assessed the
> > hit caused by doing cache maintenance in Qemu vs cacheable guest
> > accesses (and no maintenance)?
> >

I'm working on a PoC of a QEMU/KVM cache maintenance approach now.
Hopefully I'll send it out this evening, tomorrow at the latest.
Getting numbers for that approach vs. a guest's use of cached memory
for devices would take a decent amount of additional work, so they
won't be part of that post. I'm actually not sure we should care
about the numbers for a guest using normal memory attributes for
device memory - other than out of curiosity. For correctness this
issue really needs to be solved 100% host-side. We can't rely on a
guest doing different/weird things just because it's a guest.
Ideally guests don't even know that they're guests. (Even if we
describe the memory as cacheable to the guest, I don't think we
can rely on the guest believing us.)
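
(For anyone curious what the userspace side might look like, here is a
rough, hypothetical sketch of the kind of maintenance QEMU could issue
over a guest-memory range before the host reads data the guest may have
written with its caches off. It assumes SCTLR_EL1.UCI and UCT are set,
as Linux does, so DC CIVAC and CTR_EL0 are usable from EL0; the function
names below are made up for illustration and are not from the PoC.)

/*
 * Illustrative only: clean+invalidate a range of guest memory to the
 * point of coherency from EL0.  Assumes Linux has enabled EL0 access
 * to DC CIVAC (SCTLR_EL1.UCI) and CTR_EL0 (SCTLR_EL1.UCT).
 */
#include <stdint.h>
#include <stddef.h>

static inline size_t dcache_line_size(void)
{
	uint64_t ctr;

	asm volatile("mrs %0, ctr_el0" : "=r" (ctr));
	/* CTR_EL0.DminLine, bits [19:16]: log2 of line size in words */
	return 4UL << ((ctr >> 16) & 0xf);
}

static void clean_inval_to_poc(void *addr, size_t len)
{
	size_t line = dcache_line_size();
	uintptr_t p = (uintptr_t)addr & ~(line - 1);
	uintptr_t end = (uintptr_t)addr + len;

	/* Clean+invalidate each line covering [addr, addr + len) */
	for (; p < end; p += line)
		asm volatile("dc civac, %0" : : "r" (p) : "memory");
	asm volatile("dsb sy" : : : "memory");
}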

drew