Re: [PATCH 2/2] KVM: Prevent internal slots from being COWed

On Tue, Jun 22, 2010 at 02:17:44PM +0300, Avi Kivity wrote:
> On 06/21/2010 11:23 PM, Marcelo Tosatti wrote:
> > On Mon, Jun 21, 2010 at 11:18:13AM +0300, Avi Kivity wrote:
> >    
> >> If a process with a memory slot is COWed, the page will change its address
> >> (despite having an elevated reference count).  This breaks internal memory
> >> slots which have their physical addresses loaded into vmcs registers (see
> >> the APIC access memory slot).
> >>
> >> Signed-off-by: Avi Kivity <avi@xxxxxxxxxx>
> >> ---
> >>   arch/x86/kvm/x86.c |    5 +++++
> >>   1 files changed, 5 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> >> index 33156a3..d9a33e6 100644
> >> --- a/arch/x86/kvm/x86.c
> >> +++ b/arch/x86/kvm/x86.c
> >> @@ -5633,6 +5633,11 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
> >>   				int user_alloc)
> >>   {
> >>   	int npages = memslot->npages;
> >> +	int map_flags = MAP_PRIVATE | MAP_ANONYMOUS;
> >> +
> >> +	/* Prevent internal slot pages from being moved by fork()/COW. */
> >> +	if (memslot->id >= KVM_MEMORY_SLOTS)
> >> +		map_flags = MAP_SHARED | MAP_ANONYMOUS;
> >>
> >>   	/*To keep backward compatibility with older userspace,
> >>   	 *x86 needs to hanlde !user_alloc case.
> >>      
> > Forgot to use map_flags below.
> >
> >    
> 
> Ouch, corrected and applied.
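(For reference: the correction presumably routes map_flags into the mmap
call that backs the slot. A kernel-style sketch of how the hunk below the
patch context likely reads after the fix, not compilable standalone:

```c
	/* Sketch: use the computed map_flags instead of a hardcoded
	 * MAP_PRIVATE | MAP_ANONYMOUS when creating the userspace
	 * mapping for a !user_alloc (internal) slot. */
	if (!user_alloc) {
		down_write(&current->mm->mmap_sem);
		userspace_addr = do_mmap(NULL, 0, npages * PAGE_SIZE,
					 PROT_READ | PROT_WRITE,
					 map_flags,	/* was MAP_PRIVATE | MAP_ANONYMOUS */
					 0);
		up_write(&current->mm->mmap_sem);
	}
```
)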

I think I tracked down the corruption during swapping with THP enabled
to this bug. The real bug is that the mmu notifier fires (fork is
certainly covered by the mmu notifier), but KVM ignores it and keeps
writing to the old location. Shared pages can also be swapped out, and
if the dirty bit on the spte isn't set faster than the time it takes
to write the page, the page can be relocated. Basically, if
do_swap_page decides to make a copy of the page (as in the ksm-swapin
case; this is currently triggered erratically even for non-ksm pages
by a bug in the new anon-vma code in upstream, which I already fixed
in aa.git), and the dirty bit on the spte is ignored because of lumpy
reclaim (which I also removed now, and that makes the bug stop
triggering too), then eventually the page is unmapped and relocated to
a different page during swapin.

The bug really is in KVM, which ignores mmu_notifier_invalidate_page
and keeps using the old page.

It should have rung a bell that fork was breaking anything... fork
must not break anything, since KVM is mmu-notifier capable.
MADV_DONTFORK must now be only a performance optimization. And the
above change should be unnecessary (and I doubt it really fixes the
swapping case, as tmpfs can also be swapped out, at least unless the
page is pinned).

The way I'd like to fix it is to allocate those magic pages by hand,
not add them to the lru, and leave page->mapping NULL. Then they will
remain pinned in the pte, and all problems will go away.
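A kernel-style sketch of what that could look like (hypothetical, not
compilable standalone; the point is only what is *not* done to the page):

```c
	/* Sketch: allocate an internal-slot page directly so reclaim
	 * never sees it.  No lru_cache_add(), no add_to_page_cache():
	 * the page is not on the lru and page->mapping stays NULL, so
	 * rmap/reclaim can never unmap or relocate it.  The reference
	 * from alloc_page() pins it for the lifetime of the slot. */
	struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
	if (!page)
		return -ENOMEM;
	/* ... install the page's pfn into the internal slot ... */
```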

The other way would be to have a lookup hashtable: when the mmu
notifier invalidate fires, we look up the hash and call a method to
have KVM stop using the page. Then something is needed during the
page fault: if the gfn in the hash is paged in, another method is
called to set the magic host user address to point to the new pfn.
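Roughly, the two hooks would look like this (all names hypothetical, a
kernel-style sketch of the idea only):

```c
	/* Sketch: invalidate side -- the mmu notifier callback looks up
	 * the internal slot by host address and tells KVM to stop using
	 * the page (e.g. clear the physical address from the vmcs). */
	void kvm_internal_invalidate(struct kvm *kvm, unsigned long hva)
	{
		struct internal_slot *s = hash_lookup(&kvm->internal_hash, hva);
		if (s)
			s->ops->stop_using_page(s);
	}

	/* Sketch: fault side -- when the gfn is paged back in, repoint
	 * the magic host user address at the new pfn. */
	void kvm_internal_refault(struct kvm *kvm, gfn_t gfn, pfn_t new_pfn)
	{
		struct internal_slot *s = hash_lookup_gfn(&kvm->internal_hash, gfn);
		if (s)
			s->ops->set_page(s, new_pfn);
	}
```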

I think pinning the pages and allocating them by hand is simpler;
hopefully we can do it in a way that munmap will collect them
automatically, like now.
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

