Re: [RFC PATCH 0/2] Expose available KVM free memory slot count to help avoid aborts

On Mon, 2011-01-31 at 17:18 -0200, Marcelo Tosatti wrote:
> On Mon, Jan 24, 2011 at 10:37:56PM -0700, Alex Williamson wrote:
> > On Mon, 2011-01-24 at 08:44 -0700, Alex Williamson wrote:
> > > I'll look at how we might be
> > > able to allocate slots on demand.  Thanks,
> > 
> > Here's a first cut just to see if this looks agreeable.  This allows the
> > slot array to grow on demand.  This works with current userspace, as
> > well as with userspace trivially modified to double KVMState.slots,
> > hotplugging enough pci-assign devices to exceed the previous limit
> > (w/ & w/o EPT).  Hopefully I got the RCU bits correct.  Does this look like
> > the right path?  If so, I'll work on removing the fixed limit from
> > userspace next.  Thanks,
> > 
> > Alex
> > 
> > 
> > kvm: Allow memory slot array to grow on demand
> > 
> > Remove fixed KVM_MEMORY_SLOTS limit, allowing the slot array
> > to grow on demand.  Private slots are now allocated at the
> > front instead of the end.  Only x86 seems to use private slots,
> > so the private slot count is now zero for all other archs.  The
> > memslots pointer is already updated using RCU, so changing the size
> > of the array when it's replaced is straightforward.  x86 also keeps
> > a bitmap of slots used by each kvm_mmu_page, which requires a
> > shadow TLB flush whenever we increase the number of slots; this
> > forces the pages to be rebuilt with the new bitmap size.
> > 
> > Signed-off-by: Alex Williamson <alex.williamson@xxxxxxxxxx>
> > ---
> > 
> >  arch/ia64/include/asm/kvm_host.h    |    4 --
> >  arch/ia64/kvm/kvm-ia64.c            |    2 +
> >  arch/powerpc/include/asm/kvm_host.h |    3 --
> >  arch/s390/include/asm/kvm_host.h    |    3 --
> >  arch/x86/include/asm/kvm_host.h     |    3 +-
> >  arch/x86/include/asm/vmx.h          |    6 ++-
> >  arch/x86/kvm/mmu.c                  |    7 +++-
> >  arch/x86/kvm/x86.c                  |    6 ++-
> >  include/linux/kvm_host.h            |    7 +++-
> >  virt/kvm/kvm_main.c                 |   65 ++++++++++++++++++++++++-----------
> >  10 files changed, 63 insertions(+), 43 deletions(-)
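
For reference, the crux of the change is that the slot array becomes a
flexible array member of struct kvm_memslots, so each allocation is
sized by nmemslots; roughly (a sketch, not the exact definition):

	struct kvm_memslots {
		int nmemslots;
		u64 generation;
		/* sized at allocation time, see the kzalloc calls below */
		struct kvm_memory_slot memslots[];
	};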
[snip]
> > diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> > index fd67bcd..32f023c 100644
> > --- a/virt/kvm/kvm_main.c
> > +++ b/virt/kvm/kvm_main.c
[snip]
> > @@ -752,12 +753,19 @@ skip_lpage:
> >  
> >  	if (!npages) {
> >  		r = -ENOMEM;
> > -		slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> > +
> > +		nmemslots = (mem->slot >= kvm->memslots->nmemslots) ?
> > +			    mem->slot + 1 : kvm->memslots->nmemslots;
> > +
> > +		slots = kzalloc(sizeof(struct kvm_memslots) +
> > +				nmemslots * sizeof(struct kvm_memory_slot),
> > +				GFP_KERNEL);
> >  		if (!slots)
> >  			goto out_free;
> > -		memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> > -		if (mem->slot >= slots->nmemslots)
> > -			slots->nmemslots = mem->slot + 1;
> > +		memcpy(slots, kvm->memslots,
> > +		       sizeof(struct kvm_memslots) + kvm->memslots->nmemslots *
> > +		       sizeof(struct kvm_memory_slot));
> > +		slots->nmemslots = nmemslots;
> 
> Other than the upper limit, should disallow increasing the number of
> slots in case the new slot is being deleted (npages == 0). So none of
> these additions to the !npages case should be necessary.

The existing code allows adding a slot with !npages.  I'll make that
behavioral change in a small separate patch.
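
Probably something along these lines (untested sketch; exact error
handling to be decided):

	/* deleting (npages == 0) must target an existing slot */
	if (!npages && mem->slot >= kvm->memslots->nmemslots) {
		r = -EINVAL;
		goto out;
	}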

> Also, must disallow shrinking the number of slots.

Right, we only grow, never shrink.
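
The nmemslots computation in the patch reduces to exactly that, i.e.
effectively:

	/* grow-only: never smaller than the current array */
	nmemslots = max(kvm->memslots->nmemslots, (int)mem->slot + 1);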

> >  		slots->generation++;
> >  		slots->memslots[mem->slot].flags |= KVM_MEMSLOT_INVALID;
> >  
> > @@ -787,12 +795,21 @@ skip_lpage:
> >  	}
> >  
> >  	r = -ENOMEM;
> > -	slots = kzalloc(sizeof(struct kvm_memslots), GFP_KERNEL);
> > +
> > +	if (mem->slot >= kvm->memslots->nmemslots) {
> > +		nmemslots = mem->slot + 1;
> > +		flush = true;
> > +	} else
> > +		nmemslots = kvm->memslots->nmemslots;
> > +
> > +	slots = kzalloc(sizeof(struct kvm_memslots) +
> > +			nmemslots * sizeof(struct kvm_memory_slot),
> > +			GFP_KERNEL);
> >  	if (!slots)
> >  		goto out_free;
> > -	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots));
> > -	if (mem->slot >= slots->nmemslots)
> > -		slots->nmemslots = mem->slot + 1;
> > +	memcpy(slots, kvm->memslots, sizeof(struct kvm_memslots) +
> > +	       kvm->memslots->nmemslots * sizeof(struct kvm_memory_slot));
> > +	slots->nmemslots = nmemslots;
> >  	slots->generation++;
> >  
> >  	/* actual memory is freed via old in kvm_free_physmem_slot below */
> > @@ -808,6 +825,9 @@ skip_lpage:
> >  	rcu_assign_pointer(kvm->memslots, slots);
> 
> It should be: 
> 
>     spin_lock(kvm->mmu_lock)
>     rcu_assign_pointer()
>     kvm_arch_flush_shadow()
>     spin_unlock(kvm->mmu_lock)
> 
> But kvm_arch_flush_shadow() takes mmu_lock currently, so that needs
> fixing. 

Hmm, I tried to follow the example in the !npages path just above,
which does:

rcu_assign_pointer()
synchronize_srcu_expedited()
kvm_arch_flush_shadow()
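
whereas, if I read your suggestion right, the flush path here would
become roughly (assuming kvm_arch_flush_shadow() is first changed to
not take mmu_lock itself, as you note):

	spin_lock(&kvm->mmu_lock);
	rcu_assign_pointer(kvm->memslots, slots);
	kvm_arch_flush_shadow(kvm);
	spin_unlock(&kvm->mmu_lock);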

Do we have an existing issue there with mmu_lock?  Thanks,

Alex
