Re: [RFC PATCH 0/2] Expose available KVM free memory slot count to help avoid aborts

On Mon, 2011-01-24 at 15:16 +0100, Jan Kiszka wrote:
> On 2011-01-24 10:32, Marcelo Tosatti wrote:
> > On Fri, Jan 21, 2011 at 04:48:02PM -0700, Alex Williamson wrote:
> >> When doing device assignment, we use cpu_register_physical_memory() to
> >> directly map the qemu mmap of the device resource into the address
> >> space of the guest.  The unadvertised feature of the register physical
> >> memory code path on kvm, at least for this type of mapping, is that it
> >> needs to allocate an index from a small, fixed array of memory slots.
> >> Even better, if it can't get an index, the code aborts deep in the
> >> kvm specific bits, preventing the caller from having a chance to
> >> recover.
> >>
> >> It's really easy to hit this by hot adding too many assigned devices
> >> to a guest (pretty easy to hit with too many devices at instantiation
> >> time too, but the abort is slightly more bearable there).
> >>
> >> I'm assuming it's pretty difficult to make the memory slot array
> >> dynamically sized.  If that's not the case, please let me know as
> >> that would be a much better solution.
> > 
> > It's not difficult to either increase the maximum number (defined as
> > 32 now in both qemu and kernel) of static slots, or support dynamic
> > increases, if it turns out to be a performance issue.
> 
> Static limits are waiting to be hit again, just a bit later.

Yep, and I think static limits large enough that we can't hit them would
be performance prohibitive.
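
For reference on the cost: the kernel-side slot lookup is a linear walk
over the whole array on every gfn translation, so a much bigger static
array makes every lookup proportionally slower.  A self-contained
illustration of the shape of that lookup (simplified sketch, not the
actual gfn_to_memslot() code):

/* Illustration only: with N slots, every gfn->slot translation is an
 * O(N) scan, which is why "just make the array huge" doesn't fly. */
typedef unsigned long long gfn_t;

struct memslot {
	gfn_t base_gfn;
	unsigned long npages;	/* 0 == slot unused */
};

static struct memslot *find_slot(struct memslot *slots, int nslots,
				 gfn_t gfn)
{
	int i;

	for (i = 0; i < nslots; i++) {
		if (slots[i].npages &&
		    gfn >= slots[i].base_gfn &&
		    gfn < slots[i].base_gfn + slots[i].npages)
			return &slots[i];
	}
	return NULL;
}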

> I would start thinking about a tree search as well because iterating
> over all slots won't get faster over time.
> 
> > 
> > But you'd probably want to fix the abort for currently supported kernels
> > anyway.
> 
> Jep. Depending on how soon we have a smarter solution in the kernel,
> this fix may vary in pragmatism.
> 
> >
> >> I'm not terribly happy with the solution in this series; it doesn't
> >> provide any guarantee that a cpu_register_physical_memory() will
> >> succeed, only slightly better educated guesses.
> >>
> >> Are there better ideas how we could solve this?  Thanks,
> > 
> > Why can't cpu_register_physical_memory() return an error so you can
> > fallback to slow mode or cancel device insertion?

It appears that it'd be pretty intrusive to fix this, since
cpu_register_physical_memory() returns void and the kvm hook into this
path is the set_memory callback of the phys memory client.
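
For the record, the shape of that hook as I read it; I'm paraphrasing
the current qemu code from memory, so take the exact types with a grain
of salt.  Each registered phys memory client gets notified through a
void callback:

struct CPUPhysMemoryClient {
	/* No return value, so a client has no way to report failure
	 * back to cpu_register_physical_memory() or its callers. */
	void (*set_memory)(struct CPUPhysMemoryClient *client,
			   target_phys_addr_t start_addr,
			   ram_addr_t size,
			   ram_addr_t phys_offset);
	/* ... other callbacks and the client list linkage ... */
};

The kvm client's handler funnels into kvm_set_phys_mem(), which calls
kvm_alloc_slot() and abort()s there when all slots are taken, so
propagating an error would mean changing the client interface and every
caller of cpu_register_physical_memory().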

> Doesn't that registration happen much later than the call to
> pci_register_bar? In any case, this will require significantly more
> invasive work (but it would be much nicer if possible, no question).

Right, we register BARs in the initfn, but we don't map them until the
guest writes the BARs, mapping them into MMIO space.  I don't think we
want to fall back to slow mapping at that point, so we either need to
fail in the initfn (like this series does; a rough sketch of such a
free-slot check is appended below) or be able to dynamically allocate
more slots so the kvm callback can't fail.  I'll look at how we might be
able to allocate slots on demand.  Thanks,

Alex
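
For anyone curious, the flavor of check this series boils down to on the
qemu side is roughly the following (hypothetical sketch, names made up,
not the actual patch): a KVMSlot with memory_size == 0 is what
kvm_alloc_slot() considers free, so counting those gives the number of
registrations that can still succeed.

/* Hypothetical helper, not the actual patch: count qemu-side KVM
 * memory slots that are still unused, assuming the current KVMState
 * layout where memory_size == 0 marks a free slot. */
static int kvm_free_slots(KVMState *s)
{
	int i, count = 0;

	for (i = 0; i < ARRAY_SIZE(s->slots); i++) {
		if (s->slots[i].memory_size == 0) {
			count++;
		}
	}
	return count;
}

The assigned-device initfn can then compare that count against the
number of mappable BARs and fail device creation cleanly instead of
hitting the abort later, with the caveat from the cover letter that this
is only an educated guess, since other registrations can still land
before the guest actually maps the BARs.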


