On 14/11/2014 15:10, Igor Mammedov wrote:
> On Thu, 06 Nov 2014 17:23:58 +0100 Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>> It would use more memory, and some loops are now becoming more
>> expensive.  In general adding a memory slot to a VM is not cheap, and
>> I question the wisdom of having 256 hotplug memory slots.  But the
>> slowdown mostly would only happen if you actually _use_ those memory
>> slots, so it is not a blocker for this patch.
>
> It might be useful to have a large number of slots for big guests,
> and although Linux works with a minimum section size of 128MB, Windows
> memory hotplug works just fine even with page-sized slots, so once
> unplug is implemented in QEMU it would be possible to drop the
> ballooning driver, at least there.

I think for a big (64G?) guest it doesn't make much sense anyway to
balloon at a granularity finer than 1G, if not even coarser.  So I like
the idea of dropping ballooning in favor of memory hotplug for big
guests.

> And given that memslots can be allocated at runtime, when the guest
> programs devices or maps ROMs (i.e. there is no fail path), I don't
> see a way to fix it in QEMU (i.e. to avoid aborting when the limit is
> reached).  Hence this attempt to bump the memslot limit to 512: the
> current 125 are reserved for the initial memory mappings and
> passthrough devices, 256 go to hotplug memory slots, and that leaves
> 128 free slots for future expansion.
>
> To see what would be affected by a large number of slots I played
> with perf a bit, and the biggest hotspot offender with a large number
> of memslots was:
>
>   gfn_to_memslot() -> ... -> search_memslots()
>
> I'll try to make it faster for this case, so that 512 memslots
> wouldn't affect guest performance.
>
> So please consider applying this patch.

Yes, sorry for the delay; I am definitely going to apply it.

Paolo
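
For context on the hotspot mentioned above: gfn_to_memslot() resolves a
guest frame number (gfn) by walking the memslot array until it finds the
slot covering that gfn, so the worst-case cost of search_memslots() grows
linearly with the number of slots.  The standalone program below is only a
simplified sketch of that shape of lookup; the struct layout, names and
slot sizes are illustrative stand-ins, not the actual KVM code (which, as
far as I recall, also keeps the slots sorted by size so the largest ones
are checked first, helping the common case but not the worst case).

/* Illustration of why a linear gfn-to-memslot lookup gets slower as the
 * number of memslots grows.  Simplified stand-in types, not KVM code. */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t gfn_t;

struct memslot {
    gfn_t base_gfn;      /* first guest frame number covered by the slot */
    uint64_t npages;     /* number of pages in the slot */
};

/* Linear scan: with N slots the worst case touches all N entries, so
 * going from 128 to 512 slots roughly quadruples the worst case. */
static struct memslot *search_memslots(struct memslot *slots, int nslots,
                                       gfn_t gfn)
{
    for (int i = 0; i < nslots; i++) {
        if (gfn >= slots[i].base_gfn &&
            gfn < slots[i].base_gfn + slots[i].npages)
            return &slots[i];
    }
    return NULL;
}

int main(void)
{
    enum { NSLOTS = 512 };
    struct memslot slots[NSLOTS];

    /* 512 disjoint 128MB slots (32768 4K pages each), back to back. */
    for (int i = 0; i < NSLOTS; i++) {
        slots[i].base_gfn = (gfn_t)i * 32768;
        slots[i].npages = 32768;
    }

    gfn_t gfn = 511 * 32768ULL + 42;   /* falls in the last slot */
    struct memslot *s = search_memslots(slots, NSLOTS, gfn);
    if (s)
        printf("gfn %llu -> slot starting at gfn %llu\n",
               (unsigned long long)gfn, (unsigned long long)s->base_gfn);
    return 0;
}

Making this lookup cheap again at 512 slots would mean something like a
binary search over base_gfn or a cache of the most recently hit slot,
which is the kind of follow-up work Igor refers to above.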