2018-01-25 22:53 GMT+08:00 Paolo Bonzini <pbonzini@xxxxxxxxxx>:
>
>
> ----- Original Message -----
>> From: "Wanpeng Li" <kernellwp@xxxxxxxxx>
>> To: "Gal Hammer" <ghammer@xxxxxxxxxx>
>> Cc: "kvm" <kvm@xxxxxxxxxxxxxxx>, "Radim Krcmar" <rkrcmar@xxxxxxxxxx>, "Paolo Bonzini" <pbonzini@xxxxxxxxxx>
>> Sent: Thursday, January 25, 2018 8:48:14 AM
>> Subject: Re: [PATCH] kvm: use insert sort in kvm_io_bus_register_dev function
>>
>> 2018-01-16 21:34 GMT+08:00 Gal Hammer <ghammer@xxxxxxxxxx>:
>> > The loading time of a VM is quite significant, with CPU usage
>> > reaching 100%, when the VM's virtio devices use a large number
>> > of virt-queues (e.g. a virtio-serial device with max_ports=511).
>> > Most of the time is spent re-sorting the kvm_io_bus kvm_io_range
>> > array each time a new eventfd is registered.
>> >
>> > The patch replaces the existing method with an insertion sort.
>>
>> Should we also stop using bsearch() when searching the io bus later?
>
> There is an important difference between the two cases. In Gal's
> case, sort() is always O(n log n) per registration, so registering
> n devices costs O(n^2 log n) in total, versus O(n^2) with insertion
> sort (O(n) per insert into an already-sorted array). Also, heap sort
> is pretty expensive, so there is a lot of overhead even before you
> take the indirect calls into account.
>
> For bsearch(), switching to an open-coded binary search would keep
> the same O(log n) complexity and only save the indirect calls. It
> may save a few clock cycles, but it's probably even more effective
> to keep a simple one-element cache for the most recently used item,
> and then fall back to bsearch().

Thanks for the elaboration.

Regards,
Wanpeng Li
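
For illustration, a minimal user-space sketch of the insertion approach the
patch takes: keep the array sorted and shift the tail to make room for the
new range, O(n) per registration instead of calling an O(n log n) sort()
after every insert. The struct and function names below (io_range,
insert_range, range_cmp) are simplified stand-ins, not the actual
kvm_io_bus code.

/* Hypothetical, simplified model of kvm_io_range for illustration. */
struct io_range {
        unsigned long addr;
        int len;
};

/* Order by address, then by length, mirroring how the bus is kept sorted. */
static int range_cmp(const struct io_range *a, const struct io_range *b)
{
        if (a->addr != b->addr)
                return a->addr < b->addr ? -1 : 1;
        return a->len - b->len;
}

/*
 * Insert 'new' into the already-sorted array 'ranges' of 'count' elements
 * (the caller must ensure capacity for one more).  Shifting the tail is
 * O(n), versus re-sorting the whole array at O(n log n).
 */
static void insert_range(struct io_range *ranges, int count,
                         const struct io_range *new)
{
        int i = count;

        while (i > 0 && range_cmp(&ranges[i - 1], new) > 0) {
                ranges[i] = ranges[i - 1];
                i--;
        }
        ranges[i] = *new;
}

Over n registrations this is the O(n^2) total Paolo mentions, and it avoids
both the heapsort overhead and the indirect comparison calls of sort().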
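
And a sketch of the one-element most-recently-used cache Paolo suggests in
front of bsearch() on the lookup side: check the cached entry first and only
fall back to the binary search on a miss. Again, the bus layout and names
(io_bus, bus_find) are invented for illustration and assume non-overlapping,
sorted ranges; this is not the real kvm_io_bus lookup code.

#include <stdlib.h>

struct io_range {
        unsigned long addr;
        int len;
};

struct io_bus {
        int count;
        int mru;                        /* index of most recently used range */
        struct io_range ranges[512];
};

/* bsearch() comparator: 0 when the key address falls inside the range. */
static int addr_in_range(const void *key, const void *elt)
{
        const struct io_range *k = key, *r = elt;

        if (k->addr < r->addr)
                return -1;
        if (k->addr >= r->addr + r->len)
                return 1;
        return 0;
}

/* Return the index of the range containing 'addr', or -1 if none. */
static int bus_find(struct io_bus *bus, unsigned long addr)
{
        struct io_range key = { .addr = addr, .len = 1 };
        struct io_range *hit;

        /* Fast path: one-element cache for the most recently used range. */
        if (bus->count && addr_in_range(&key, &bus->ranges[bus->mru]) == 0)
                return bus->mru;

        hit = bsearch(&key, bus->ranges, bus->count,
                      sizeof(bus->ranges[0]), addr_in_range);
        if (!hit)
                return -1;

        bus->mru = hit - bus->ranges;
        return bus->mru;
}

The cache hit costs one comparison and no indirect calls, which helps
workloads that repeatedly touch the same eventfd range, while misses keep
the O(log n) bsearch() behaviour unchanged.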