On Mon, Mar 21, 2011 at 10:57 PM, Eric Dumazet <eric.dumazet@xxxxxxxxx> wrote:
> On Monday, 21 March 2011 at 19:02 +0200, Avi Kivity wrote:
>
>> Any ideas on how to fix it? We could pre-allocate IDs and batch them in
>> per-cpu caches, but it seems like a lot of work.
>
> Hmm, I don't know what syscalls kvm does, but even a timer_gettime() has
> to lock this idr_lock.
>
> Sounds crazy, and should be fixed in kernel code, not kvm ;)

Hi,

I'll need to work out a way to make the perf.data available (~100M),
but here's the summary: http://paste.ubuntu.com/583425/

And here's the summary of the summary:

# Overhead          Command        Shared Object                                      Symbol
# ........  ...............  ....................  ..........................................
#
    71.86%              kvm  [kernel.kallsyms]     [k] __ticket_spin_lock
                |
                --- __ticket_spin_lock
                   |
                   |--54.19%-- default_spin_lock_flags
                   |          _raw_spin_lock_irqsave
                   |          |
                   |          --54.14%-- __lock_timer
                   |                    |
                   |                    |--31.92%-- sys_timer_gettime
                   |                    |          system_call_fastpath
                   |                    |
                   |                    --22.22%-- sys_timer_settime
                   |                              system_call_fastpath
                   |
                   |--15.66%-- _raw_spin_lock
[...]

Which I guess matches what Eric just said. I'll post a link to the
full data tomorrow.

Many thanks for the help so far. If it's a kernel scaling limit then I
guess we just wait until the kernel gets better. :S I'll check it out
with a Linux guest tomorrow anyway.

Cheers,

ben
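
P.S. For anyone who wants to poke at this without a kvm setup, a
userspace reproducer along these lines should hammer the same path.
This is just a sketch I haven't tested at scale; the thread count and
the CLOCK_MONOTONIC choice are arbitrary. Each thread creates its own
POSIX timer and loops on timer_gettime(), so every call goes through
the kernel's __lock_timer() and takes the one global idr_lock:

/*
 * Sketch only: assumes enough concurrent callers of timer_gettime()
 * will serialize on the global idr_lock and reproduce the profile
 * above.  Build with: gcc -O2 -pthread repro.c -lrt
 */
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 16	/* arbitrary; scale to your core count */

static void *hammer(void *arg)
{
	timer_t tid;
	struct sigevent sev = { .sigev_notify = SIGEV_NONE };
	struct itimerspec its;

	/*
	 * Each thread gets its own timer, but the timer-ID lookup in
	 * __lock_timer() still takes the single idr_lock, so per-thread
	 * timers don't avoid the contention.
	 */
	if (timer_create(CLOCK_MONOTONIC, &sev, &tid)) {
		perror("timer_create");
		return NULL;
	}
	for (;;)
		timer_gettime(tid, &its);	/* syscall under test */
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, hammer, NULL);
	pause();	/* run until interrupted; profile meanwhile */
	return 0;
}

Run it and then something like "perf record -a -g -- sleep 10" followed
by "perf report" should show the same __lock_timer stack as above,
assuming enough cores that the lock actually contends.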