Hi Ben,

On Fri, Mar 18, 2011 at 07:02:40PM +0700, Ben Nagy wrote:
> Here's some output from perf top while the system is locky:
>
>  263832.00 46.3% delay_tsc                 [kernel.kallsyms]
>  231491.00 40.7% __ticket_spin_trylock     [kernel.kallsyms]
>   14609.00  2.6% native_read_tsc           [kernel.kallsyms]
>    9414.00  1.7% do_raw_spin_lock          [kernel.kallsyms]
>    8041.00  1.4% local_clock               [kernel.kallsyms]
>    6081.00  1.1% native_safe_halt          [kernel.kallsyms]
>    3901.00  0.7% __lock_acquire.clone.18   [kernel.kallsyms]
>    3665.00  0.6% do_raw_spin_unlock        [kernel.kallsyms]
>    3042.00  0.5% __delay                   [kernel.kallsyms]
>    2484.00  0.4% lock_contended            [kernel.kallsyms]
>    2484.00  0.4% sched_clock_cpu           [kernel.kallsyms]
>    1906.00  0.3% sched_clock_local         [kernel.kallsyms]
>    1419.00  0.2% lock_acquire              [kernel.kallsyms]
>    1332.00  0.2% lock_release              [kernel.kallsyms]
>     987.00  0.2% tg_load_down              [kernel.kallsyms]
>     895.00  0.2% _raw_spin_lock_irqsave    [kernel.kallsyms]
>     686.00  0.1% find_busiest_group        [kernel.kallsyms]

Can you try to run

	# perf record -a -g

for a while when your VMs are up and unresponsive? This will monitor the whole system and collect stack traces. When you have done so, please run

	# perf report > locks.txt

and upload the locks.txt file somewhere. The result might give us some clue where the high lock contention comes from.

Regards,

	Joerg

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
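(For anyone following along, the two commands above can be combined into one capture session. This is only a sketch of the workflow Joerg describes, not his exact invocation: the output file name, the 60-second duration, and the `--stdio` flag are my additions, and it assumes a perf build recent enough to support them.)

```shell
# Run system-wide (-a) with call graphs (-g) while the VMs are unresponsive.
# "-- sleep 60" bounds the capture at 60 seconds; adjust as needed.
perf record -a -g -o perf.data -- sleep 60

# Flatten the profile to plain text so it can be attached or uploaded.
perf report -i perf.data --stdio > locks.txt
```

Running `perf record` system-wide needs root (or a relaxed `perf_event_paranoid` setting), which matches the `#` root prompt in the commands above.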