On Tue, Apr 5, 2011 at 2:48 PM, Peter Zijlstra <a.p.zijlstra@xxxxxxxxx> wrote:
> On Tue, 2011-04-05 at 11:56 +0300, Avi Kivity wrote:
>>
>> Could be waking up due to guest wakeups, or qemu internal wakeups
>> (display refresh) or due to guest timer sources which are masked away in
>> the guest (if that's the case we should optimize it away).
>
> Right, so I guess we're all clutching at straws here :-)
>
> Ben, how usable is that system when it's in that state? Could you run a
> function trace or a trace with all kvm and sched trace-events enabled?

I'm just rebuilding the storage on the network to work around an ocfs2
kernel oops (trying nfs/rdma), so I can't test anything just at the moment.

I ran some tests under load with the local ext4 SSD and, weirdly,
everything looked to be just fine - the huge bulk of the system time was
in svm_vcpu_run, which is as it should be, I guess - but that was with
only 60 loaded guests.

I'll be able to repeat the same workload test tomorrow, and I'll see how
the perf top output looks. I should also be able to repeat the '96 idle
guests' test and see if it's the same - if so, we'll do that tracing.

My kernel's a moving target at the moment, sorry - we're tracking the
natty git (with Eric's rcu patch merged in).

Thanks, all,

ben
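
For reference, a rough sketch of what capturing that trace could look
like, assuming trace-cmd is available on the host (the 30-second window
is arbitrary, and driving ftrace directly through
/sys/kernel/debug/tracing works just as well):

  # function tracer plus all kvm and sched trace-events, for ~30 seconds
  trace-cmd record -p function -e kvm -e sched sleep 30
  trace-cmd report > trace.txt

  # roughly the same thing via the raw ftrace interface
  cd /sys/kernel/debug/tracing
  echo function > current_tracer
  echo 1 > events/kvm/enable
  echo 1 > events/sched/enable
  echo 1 > tracing_on; sleep 30; echo 0 > tracing_on
  cat trace > trace.txt

trace-cmd record writes a trace.dat in the current directory, and the
report step turns it into plain text that is easy to grep or attach.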