On 02/10/2011 04:34 PM, Jan Kiszka wrote:
> On 2011-02-10 15:26, Avi Kivity wrote:
>> On 02/10/2011 03:47 PM, Jan Kiszka wrote:
>>>>>
>>>>> Except for mmu_shrink, which writes but does not delete, and thus
>>>>> works without that slow synchronize_rcu.
>>>>
>>>> I don't really see how you can implement list_move_rcu(); it has to
>>>> be atomic, or other users will see a partial vm_list.
>>>
>>> Right, even if we synchronized that step cleanly, rcu-protected users
>>> could miss the moving vm during concurrent list walks.
>>>
>>> What about using a separate mutex for protecting vm_list instead?
>>> Unless I missed some detail, mmu_shrink should allow blocking.
>>
>> What else does kvm_lock protect?
>
> Someone tried to write a locking.txt and stated that it's also
> protecting enabling/disabling hardware virtualization. But that guy may
> have overlooked something.
Right. I guess splitting that lock makes sense.
>> I think we could simply reduce the amount of time we hold kvm_lock.
>> Pick a vm, ref it, list_move_tail(), unlock, then do the actual
>> shrinking. Of course taking a ref must be done carefully, we might
>> already be in kvm_destroy_vm() at that time.
>
> Plain mutex held across the whole mmu_shrink loop is still simpler and
> should be sufficient - unless we also have to deal with scalability
> issues if that handler is able to run concurrently. But based on how we
> were using kvm_lock so far...
I don't think a mutex would work for kvmclock_cpufreq_notifier(). At the very least, we'd need a preempt_disable() there. At the worst, the notifier won't like sleeping.
--
error compiling committee.c: too many arguments to function