On Sun, Apr 22, 2012 at 2:16 AM, Avi Kivity <avi@xxxxxxxxxx> wrote:
> On 04/21/2012 05:15 AM, Mike Waychison wrote:
[...]
> There is no mmu_list_lock. Do you mean kvm_lock or kvm->mmu_lock?
>
> If the former, then we could easily fix this by dropping kvm_lock while
> the work is being done. If the latter, then it's more difficult.
>
> (kvm_lock being contended implies that mmu_shrink is called concurrently?)

On a 32-core system under memory pressure, mmu_shrink was often being
called concurrently (before we turned it off).

With just one, or a small number of VMs on a host, when mmu_shrink
contends on kvm_lock, that is really just a proxy for contention on
kvm->mmu_lock. kvm_lock is the one that gets reported, though, since it
is acquired first.

The contention on mmu_lock would indeed be difficult to remove. Our case
was perhaps unusual because of the use of memory containers: some cgroups
were under memory pressure (and thus calling the shrinker), while the
VCPU threads whose guest page tables were being evicted by the shrinker
could immediately turn around and successfully re-allocate them. That
made kvm->mmu_lock really hot.
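
For anyone following along without the source handy, here is a simplified
sketch of the mmu_shrink() locking structure as I understand it (not the
literal arch/x86/kvm/mmu.c code; the zap helper is only indicated in a
comment). The point is the lock ordering: the global kvm_lock is taken
first to walk the VM list, and only then is the per-VM kvm->mmu_lock taken,
so when mmu_lock is hot, concurrent shrinkers pile up on kvm_lock and that
is what shows up in profiles.

static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
{
	struct kvm *kvm;

	/* Acquired first; serializes concurrent shrinker invocations. */
	raw_spin_lock(&kvm_lock);

	list_for_each_entry(kvm, &vm_list, vm_list) {
		/* Contended with the VCPU fault paths that are busy
		 * re-allocating the shadow pages we are about to zap. */
		spin_lock(&kvm->mmu_lock);

		/* ... zap some shadow pages for this VM here ... */

		spin_unlock(&kvm->mmu_lock);
	}

	raw_spin_unlock(&kvm_lock);

	return percpu_counter_read_positive(&kvm_total_used_mmu_pages);
}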