On Tue, 2009-07-07 at 15:37 -0300, Marcelo Tosatti wrote:
> >>>
> >>> Is there any way around this other than completely shutting down lockdep?
> >>>
> >>
> >> When we created this the promise was that kvm would only do this on a
> >> fresh mm with only a few vmas, has that changed?
> >
> > The number of vmas did increase, but not materially. We do link with
> > more shared libraries though.
>
> Yeah, see attached /proc/pid/maps just before the ioctl that ends up in
> mmu_notifier_register.
>
> mm_take_all_locks: file_vma=79 anon_vma=40

Another issue: at about >=256 vmas we'll overflow the preempt count. So
disabling lockdep will only 'fix' this for a short while, until you've
bloated beyond that ;-)

Although you could possibly disable preemption and use __raw_spin_lock(),
which would also side-step the whole lockdep issue, but it feels like
such a horrid hack.

Alternatively we would have to modify the rmap locking, but that would
incur overhead on the regular code paths, so that's probably not worth
the trade-off.

Linus, Ingo, any opinions?