On Fri, May 20, 2022, Paolo Bonzini wrote:
> On 4/27/22 03:40, Sean Christopherson wrote:
> > +	 * Wait for mn_active_invalidate_count, not mmu_notifier_count,
> > +	 * to go away, as the invalidation in the mmu_notifier event
> > +	 * occurs _before_ mmu_notifier_count is elevated.
> > +	 *
> > +	 * Note, mn_active_invalidate_count can change at any time as
> > +	 * it's not protected by gpc->lock.  But, it is guaranteed to
> > +	 * be elevated before the mmu_notifier acquires gpc->lock, and
> > +	 * isn't dropped until after mmu_notifier_seq is updated.  So,
> > +	 * this task may get a false positive of sorts, i.e. see an
> > +	 * elevated count and wait even though it's technically safe to
> > +	 * proceed (because the mmu_notifier will invalidate the cache
> > +	 * _after_ it's refreshed here), but the cache will never be
> > +	 * refreshed with stale data, i.e. won't get false negatives.
>
> I am all for lavish comments, but I think this is even too detailed.

Yeah, the false positive/negative stuff is probably overkill.

> What about:
>
> 	/*
> 	 * mn_active_invalidate_count acts for all intents and purposes
> 	 * like mmu_notifier_count here; but we cannot use the latter
> 	 * because the invalidation in the mmu_notifier event occurs
> 	 * _before_ mmu_notifier_count is elevated.

Looks good, though I'd prefer to avoid the "we", and explicitly call out that
it's the invalidation of the caches.

	/*
	 * mn_active_invalidate_count acts for all intents and purposes
	 * like mmu_notifier_count here; but the latter cannot be used
	 * here because the invalidation of caches in the mmu_notifier
	 * event occurs _before_ mmu_notifier_count is elevated.
	 *
	 * Note, it does not matter that mn_active_invalidate_count
	 * is not protected by gpc->lock.  It is guaranteed to
	 * be elevated before the mmu_notifier acquires gpc->lock, and
	 * isn't dropped until after mmu_notifier_seq is updated.
	 */

Also, you'll definitely want to look at v3 of this series.  I'm 99% certain I
didn't change the comment though :-)

https://lore.kernel.org/all/20220429210025.3293691-1-seanjc@xxxxxxxxxx
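
For anyone skimming the thread, here's a rough sketch of the retry pattern the
comment is describing.  The names (kvm_sketch, gpc_sketch, gpc_refresh_sketch(),
gpc_map_page(), cache_retry_needed()) are made up for illustration and are NOT
the actual code from the series, just a stand-in for the ordering argument:

/*
 * Illustrative-only sketch of the ordering being discussed; all names are
 * hypothetical stand-ins, not the code from the series.
 */
#include <linux/compiler.h>	/* READ_ONCE() */
#include <asm/barrier.h>	/* smp_rmb() */

struct kvm_sketch {
	unsigned long mmu_notifier_seq;		/* bumped after an invalidation completes */
	long mn_active_invalidate_count;	/* elevated for the entire notifier event */
};

struct gpc_sketch {
	bool valid;	/* lock, gpa, khva, etc. omitted for brevity */
};

/* Stand-in for faulting in and mapping the cache's backing page. */
static bool gpc_map_page(struct gpc_sketch *gpc)
{
	return true;
}

static bool cache_retry_needed(struct kvm_sketch *kvm, unsigned long mmu_seq)
{
	/*
	 * An elevated count means an invalidation is in flight.  It might not
	 * target this cache at all (the "false positive"), but retrying is
	 * always safe.  The dangerous case, refreshing with a stale
	 * translation, can't happen: the count is elevated before the
	 * notifier acquires gpc->lock and isn't dropped until after
	 * mmu_notifier_seq is bumped, so a racing invalidation is caught
	 * either here or by the sequence check below.
	 */
	if (READ_ONCE(kvm->mn_active_invalidate_count))
		return true;

	smp_rmb();
	return READ_ONCE(kvm->mmu_notifier_seq) != mmu_seq;
}

static bool gpc_refresh_sketch(struct kvm_sketch *kvm, struct gpc_sketch *gpc)
{
	unsigned long mmu_seq;

	do {
		/* Snapshot the sequence before touching the page tables. */
		mmu_seq = READ_ONCE(kvm->mmu_notifier_seq);
		smp_rmb();

		if (!gpc_map_page(gpc))
			return false;
	} while (cache_retry_needed(kvm, mmu_seq));

	gpc->valid = true;
	return true;
}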