On Sat, Apr 04, 2020 at 04:12:02AM +0300, Jarkko Sakkinen wrote:
> On Fri, Apr 03, 2020 at 04:42:39PM -0700, Sean Christopherson wrote:
> > On Fri, Apr 03, 2020 at 12:35:50PM +0300, Jarkko Sakkinen wrote:
> > > From: Sean Christopherson <sean.j.christopherson@xxxxxxxxx>
> > > @@ -221,12 +224,16 @@ int sgx_encl_mm_add(struct sgx_encl *encl, struct mm_struct *mm)
> > >  		return ret;
> > >  	}
> > >  
> > > +	/*
> > > +	 * The page reclaimer uses the list version for synchronization
> > > +	 * instead of synchronize_srcu() because otherwise we could
> > > +	 * conflict with dup_mmap().
> > > +	 */
> > >  	spin_lock(&encl->mm_lock);
> > >  	list_add_rcu(&encl_mm->list, &encl->mm_list);
> > 
> > You dropped the smp_wmb().
> 
> As I said to you in my review, the x86 pipeline does not reorder writes.

And as I pointed out in this thread, smp_wmb() is a _compiler_ barrier if
and only if CONFIG_SMP=y.  The compiler can reorder list_add_rcu() and
mm_list_version++ because from its perspective there is no dependency
between the two.  And that's entirely true except for the SMP case, where
the consumer of mm_list_version relies on the list being updated before
the version changes.