Re: [PATCH v3] x86/sgx: Synchronize encl->srcu in sgx_encl_release().

On Tue, Jan 12, 2021 at 07:35:50PM +0100, Borislav Petkov wrote:
> + paulmck.
> 
> On Tue, Jan 12, 2021 at 02:08:10AM +0200, Jarkko Sakkinen wrote:
> > On Tue, Jan 05, 2021 at 03:57:49PM +0100, Borislav Petkov wrote:
> > > On Wed, Dec 16, 2020 at 03:49:20PM +0200, Jarkko Sakkinen wrote:
> > > > Add synchronize_srcu_expedited() to sgx_encl_release() to catch a grace
> > > > period initiated by sgx_mmu_notifier_release().
> > > > 
> > > > A trivial example of a failing sequence with tasks A and B:
> > > > 
> > > > 1. A: -> sgx_release()
> > > > 2. B: -> sgx_mmu_notifier_release()
> > > > 3. B: -> list_del_rcu()
> > > > 4. A: -> sgx_encl_release()
> > > > 5. A: -> cleanup_srcu_struct()
> > > > 
> > > > The loop in sgx_release() observes an empty list because B has removed its
> > > > entry in the middle, and calls cleanup_srcu_struct() before B has a chance
> > > > to call synchronize_srcu().
> > > 
> > > Leading to what? NULL ptr?
> > > 
> > > https://lkml.kernel.org/r/X9e2jOWz1hfXVpQ5@xxxxxxxxxx
> > > 
> > > already suggested that you should explain the bug better and add the
> > > splat but I'm still missing that explanation.
> > 
> > OK, I'll try to explain the issue as I understand it.
> > 
> > Consider this loop in the VFS release hook (sgx_release):
> > 
> > 	/*
> > 	 * Drain the remaining mm_list entries. At this point the list contains
> > 	 * entries for processes which have closed the enclave file but have
> > 	 * not exited yet. Processes which have already exited were removed
> > 	 * from the list by sgx_mmu_notifier_release().
> > 	 */
> > 	for ( ; ; )  {
> > 		spin_lock(&encl->mm_lock);
> > 
> > 		if (list_empty(&encl->mm_list)) {
> > 			encl_mm = NULL;
> > 		} else {
> > 			encl_mm = list_first_entry(&encl->mm_list,
> > 						   struct sgx_encl_mm, list);
> > 			list_del_rcu(&encl_mm->list);
> > 		}
> > 
> > 		spin_unlock(&encl->mm_lock);
> > 
> > 		/* The enclave is no longer mapped by any mm. */
> > 		if (!encl_mm)
> > 			break;
> > 
> > 		synchronize_srcu(&encl->srcu);
> > 		mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);
> > 		kfree(encl_mm);
> > 	}
> > 
> > 
> > At this point all processes have closed the enclave file, but that doesn't
> > mean that they all have exited yet.
> > 
> > Now, let's imagine that there is exactly one entry in encl->mm_list,
> > and that sgx_release() gets scheduled out right after returning from
> > synchronize_srcu().
> > 
> > With some bad luck, some process comes and removes that last entry before
> > sgx_release() acquires mm_lock. The loop in sgx_release() then just exits:
> > 
> > 		/* The enclave is no longer mapped by any mm. */
> > 		if (!encl_mm)
> > 			break;
> > 
> > No synchronize_srcu(). As far as I can tell, sgx_release() then falls
> > through to sgx_encl_release(), which calls cleanup_srcu_struct() and
> > frees the enclave, while the exiting process may still be on its way
> > to synchronize_srcu(&encl->srcu) in sgx_mmu_notifier_release(), i.e.
> > the SRCU struct can be cleaned up and freed from under it.
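> > 
> > Or, stripped of the SGX specifics, a minimal sketch of the hazardous
> > pattern (my illustration with made-up names, not the actual driver
> > code):
> > 
> > 	struct foo {
> > 		struct srcu_struct srcu;
> > 		struct list_head mm_list;
> > 		spinlock_t lock;
> > 	};
> > 
> > 	/* B: the exiting task unlinks itself, then waits for readers. */
> > 	static void side_b(struct foo *f, struct list_head *entry)
> > 	{
> > 		spin_lock(&f->lock);
> > 		list_del_rcu(entry);
> > 		spin_unlock(&f->lock);
> > 		synchronize_srcu(&f->srcu);	/* still in flight... */
> > 	}
> > 
> > 	/* A: the release path sees an empty list and tears down. */
> > 	static void side_a(struct foo *f)
> > 	{
> > 		cleanup_srcu_struct(&f->srcu);	/* ...when this runs */
> > 		kfree(f);			/* and f goes away */
> > 	}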
> > 
> > After writing this, I think that the placement of synchronize_srcu()
> > in this patch is not the best possible. Rather, the loop above should
> > also call synchronize_srcu() before leaving.
> > 
> > I.e. the code change would result in:
> > 
> > 	for ( ; ; )  {
> > 		spin_lock(&encl->mm_lock);
> > 
> > 		if (list_empty(&encl->mm_list)) {
> > 			encl_mm = NULL;
> > 		} else {
> > 			encl_mm = list_first_entry(&encl->mm_list,
> > 						   struct sgx_encl_mm, list);
> > 			list_del_rcu(&encl_mm->list);
> > 		}
> > 
> > 		spin_unlock(&encl->mm_lock);
> > 
> > 		/*
> > 		 * synchronize_srcu() is mandatory *even* when the list was
> > 		 * empty, in order to make sure that grace periods stay in
> > 		 * sync even when another task took away the last entry
> > 		 * (i.e. an exiting process deleting its mm_list entry).
> > 		 */
> > 		synchronize_srcu(&encl->srcu);
> > 
> > 		/* The enclave is no longer mapped by any mm. */
> > 		if (!encl_mm)
> > 			break;
> > 
> > 		mmu_notifier_unregister(&encl_mm->mmu_notifier, encl_mm->mm);
> > 		kfree(encl_mm);
> > 	}
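> > 
> > (The cost of this variant is one extra synchronize_srcu() for the
> > final, empty-list iteration, but as far as I can tell that is the
> > price for closing the window.)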
> > 
> > What do you think? Does this start to make more sense now?
> > I don't have logs for this, but the bug can also be worked out by
> > reasoning alone.
> 
> It does. Now you need to write it up in a detailed form so that it is
> clear to readers months/years from now what exactly can happen. You can
> use a two-column format like
> 
> 	CPU A				CPU B
> 
> Bla
> 					Blu
> 
> This happens now here
> 					But this needs to happen there
> 
> and so on.
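> 
> E.g. for this bug, if I've understood your explanation correctly,
> something like the following (rough sketch only, please verify the
> exact call sequence):
> 
> 	CPU A (sgx_release)		CPU B (exiting process)
> 
> 					sgx_mmu_notifier_release()
> 					spin_lock(&encl->mm_lock)
> 					list_del_rcu() of the last entry
> 					spin_unlock(&encl->mm_lock)
> 	spin_lock(&encl->mm_lock)
> 	list_empty() -> encl_mm = NULL
> 	spin_unlock(&encl->mm_lock)
> 	break out of the loop
> 	sgx_encl_release()
> 	cleanup_srcu_struct(&encl->srcu)
> 					synchronize_srcu(&encl->srcu)
> 					-> operates on a cleaned-up
> 					   (and soon freed) srcu_struct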
> 
> Also, from reading up a bit on this, Documentation/RCU/checklist.rst says
> 
> "Use of the expedited primitives should be restricted to rare
> configuration-change operations that would not normally be undertaken
> while a real-time workload is running."
> 
> so why are you using synchronize_srcu_expedited()? Grepping the tree
> reveals only a couple of call sites... but I've almost no clue about
> RCU so lemme CC Paul.

The SRCU expedited grace periods are easier on real-time workloads than
synchronize_rcu_expedited(), but the SRCU variant still burns more CPU
time on a given grace period.  But either way, as the document says
further down "However, real-time workloads can use rcupdate.rcu_normal
kernel boot parameter to completely disable expedited grace periods,
though this might have performance implications."
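
(For the record, that is the rcupdate.rcu_normal=1 kernel boot
parameter.  When it is set, the expedited primitives, including
synchronize_srcu_expedited(), simply behave as their normal
counterparts.)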

So what are the performance implications in this case?

							Thanx, Paul


