Re: [PATCH] locks: fix a potential use-after-free problem when wakeup a waiter

On Wed, 2020-03-04 at 07:26 -0500, Jeff Layton wrote:
> On Wed, 2020-03-04 at 15:25 +0800, yangerkun wrote:
> > Commit 16306a61d3b7 ("fs/locks: always delete_block after waiting.")
> > added logic to check waiter->fl_blocker without holding
> > blocked_lock_lock, which can trigger a use-after-free (UAF) when we
> > try to wake up a waiter:
> > 
> > Thread 1 has created a write flock a on a file. Now thread 2 tries to
> > unlock and delete flock a, while thread 3 tries to add flock b on the
> > same file.
> > 
> > Thread2                           Thread3
> >                                   flock syscall (create flock b)
> >                                   ...flock_lock_inode_wait
> >                                       flock_lock_inode (inserts our
> >                                       fl_blocked_member entry into
> >                                       flock a's fl_blocked_requests)
> >                                      sleep
> > flock syscall (unlock)
> > ...flock_lock_inode_wait
> >     locks_delete_lock_ctx
> >     ...__locks_wake_up_blocks
> >         __locks_delete_block(
> >             b->fl_blocker = NULL)
> >         ...
> >                                      interrupted by a signal
> >                                      locks_delete_block
> >                                       b->fl_blocker == NULL &&
> >                                       list_empty(&b->fl_blocked_requests)
> >                                       success, return directly
> >                                      locks_free_lock b
> >         wake_up(&b->fl_wait)
> >         triggers the UAF
> > 
> > Fix it by removing this logic. This patch may also fix CVE-2019-19769.
> > 
> > Fixes: 16306a61d3b7 ("fs/locks: always delete_block after waiting.")
> > Signed-off-by: yangerkun <yangerkun@xxxxxxxxxx>
> > ---
> >  fs/locks.c | 14 --------------
> >  1 file changed, 14 deletions(-)
> > 
> > diff --git a/fs/locks.c b/fs/locks.c
> > index 44b6da032842..426b55d333d5 100644
> > --- a/fs/locks.c
> > +++ b/fs/locks.c
> > @@ -753,20 +753,6 @@ int locks_delete_block(struct file_lock *waiter)
> >  {
> >  	int status = -ENOENT;
> >  
> > -	/*
> > -	 * If fl_blocker is NULL, it won't be set again as this thread
> > -	 * "owns" the lock and is the only one that might try to claim
> > -	 * the lock.  So it is safe to test fl_blocker locklessly.
> > -	 * Also if fl_blocker is NULL, this waiter is not listed on
> > -	 * fl_blocked_requests for some lock, so no other request can
> > -	 * be added to the list of fl_blocked_requests for this
> > -	 * request.  So if fl_blocker is NULL, it is safe to
> > -	 * locklessly check if fl_blocked_requests is empty.  If both
> > -	 * of these checks succeed, there is no need to take the lock.
> > -	 */
> > -	if (waiter->fl_blocker == NULL &&
> > -	    list_empty(&waiter->fl_blocked_requests))
> > -		return status;
> >  	spin_lock(&blocked_lock_lock);
> >  	if (waiter->fl_blocker)
> >  		status = 0;
> 
> Well spotted, but is this sufficient to fix the issue?
> 
> If Thread2 gets scheduled off before the wake_up but after removing the
> block, then it seems like it could hit the same problem regardless of
> whether you took the spinlock or not in that codepath.
> 
> The core problem seems to be that we don't have any guarantee that
> waiter "b" will still be around once the spinlock has been dropped in
> the unlock codepath.
> 

Nevermind. I think your patch is correct now that I've looked again. 

Thread2 is still holding the blocked_lock_lock, and that should be
enough to prevent the block from being freed out from under it. Since we
have to take the blocked_lock_lock in this codepath, that ensures that
Thread2 is either able to safely issue the wake_up, or that it won't
find the block on the list.
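
To spell that out, here is roughly what the two paths look like with
your patch applied (a sketch from memory of fs/locks.c, abbreviated;
the lm_notify case and the waker's list iteration are elided):

    /* Thread2 (waker): runs with blocked_lock_lock held, as in
     * __locks_wake_up_blocks() */
    __locks_delete_block(waiter);    /* waiter->fl_blocker = NULL */
    wake_up(&waiter->fl_wait);       /* still under blocked_lock_lock */

    /* Thread3 (waiter): with the lockless fast path removed, we must
     * take blocked_lock_lock before returning, so the caller cannot
     * free the lock until the waker has dropped the lock, i.e. until
     * after the wake_up above has completed */
    int locks_delete_block(struct file_lock *waiter)
    {
            int status = -ENOENT;

            spin_lock(&blocked_lock_lock);
            if (waiter->fl_blocker)
                    status = 0;
            __locks_delete_block(waiter);
            spin_unlock(&blocked_lock_lock);
            return status;
    }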

I'll go ahead and put this in linux-next for now, and will plan to get
this to Linus before v5.6 ships.

Thanks!
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



