Re: [PATCH] md: fix a crash in mempool_free

On Sat, 05 Nov 2022, Mikulas Patocka wrote:
> 
> On Fri, 4 Nov 2022, NeilBrown wrote:
> 
> > > ---
> > >  drivers/md/md.c |    6 ++++--
> > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > 
> > > Index: linux-2.6/drivers/md/md.c
> > > ===================================================================
> > > --- linux-2.6.orig/drivers/md/md.c	2022-11-03 15:29:02.000000000 +0100
> > > +++ linux-2.6/drivers/md/md.c	2022-11-03 15:33:17.000000000 +0100
> > > @@ -509,13 +509,14 @@ static void md_end_flush(struct bio *bio
> > >  	struct md_rdev *rdev = bio->bi_private;
> > >  	struct mddev *mddev = rdev->mddev;
> > >  
> > > +	bio_put(bio);
> > > +
> > >  	rdev_dec_pending(rdev, mddev);
> > >  
> > >  	if (atomic_dec_and_test(&mddev->flush_pending)) {
> > >  		/* The pre-request flush has finished */
> > >  		queue_work(md_wq, &mddev->flush_work);
> > >  	}
> > > -	bio_put(bio);
> > >  }
> > >  
> > >  static void md_submit_flush_data(struct work_struct *ws);
> > > @@ -913,10 +914,11 @@ static void super_written(struct bio *bi
> > >  	} else
> > >  		clear_bit(LastDev, &rdev->flags);
> > >  
> > > +	bio_put(bio);
> > > +
> > >  	if (atomic_dec_and_test(&mddev->pending_writes))
> > >  		wake_up(&mddev->sb_wait);
> > >  	rdev_dec_pending(rdev, mddev);
> > > -	bio_put(bio);
> > >  }
> > 
> > Thanks. I think this is a clear improvement.
> > I think it would be a little better if the rdev_dec_pending were also
> > moved up.
> > Then both code fragments would be:
> >   bio_put ; rdev_dec_pending ; atomic_dec_and_test
> > 
> > Thanks,
> > NeilBrown
> 
> Yes, I'll send a second patch that moves rdev_dec_pending up too.
> 
> BTW. even this is theoretically incorrect:
> 
> > >     if (atomic_dec_and_test(&mddev->pending_writes))
> > >             wake_up(&mddev->sb_wait);
> 
> Suppose that we execute atomic_dec_and_test, the CPU then switches to a 
> process that tears down the md device and frees mddev, and then we 
> switch back and call "wake_up(&mddev->sb_wait)" on the freed memory.
> 
> I think that we should use wait_var_event/wake_up_var instead of sb_wait. 
> That will use preallocated hashed wait queues.
> 
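
For illustration, the wait_var_event()/wake_up_var() pattern suggested
above might look like the sketch below.  This is only a sketch, not a
tested patch; the exact call sites (super_written() and md_super_wait())
are assumed from context.

```c
/* Sketch only -- not compiled or tested.
 *
 * wake_up_var() uses the address purely as a hash key into the kernel's
 * preallocated wait-queue table; it never dereferences mddev.  So even if
 * the waiter frees mddev the instant the counter reaches zero, the waker
 * touches no freed memory after its atomic_dec_and_test().
 */

/* completion side, e.g. in super_written(): */
if (atomic_dec_and_test(&mddev->pending_writes))
	wake_up_var(&mddev->pending_writes);

/* waiter side, e.g. in md_super_wait(): */
wait_var_event(&mddev->pending_writes,
	       atomic_read(&mddev->pending_writes) == 0);
```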

I agree there is a potential problem.  Using wait_var_event is an
approach that could work.
An alternative would be to change that code to

  if (atomic_dec_and_lock(&mddev->pending_writes, &mddev->lock)) {
          wake_up(&mddev->sb_wait);
          spin_unlock(&mddev->lock);
  }

As __md_stop() takes mddev->lock, it would not be able to get to the
'free' until after the lock was dropped.

Thanks,
NeilBrown

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/dm-devel



