On Mon, Jan 6, 2020 at 2:22 AM Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx> wrote:
>
>
>
> On 1/6/20 10:56 AM, Guoqing Jiang wrote:
> >
> >
> > On 1/4/20 12:15 AM, Song Liu wrote:
> >> On Mon, Dec 23, 2019 at 1:49 AM <jgq516@xxxxxxxxx> wrote:
> >>> From: Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx>
> >>>
> >>> Previously, we call check_and_add_serial when serialization is
> >>> enabled for write IO, but it could allocate and free memory
> >>> back and forth.
> >>>
> >>> Now, let's just get an element from memory pool with the new
> >>> function, then insert node to rb tree if no collision happens.
> >>>
> >>> Signed-off-by: Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx>
> >>> ---
> >>>  drivers/md/raid1.c | 41 ++++++++++++++++++++++-------------------
> >>>  1 file changed, 22 insertions(+), 19 deletions(-)
> >>>
> >>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> >>> index 48d553d7989a..cd810e195086 100644
> >>> --- a/drivers/md/raid1.c
> >>> +++ b/drivers/md/raid1.c
> >>> @@ -56,32 +56,43 @@ static void lower_barrier(struct r1conf *conf, sector_t sector_nr);
> >>>  INTERVAL_TREE_DEFINE(struct serial_info, node, sector_t, _subtree_last,
> >>>                      START, LAST, static inline, raid1_rb);
> >>>
> >>> -static int check_and_add_serial(struct md_rdev *rdev, sector_t lo, sector_t hi)
> >>> +static int check_and_add_serial(struct md_rdev *rdev, struct r1bio *r1_bio,
> >>> +                               struct serial_info *si, int idx)
> >>>  {
> >>> -       struct serial_info *si;
> >>>         unsigned long flags;
> >>>         int ret = 0;
> >>> -       struct mddev *mddev = rdev->mddev;
> >>> -       int idx = sector_to_idx(lo);
> >>> +       sector_t lo = r1_bio->sector;
> >>> +       sector_t hi = lo + r1_bio->sectors;
> >>>         struct serial_in_rdev *serial = &rdev->serial[idx];
> >>>
> >>> -       si = mempool_alloc(mddev->serial_info_pool, GFP_NOIO);
> >>> -
> >>>         spin_lock_irqsave(&serial->serial_lock, flags);
> >>>         /* collision happened */
> >>>         if (raid1_rb_iter_first(&serial->serial_rb, lo, hi))
> >>>                 ret = -EBUSY;
> >>> -       if (!ret) {
> >>> +       else {
> >>>                 si->start = lo;
> >>>                 si->last = hi;
> >>>                 raid1_rb_insert(si, &serial->serial_rb);
> >>> -       } else
> >>> -               mempool_free(si, mddev->serial_info_pool);
> >>> +       }
> >>>         spin_unlock_irqrestore(&serial->serial_lock, flags);
> >>>
> >>>         return ret;
> >>>  }
> >>>
> >>> +static void wait_for_serialization(struct md_rdev *rdev, struct r1bio *r1_bio)
> >>> +{
> >>> +       struct mddev *mddev = rdev->mddev;
> >>> +       struct serial_info *si;
> >>> +       int idx = sector_to_idx(r1_bio->sector);
> >>> +       struct serial_in_rdev *serial = &rdev->serial[idx];
> >>> +
> >>> +       if (WARN_ON(!mddev->serial_info_pool))
> >>> +               return;
> >>> +       si = mempool_alloc(mddev->serial_info_pool, GFP_NOIO);
> >>> +       wait_event(serial->serial_io_wait,
> >>> +                  check_and_add_serial(rdev, r1_bio, si, idx) == 0);
> >> Are we leaking si when raid1_rb_iter_first() failed?
> >
> > Now, 'si' is only allocated once before wait_event, if check_and_add_serial
> returns -EBUSY, then

Ah, I missed the wait part.

Applied the series to md-next.

Thanks,
Song
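
For readers following the leak question above: wait_event() re-evaluates its
condition after every wakeup, so the single 'si' taken from the mempool before
the wait is passed back into check_and_add_serial() until the insertion
succeeds; on -EBUSY the node is simply retried, never dropped. Below is a
minimal userspace model of that allocate-once, retry-until-inserted pattern.
It is a sketch only, not the kernel code: a pthread condition variable stands
in for serial_io_wait, a linked list stands in for the rb-tree interval tree,
the overlap check is folded into the wait loop because pthread_cond_wait()
must be called with the mutex held, and names like wait_for_range() and
range_done() are illustrative inventions.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for struct serial_info; 'next' replaces the rb-tree node. */
struct serial_info {
        unsigned long start, last;
        struct serial_info *next;
};

static struct serial_info *in_flight;   /* models serial->serial_rb */
static pthread_mutex_t serial_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t serial_io_wait = PTHREAD_COND_INITIALIZER;

/* Models raid1_rb_iter_first(): does [lo, hi) overlap an in-flight write? */
static bool overlaps(unsigned long lo, unsigned long hi)
{
        for (struct serial_info *p = in_flight; p; p = p->next)
                if (p->start < hi && lo < p->last)
                        return true;
        return false;
}

/* Models wait_for_serialization(): one allocation, retried until inserted. */
static struct serial_info *wait_for_range(unsigned long lo, unsigned long hi)
{
        struct serial_info *si = malloc(sizeof(*si));   /* mempool_alloc() */

        pthread_mutex_lock(&serial_lock);
        /* Like wait_event(): sleep, then re-check the condition on wakeup.
         * The same 'si' is reused on every retry, so -EBUSY never leaks it. */
        while (overlaps(lo, hi))
                pthread_cond_wait(&serial_io_wait, &serial_lock);
        si->start = lo;
        si->last = hi;
        si->next = in_flight;
        in_flight = si;                 /* ownership passes to the "tree" */
        pthread_mutex_unlock(&serial_lock);
        return si;
}

/* Models the write-completion path: remove the node, wake waiters, free it. */
static void range_done(struct serial_info *si)
{
        pthread_mutex_lock(&serial_lock);
        for (struct serial_info **pp = &in_flight; *pp; pp = &(*pp)->next)
                if (*pp == si) {
                        *pp = si->next;
                        break;
                }
        pthread_mutex_unlock(&serial_lock);
        pthread_cond_broadcast(&serial_io_wait);    /* waiters re-check */
        free(si);                                   /* mempool_free() */
}

int main(void)
{
        struct serial_info *a = wait_for_range(0, 8);
        /* A second writer to [0, 8) would block in wait_for_range() here. */
        range_done(a);
        puts("range inserted and removed without leaking");
        return 0;
}

The kernel version differs in detail (check_and_add_serial() takes the
spinlock itself and wait_event() re-invokes the whole check), but the
ownership rule is the same: 'si' either goes into the tree or is retried,
and it is only freed after the serialized write completes.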