Re: [External] Re: [PATCH v6] ext4: improve trim efficiency

On Tue, Jan 9, 2024 at 01:15, Jan Kara <jack@xxxxxxx> wrote:
>
> On Fri 01-09-23 17:28:20, Fengnan Chang wrote:
> > Commit a015434480dc ("ext4: send parallel discards on commit
> > completions") issued all discard commands in parallel, so the bios
> > could be merged into a single request and the low-level driver could
> > issue multiple segments at once, which is more efficient. Commit
> > 55cdd0af2bc5 ("ext4: get discard out of jbd2 commit kthread contex")
> > broke this behaviour; let's fix it.
> >
> > In my test:
> > 1. create 10 normal files, each 10G in size.
> > 2. deallocate each file by punching a 16k hole every 32k.
> > 3. trim the whole fs.
> > The fstrim time drops from 6.7s to 1.3s.
> >
> > Signed-off-by: Fengnan Chang <changfengnan@xxxxxxxxxxxxx>
>
> This seems to have fallen through the cracks... I'm sorry for that.
>
> >  static int ext4_try_to_trim_range(struct super_block *sb,
> >               struct ext4_buddy *e4b, ext4_grpblk_t start,
> >               ext4_grpblk_t max, ext4_grpblk_t minblocks)
> >  __acquires(ext4_group_lock_ptr(sb, e4b->bd_group))
> >  __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
> >  {
> > -     ext4_grpblk_t next, count, free_count;
> > +     ext4_grpblk_t next, count, free_count, bak;
> >       void *bitmap;
> > +     struct ext4_free_data *entry = NULL, *fd, *nfd;
> > +     struct list_head discard_data_list;
> > +     struct bio *discard_bio = NULL;
> > +     struct blk_plug plug;
> > +     ext4_group_t group = e4b->bd_group;
> > +     struct ext4_free_extent ex;
> > +     bool noalloc = false;
> > +     int ret = 0;
> > +
> > +     INIT_LIST_HEAD(&discard_data_list);
> >
> >       bitmap = e4b->bd_bitmap;
> >       start = max(e4b->bd_info->bb_first_free, start);
> >       count = 0;
> >       free_count = 0;
> >
> > +     blk_start_plug(&plug);
> >       while (start <= max) {
> >               start = mb_find_next_zero_bit(bitmap, max + 1, start);
> >               if (start > max)
> >                       break;
> > +             bak = start;
> >               next = mb_find_next_bit(bitmap, max + 1, start);
> > -
> >               if ((next - start) >= minblocks) {
> > -                     int ret = ext4_trim_extent(sb, start, next - start, e4b);
> > +                     /* with only one segment there is no need to allocate an entry */
> > +                     noalloc = (free_count == 0) && (next >= max);
>
> Is the single extent case really worth the complications to save one
> allocation? I don't think it is but maybe I'm missing something. Otherwise
> the patch looks good to me!
Yes, it's necessary: when there is only one segment, allocating memory
may cause a performance regression.
See https://lore.kernel.org/linux-ext4/CALWNXx-6y0=ZDBMicv2qng9pKHWcpJbCvUm9TaRBwg81WzWkWQ@xxxxxxxxxxxxxx/

Thanks.

>
>                                                                 Honza
>
> >
> > -                     if (ret && ret != -EOPNOTSUPP)
> > +                     trace_ext4_trim_extent(sb, group, start, next - start);
> > +                     ex.fe_start = start;
> > +                     ex.fe_group = group;
> > +                     ex.fe_len = next - start;
> > +                     /*
> > +                      * Mark blocks used, so no one can reuse them while
> > +                      * being trimmed.
> > +                      */
> > +                     mb_mark_used(e4b, &ex);
> > +                     ext4_unlock_group(sb, group);
> > +                     ret = ext4_issue_discard(sb, group, start, next - start, &discard_bio);
> > +                     if (!noalloc) {
> > +                             entry = kmem_cache_alloc(ext4_free_data_cachep,
> > +                                                     GFP_NOFS|__GFP_NOFAIL);
> > +                             entry->efd_start_cluster = start;
> > +                             entry->efd_count = next - start;
> > +                             list_add_tail(&entry->efd_list, &discard_data_list);
> > +                     }
> > +                     ext4_lock_group(sb, group);
> > +                     if (ret < 0)
> >                               break;
> >                       count += next - start;
> >               }
> > @@ -6959,6 +6950,22 @@ __releases(ext4_group_lock_ptr(sb, e4b->bd_group))
> >                       break;
> >       }
> >
> > +     if (discard_bio) {
> > +             ext4_unlock_group(sb, e4b->bd_group);
> > +             submit_bio_wait(discard_bio);
> > +             bio_put(discard_bio);
> > +             ext4_lock_group(sb, e4b->bd_group);
> > +     }
> > +     blk_finish_plug(&plug);
> > +
> > +     if (noalloc && free_count)
> > +             mb_free_blocks(NULL, e4b, bak, free_count);
> > +
> > +     list_for_each_entry_safe(fd, nfd, &discard_data_list, efd_list) {
> > +             mb_free_blocks(NULL, e4b, fd->efd_start_cluster, fd->efd_count);
> > +             kmem_cache_free(ext4_free_data_cachep, fd);
> > +     }
> > +
> >       return count;
> >  }
> >
> > --
> > 2.20.1
> >
> --
> Jan Kara <jack@xxxxxxxx>
> SUSE Labs, CR
