Re: [PATCH 0/4] Improve group scanning in CR 0 and CR 1 passes

On Jan 29, 2021, at 3:29 PM, Harshad Shirwadkar <harshadshirwadkar@xxxxxxxxx> wrote:
> This patch series significantly improves the cr 0 and cr 1 passes of
> the allocator. Currently, at cr 0 and cr 1, we perform linear lookups
> to find matching groups. That's very inefficient for large file
> systems with millions of block groups. At cr 0, we only care about
> groups whose largest free order is >= the request's order, and at
> cr 1 we only care about groups whose average fragment size is greater
> than the request size. So, this patch series introduces new data
> structures that let us perform the cr 0 lookup in constant time and
> the cr 1 lookup in log(number of groups) time instead of linear time.

I've added Alex and Artem to the CC list, since they've both been working
on this code recently and hopefully they can review these patches.  Also,
I've added Shuichi in the hopes that he can give this a test on a very
large filesystem to see what kind of improvement it shows.

Cheers, Andreas

> For cr 0, we add a list for each order, and each group is enqueued on
> the list matching its largest free order. This allows us to look up a
> match at cr 0 in constant time.
> 
> For cr 1, we add a new rb tree of groups sorted by largest fragment
> size. This allows us to look up a match for cr 1 in log(num groups)
> time.
> 
> Verified that there are no regressions in smoke tests (-g quick -c 4k).
> 
> Also, to demonstrate the effectiveness of the patch series, the
> following experiment was performed:
> 
> Created a highly fragmented 65TB disk that had no contiguous 2M
> regions. The following command was run three consecutive times:
> 
> time dd if=/dev/urandom of=file bs=2M count=10
> 
> Here are the results with and without cr 0/1 optimizations:
> 
> |---------+------------------------------+---------------------------|
> |         | Without CR 0/1 Optimizations | With CR 0/1 Optimizations |
> |---------+------------------------------+---------------------------|
> | 1st run | 5m1.871s                     | 2m47.642s                 |
> | 2nd run | 2m28.390s                    | 0m0.611s                  |
> | 3rd run | 2m26.530s                    | 0m1.255s                  |
> |---------+------------------------------+---------------------------|
> 
> Signed-off-by: Harshad Shirwadkar <harshadshirwadkar@xxxxxxxxx>
> 
> Harshad Shirwadkar (4):
>  ext4: add MB_NUM_ORDERS macro
>  ext4: drop s_mb_bal_lock and convert protected fields to atomic
>  ext4: improve cr 0 / cr 1 group scanning
>  ext4: add proc files to monitor new structures
> 
> fs/ext4/ext4.h    |  12 +-
> fs/ext4/mballoc.c | 330 ++++++++++++++++++++++++++++++++++++++++++----
> fs/ext4/mballoc.h |   6 +
> fs/ext4/sysfs.c   |   2 +
> 4 files changed, 324 insertions(+), 26 deletions(-)
> 
> --
> 2.30.0.365.g02bc693789-goog
> 
