Re: [PATCH v3] bcache: dynamic incremental gc

On 2022/5/21 02:24, Eric Wheeler wrote:
On Fri, 20 May 2022, Zou Mingzhe wrote:
On 2022/5/12 21:41, Coly Li wrote:
On 5/11/22 3:39 PM, mingzhe.zou@xxxxxxxxxxxx wrote:
Hi Mingzhe,

 At first glance, I feel this change may delay the small GC periods and
finally result in a large GC period, which is not expected.

But it is possible that my feeling is incorrect. Do you have detailed
performance numbers for both I/O latency and GC period? Then I can
better understand this effort.

BTW, I will add this patch to my testing set and experience myself.


Thanks.


Coly Li


Hi Coly,

First, your feeling is right. I also have some performance numbers from
before and after this patch.
Since the mailing list does not accept attachments, I put them on the gist.

Please visit the page for details:
https://gist.github.com/zoumingzhe/69a353e7c6fffe43142c2f42b94a67b5
mingzhe
The graphs certainly show that peak latency is much lower, which is an
improvement, and dmesg shows that avail_nbuckets stays about the same, so GC
is keeping up.

Questions:

1. Why is the after-"BW NO GC" graph so much flatter than the before-"BW
    NO GC" graph?  I would expect your control measurements to be about the
    same before vs after.  You might `blkdiscard` the cachedev and
    re-format between runs in case the FTL is getting in the way, or maybe
    something in the patch is affecting the "NO GC" graphs.
Hi Eric,

First, I re-format the disk with make-bcache before each fio run. As for why the after-"BW NO GC" graph is much flatter than the before-"BW NO GC" graph: I think you may have seen another patch (bcache: allow allocator invalidate bucket in gc) I pushed. I also noticed a drop in IOPS of at least 20%, but we added a lot of patches between before and after, so it will take some time to figure out which patch is causing it.
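As a side note, the reset-between-runs procedure suggested above can be sketched as a small shell step. The CACHE_DEV variable and the example device path are my own placeholders, not part of the original test setup:

```shell
# Reset the cache device between fio runs: discard the whole device so
# leftover FTL state can't skew the next measurement, then re-format it
# with make-bcache. CACHE_DEV is a hypothetical variable -- point it at
# your cache device (e.g. /dev/nvme0n1) before running.
DEV="${CACHE_DEV:-}"
if [ -n "$DEV" ]; then
    blkdiscard "$DEV"      # TRIM every block on the cache device
    make-bcache -C "$DEV"  # re-create the cache set for the next run
else
    echo "CACHE_DEV not set; skipping device reset (dry run)"
fi
```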

2. I wonder how the latency looks if you zoom in on the latency graph:
    if you truncate the before-"LATENCY DO GC" graph at 3000 us, how
    does the average latency compare between the two?
I will test the performance numbers for each patch one by one, and provide more detailed graphs and numbers later.

mingzhe

3. This may be solved if you can fix the control graph issue in #1, but
    the before vs after of "BW DO GC" shows about a 30% decrease in
    bandwidth performance outside of the GC spikes.  "IOPS DO GC" is lower
    with the patch too.  Do you think that your dynamic incremental gc
    algorithm can be tuned to deal with GC latency and still provide
    nearly the same IOPS and bandwidth as before?


--
Eric Wheeler




[snipped]





