Re: Writeback cache all used.

> On Apr 2, 2023, at 08:01, Eric Wheeler <bcache@xxxxxxxxxxxxxxxxxx> wrote:
> 
> On Fri, 31 Mar 2023, Adriano Silva wrote:
>> Thank you very much!
>> 
>>> I don't know for sure, but I'd think that since 91% of the cache is
>>> evictable, writing would just evict some data from the cache (without
>>> writing to the HDD, since it's not dirty data) and write to that area of
>>> the cache, *not* to the HDD. It wouldn't make sense in many cases to
>>> actually remove data from the cache, because then any reads of that data
>>> would have to read from the HDD; leaving it in the cache has very little
>>> cost and would speed up any reads of that data.
>> 
>> Maybe you're right; it seems to be writing to the cache, despite 
>> indicating that the cache is 100% full.
>> 
>> I noticed that read performance is still excellent, but write 
>> performance dropped a lot when the cache was full. It is still 
>> higher than the HDD alone, but much lower than when the cache is 
>> half full or empty.
>> 
>> Sequential write tests with fio now show 240MB/s, compared to 
>> 900MB/s when the cache was still half full. Write latency has also 
>> increased. IOPS on random 4K writes are now in the 5K range; it was 
>> 16K with a half-used cache. With random 4K ioping, latency went up 
>> as well: with a half-full cache it was 500us, and it is now 945us.
>> 
>> For reading, nothing has changed.
>> 
>> However, for systems where write latency is critical, it makes a 
>> significant difference. If possible I would like to always keep a 
>> reasonable amount of empty space in the cache, to improve write 
>> response times and reduce 4K latency in particular. Even if that 
>> meant scheduling a script in crontab (or something similar) so that, 
>> during the night, the system runs a command to evict a percentage of 
>> the cache (30%, for example) that has been unused for the longest 
>> time. This would possibly make the cache more efficient on writes as 
>> well.
> 
> That is an interesting idea since it saves latency. Keeping a few 
> unused buckets ready to go would prevent GC during a cached write.
> 

Currently around 10% of the cache is already reserved; if dirty data exceeds that threshold, further writes go directly to the backing device.

Reserving more space doesn't change much if busy write requests keep arriving. As for occupied clean cache space, I tested years ago that it can be shrunk very quickly, so it won't be a performance bottleneck. If the situation has changed, please let me know.
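As a rough illustration only (this is not the actual bcache code path; the struct layout and the DIRTY_RESERVE_PERCENT name are made up for the example), the bypass decision described above amounts to something like:

#include <stdbool.h>
#include <stdint.h>

struct cache_stats {
	uint64_t nbuckets;       /* total buckets in the cache set */
	uint64_t dirty_buckets;  /* buckets holding unwritten data */
};

#define DIRTY_RESERVE_PERCENT 10  /* hypothetical ~10% reserve */

/* Decide whether a new write should bypass the cache and go
 * straight to the backing device. */
static bool write_should_bypass_cache(const struct cache_stats *c)
{
	uint64_t dirty_limit =
		c->nbuckets * (100 - DIRTY_RESERVE_PERCENT) / 100;

	return c->dirty_buckets >= dirty_limit;
}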

> Coly, would this be an easy feature to add?
> 

Making the change itself wouldn't be complex, but I don't think it would solve the original write-performance problem when the cache space is almost full. In the code we already have similar lists that hold available buckets for future data/metadata allocation; if those lists are empty, time is still needed for dirty writeback and, if necessary, garbage collection.
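Very roughly, and only as a sketch with invented names rather than the real allocator, the situation looks like this: buckets ready for reuse sit on per-purpose free lists, and when a list is empty the caller has to wait for writeback and GC to refill it.

#include <stddef.h>

enum alloc_reserve { RESERVE_BTREE, RESERVE_DATA, RESERVE_NR };

struct bucket_list {
	long *buckets;   /* indices of ready-to-use buckets */
	size_t count;
};

struct cache_set_sketch {
	struct bucket_list free[RESERVE_NR];
};

/* Stand-ins for the real writeback and GC machinery, which would
 * refill the free lists as a side effect. */
void flush_some_dirty_data(struct cache_set_sketch *c);
void run_garbage_collection(struct cache_set_sketch *c);

static long alloc_bucket(struct cache_set_sketch *c, enum alloc_reserve r)
{
	struct bucket_list *list = &c->free[r];

	while (list->count == 0) {
		/* Nothing ready: this wait is where the extra write
		 * latency comes from when the cache is nearly full. */
		flush_some_dirty_data(c);
		run_garbage_collection(c);
	}
	return list->buckets[--list->count];
}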

> Bcache would need a `cache_min_free` tunable that would (asynchronously) 
> free the least recently used buckets that are not dirty.
> 

For clean cache space this already exists. Shrinking clean cache space is very fast: in a test I did two years ago, it took no more than 10 seconds to reclaim around 1TB+ of clean cache space. I guess the actual reclaim time was even less, because reading the information from the priorities file also takes time.
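If a cache_min_free tunable like the one proposed above were added, a purely hypothetical shape for it (every helper name below is invented; nothing here exists in bcache today) could be a background worker along these lines:

#include <stdint.h>

struct cache_set_sketch;   /* opaque for the sketch */

/* Invented helpers standing in for existing kernel mechanisms. */
uint64_t free_bucket_count(struct cache_set_sketch *c);
uint64_t total_bucket_count(struct cache_set_sketch *c);
long pick_lru_clean_bucket(struct cache_set_sketch *c);  /* -1 if none */
void invalidate_bucket(struct cache_set_sketch *c, long bucket);

static unsigned int cache_min_free_percent = 5;  /* the proposed tunable */

/* Would run asynchronously, e.g. from a kthread or workqueue. */
static void reclaim_clean_buckets(struct cache_set_sketch *c)
{
	uint64_t want = total_bucket_count(c) * cache_min_free_percent / 100;

	while (free_bucket_count(c) < want) {
		long bucket = pick_lru_clean_bucket(c);

		if (bucket < 0)
			break;   /* everything left is dirty */
		invalidate_bucket(c, bucket);
	}
}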



Coly Li





