Re: Best practices for OSD on bcache

On Tue, Mar 02, 2021 at 05:47:29PM +0800, Norman.Kern wrote:
> Matthias, 
> 
> I agree with you on tuning. I asked this question because my OSDs have problems when
> cache_available_percent drops below 30: the SSDs become almost useless and all I/Os
> bypass to the HDDs with large latency.


Hi,

I used to tune writeback_percent as far down as 1. I guess rapid
writeback helped keep the complexity (CPU, additional I/O) of handling
dirty blocks low. Hoarding dirty data for a better chance of eventually
turning it into sequential I/O is an important gain on MD-RAID5/6, but
not so much on a single disk.
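For reference, writeback_percent is a per-backing-device sysfs knob.
A minimal example, assuming your backing device shows up as bcache0
(adjust to your setup):

  # start writeback almost immediately instead of hoarding dirty data
  echo 1 > /sys/block/bcache0/bcache/writeback_percent

  # watch how much dirty data is still queued for the backing device
  cat /sys/block/bcache0/bcache/dirty_data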

Perhaps at cache_available_percent < 30 bcache needs to do some garbage
collection. That would at least cause a CPU spike, and probably an
additional I/O spike for the on-disk data structures.
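If you want to correlate your stalls with cache occupancy, the cache
set exposes it in sysfs (the UUID below is a placeholder for your
cache set's):

  # share of the cache that can still be allocated without eviction/GC
  cat /sys/fs/bcache/<cache-set-uuid>/cache_available_percent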

This is where you really want decent DC-grade caching devices that can
keep up with this sort of write-amplification spike. Consumer-grade
devices won't be able to, and add their own very noticeable internal
garbage collection on top.

Bypassing the caching SSD on (non-sequential) I/O is usually a
symptom of bcache detecting a saturated caching device, i.e. the SSDs
are probably not DC-grade. At that point you get all the latency of
the backing HDD, plus some more from metadata handling. You could even
tune bcache to never bypass, but then it would only add further
latency.
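If you want to experiment with that anyway: as far as I know, setting
the congestion thresholds to 0 disables the latency-triggered bypass,
and sequential_cutoff handles the sequential case (device name and
UUID are again placeholders for your setup):

  # don't bypass the cache when the SSD looks congested
  echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
  echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us

  # don't bypass large sequential I/O either (default cutoff is 4 MB)
  echo 0 > /sys/block/bcache0/bcache/sequential_cutoff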


Regards
Matthias
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



