Re: DM-Cache for spinning OSDs


 



Hi,

On 5/17/22 08:51, Stolte, Felix wrote:
Hey guys,

I have three servers with 12x 12 TB SATA HDDs and 1x 3.4 TB NVMe. I am thinking of putting DB/WAL on the NVMe, as well as a 5 GB dm-cache for each spinning disk. Is anyone running something like this in a production environment?


We have some servers with a similar hardware setup. Instead of using separate partitions/LVs for the DBs, we used the whole NVMe as a cache device. The OSDs were set up without separate DB devices.

As the caching layer we used bcache instead of dm-cache. There are performance benchmarks on the net indicating that bcache performs better in many use cases, and you can share a single cache device between several data devices out of the box.
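For reference, a minimal sketch of that whole-NVMe shared-cache setup with bcache (device names are examples for illustration; run as root with bcache-tools installed, and adapt to your own devices):

```shell
# Format the NVMe as the (single, shared) bcache cache device.
make-bcache -C /dev/nvme0n1

# Register the spinning disks as backing devices; several backing
# devices can be listed at once and later share the one cache set.
make-bcache -B /dev/sda /dev/sdb

# Attach each backing device to the cache set by the cache set's UUID
# (shown by bcache-super-show /dev/nvme0n1). <cset-uuid> is a placeholder.
echo <cset-uuid> > /sys/block/bcache0/bcache/attach

# The OSDs are then created on /dev/bcache0, /dev/bcache1, ...
```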

Speaking of use cases: depending on yours (e.g. RBD only), there won't be much metadata in the DB, so the overall DB size will be rather small. Reserving 30 GB for each DB partition would waste capacity in that case.

The whole-disk approach will also cache data reads and writes, resulting in an overall performance improvement in most cases. But YMMV.


In newer installations we no longer use bcache. The operational complexity is too high: replacing disks requires extra steps, which made the whole process fragile, and it is not supported by the standard deployment tools. dm-cache might be the better solution operations-wise.
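dm-cache is usually driven through LVM (lvmcache). A rough per-OSD sketch of what that could look like (VG/LV names, devices, and extents are made-up examples, not a tested recipe):

```shell
# One VG per OSD, spanning the HDD and an NVMe slice (example names).
vgcreate vg_osd0 /dev/sda /dev/nvme0n1p1

# Data LV on the HDD, cache pool on the NVMe partition.
lvcreate -n data -l 100%PVS vg_osd0 /dev/sda
lvcreate --type cache-pool -l 100%PVS -n cpool vg_osd0 /dev/nvme0n1p1

# Attach the cache pool to the data LV; LVM sets up dm-cache underneath.
lvconvert --type cache --cachepool vg_osd0/cpool vg_osd0/data

# The OSD is then created on vg_osd0/data.
```

One operational upside over bcache: the cached LV is plain LVM, so standard tooling (and `lvconvert --uncache` before a disk swap) keeps disk replacement simpler.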


Regards,

Burkhard

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



