Re: ceph-mon rocksdb write latency

On 11-01-2022 09:36, Anthony D'Atri wrote:
>> Our hosts run all NVMe
>
> Which drives, specifically? And how many OSDs per? How many PGs per OSD?

There are 3 types of devices:
* HPE NS204i-p Gen10+ Boot Controller
 - stores the /var/lib/ceph folder
* HPE 7.68TB NVMe x4 RI SFF SC U.3 SSD
 - We run 3 OSDs per drive on these

We have only debugged this on the ceph-mon nodes.
We have a max of 300 PGs per OSD.
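(In case it helps, one way to spot-check the per-OSD PG count is the sketch below. It reads the PGS column from ceph osd df; the exact column position varies between releases, so the awk field index is an assumption:)

  # Print OSD id and PG count from the PGS column of ceph osd df
  # (second-to-last field in recent releases; adjust $(NF-1) if your output differs)
  ceph osd df | awk '$1 ~ /^[0-9]+$/ { print $1, $(NF-1) }' | sort -rn -k2 | head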



>> While digging a little deeper with biosnoop I found that when we get the etcd errors, rocksdb is also writing, every time it happens.
>
> How intense is the workload?  Could it be that what you're seeing
> specifically is compaction?

The cluster is more or less idle.
It could well be compaction - how can I verify that?
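
One thing I am thinking of trying is to grep the mon's RocksDB LOG for compaction events and correlate the timestamps with the biosnoop output and the etcd errors. Rough sketch below - it assumes a package-based deployment with the default mon data directory and that the mon ID is the short hostname, so adjust the paths for your setup:

  # Default mon store location for package installs; cephadm/container setups differ.
  MON_ID=$(hostname -s)
  MON_STORE=/var/lib/ceph/mon/ceph-$MON_ID/store.db

  # RocksDB records every compaction in its LOG file.
  grep -i compaction "$MON_STORE/LOG" | tail -20

  # If your release exposes them, the mon's rocksdb perf counters also show
  # compaction counts and latency via the admin socket.
  ceph daemon mon.$MON_ID perf dump rocksdb

If the COMM column in biosnoop shows the RocksDB background threads (names like rocksdb:low*) at the same moments, that would also point at compaction rather than normal mon writes.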
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



