Spreading deep-scrubbing load

I’ve just started looking into one of our Ceph clusters because a weekly deep scrub had a major I/O impact on the cluster, causing multiple VMs to grind to a halt.

So far I’ve discovered that this particular cluster is configured incorrectly for its number of PGs: pg_num is currently 6, but the PG calc tool suggests it should be closer to ~4096.
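
For reference, I’m assuming the change itself would look something like the following (the pool name "rbd" is just an example, and from what I’ve read the increase should be done in steps to limit backfill, since pg_num can’t be decreased again):

    # check the current PG count for the pool
    ceph osd pool get rbd pg_num

    # raise pg_num in steps (e.g. 6 -> 512 -> 1024 -> ... -> 4096),
    # then raise pgp_num to match so the data actually rebalances
    ceph osd pool set rbd pg_num 512
    ceph osd pool set rbd pgp_num 512

Please correct me if that’s not the right approach.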

If I change the number of PGs to the suggested value, what should I expect, especially around deep-scrub performance but also in general, as I’m very new to Ceph? What I’m hoping is that instead of a single weekly deep scrub that runs for 24+ hours, we’d have lots of smaller deep scrubs that can each finish in a reasonable time with minimal cluster impact.
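
I’ve also come across the scrub throttling options. If I understand correctly, something like the following in the [osd] section would confine scrubs to a nightly window and slow them down to reduce client impact (the values here are just examples, not recommendations):

    [osd]
    osd scrub begin hour    = 1        # only start scrubs between 01:00...
    osd scrub end hour      = 7        # ...and 07:00
    osd max scrubs          = 1        # concurrent scrubs per OSD (the default)
    osd scrub sleep         = 0.1      # pause between scrub chunks
    osd deep scrub interval = 604800   # deep-scrub each PG weekly (the default)

I believe these can also be applied at runtime with something like "ceph tell osd.* injectargs '--osd_scrub_sleep 0.1'", though I haven’t tried that yet. Is tuning these a reasonable complement to fixing the PG count?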

Thanks.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



