Re: snaptrim blocks io on ceph pacific even on fast NVMEs

> How many osd you have on 1 nvme drives?
> We increased 2/nvme to 4/nvme and it improved the snap-trimming quite a lot.

Interesting.  Most analyses I’ve seen report diminishing returns beyond two OSDs per device.
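(For anyone wanting to reproduce that split: it’s normally done at deployment time with ceph-volume’s batch mode.  The device path below is just an example, and going from 2 to 4 OSDs per drive on an existing cluster means draining and redeploying those OSDs.)

    # carve one NVMe device into four OSDs (example device path)
    ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1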

There are definitely serialization bottlenecks in the PG and OSD code, so I’m curious about the number and size of the NVMe devices you’re using, and especially their PG ratio.  Not lowballing the PG count per OSD can have a similar effect, with less impact on CPU and RAM.  ymmv.
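To put a number on that: the PGS column of ceph osd df shows what each OSD currently carries, and the ratio can be raised either per pool or by giving the autoscaler a higher target.  Pool name and values below are just placeholders:

    # how many PGs each OSD currently holds -- see the PGS column
    ceph osd df tree

    # raise the PG count on a specific pool (placeholder name and value)
    ceph osd pool set rbd pg_num 256

    # or tell the autoscaler to aim for a higher per-OSD ratio cluster-wide
    ceph config set global mon_target_pg_per_osd 200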

> I guess the utilisation of the nvmes when you snaptrim is not 100%.

Take the iostat %util field with a grain of salt, like the load average.  Both are traditional metrics whose meanings have diffused as systems have evolved over the years.
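%util only reports the fraction of time the device had at least one request in flight, so an NVMe drive that services many requests in parallel can show 100% while it still has plenty of headroom.  Queue depth and latency are more telling, roughly:

    # extended per-device stats at a one-second interval
    iostat -x 1 nvme0n1

Watch aqu-sz and r_await/w_await (older sysstat versions call them avgqu-sz and await) rather than %util; low queue depth and sub-millisecond awaits at "100% util" usually just mean a steady trickle of small IO, not a saturated device.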

— aad
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



