I have 15 NVMe drives of 6.4 TB each and 4200 PGs. This has historical
reasons: the same cluster once had 100 spinning drives, and in those
ancient times it was not possible to shrink the number of PGs in an
existing cluster.

Do you think reducing the number of PGs is a good idea for snaptrim
performance?

I also want to add more NVMes. Is it better to first add the new NVMes
and then reduce the PGs, or to first reduce the number of PGs and then
add the NVMes?

On Wed, Nov 10, 2021 at 04:04:52PM -0800, Anthony D'Atri wrote:
>
> > How many OSDs do you have on one NVMe drive?
> > We increased from 2/NVMe to 4/NVMe and it improved the snap-trimming
> > quite a lot.
>
> Interesting. Most analyses I've seen report diminishing returns with
> more than two OSDs per device.
>
> There are definitely serialization bottlenecks in the PG and OSD code,
> so I'm curious about the number and size of the NVMe devices you're
> using, and especially their PG ratio. Not lowballing the PGs per OSD
> can have a similar effect with less impact on CPU and RAM. YMMV.
>
> > I guess the utilisation of the NVMes when you snaptrim is not 100%.
>
> Take the iostat %util field with a grain of salt, like the load
> average. Both are traditional metrics whose meanings have diffused as
> systems have evolved over the years.
>
> — aad

--
Hard times create strong men. Strong men create good times.
Good times create weak men. And weak men create hard times.

Christoph Adomeit
GATWORKS GmbH
Metzenweg 78
41068 Moenchengladbach

Sitz: Moenchengladbach
Amtsgericht Moenchengladbach, HRB 6303
Geschaeftsfuehrer: Christoph Adomeit, Hans Wilhelm Terstappen

Christoph.Adomeit@xxxxxxxxxxx
Internetloesungen vom Feinsten
Fon. +49 2161 68464-32
Fax. +49 2161 68464-10
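
For reference: since Nautilus, pg_num can be decreased and the mons
merge PGs gradually in the background. A minimal sketch of what that
looks like; the pool name "rbd" and the target of 1024 PGs below are
placeholders, not values from this thread:

    # Keep the autoscaler from fighting the manual change
    # ("rbd" and 1024 are placeholder values, adjust for your pool).
    ceph osd pool set rbd pg_autoscale_mode off

    # Note the current value, then set the lower target; the cluster
    # merges PGs gradually rather than all at once.
    ceph osd pool get rbd pg_num
    ceph osd pool set rbd pg_num 1024

    # Watch pg_num step down toward the target as merges complete.
    ceph -s

Merging, like splitting, moves data, so it will compete with the
backfill caused by any OSDs added at the same time.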
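
The PG ratio Anthony asks about can be read directly from the cluster.
As a rough sanity check: if those 4200 PGs are replicated three ways
across 15 OSDs (the size of 3 is an assumption), that is
4200 x 3 / 15 = 840 PG replicas per OSD, well above the commonly cited
target of roughly 100 per OSD.

    # The PGS column shows per-OSD placement-group counts.
    ceph osd df

    # Per-pool pg_num alongside what the autoscaler would recommend.
    ceph osd pool autoscale-status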