Hi,

I found the reason for this behavior change. With 14.2.10 the default value of "bluefs_buffered_io" was changed from true to false.
https://tracker.ceph.com/issues/44818

Setting this back to true seems to solve my problem.

Regards
Manuel

On Wed, 5 Aug 2020 13:30:45 +0200
Manuel Lausch <manuel.lausch@xxxxxxxx> wrote:

> Hello Vladimir,
>
> I just tested this on a single-node test cluster with 60 HDDs (3 of
> them with bluestore, without a separate WAL and DB).
>
> With 14.2.10, I see a lot of read IOPS on the bluestore OSDs while
> snaptrimming. With 14.2.9 this was not an issue.
>
> I wonder if this would explain the huge number of slow ops on my big
> test cluster (44 nodes, 1056 OSDs) while snaptrimming. I cannot test
> a downgrade there, because there are no packages of older releases
> available for CentOS 8.
>
> Regards
> Manuel
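
[Editor's note: for anyone wanting to try the same workaround, below is a minimal sketch of how the override could be applied on a Nautilus (14.2.x) cluster. It assumes the standard ceph CLI and the centralized config store; whether the option takes effect at runtime or only after an OSD restart may depend on the exact release, so restarting the OSDs is the safe assumption.]

# Option 1: set it cluster-wide for all OSDs via the monitor config store
ceph config set osd bluefs_buffered_io true

# Option 2: set it in ceph.conf on each OSD host, then restart the OSDs
[osd]
bluefs_buffered_io = true

# Verify the value actually in use on a given OSD (osd.0 is just an
# example ID; run this on the host where that OSD's admin socket lives)
ceph daemon osd.0 config get bluefs_buffered_io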