Hi Anthony

> Did you add a bunch of data since then, or change the Ceph release? Do
> you have bluefs_buffered_io set to false?

We did not change the Ceph release in the meantime. It is very well possible that the delays were simply not noticed during our previous maintenances. bluefs_buffered_io is set to false (the default setting in 14.2.11). I posted a question about this setting some time ago without any response. Perhaps you are able to answer it:

- If bluefs_buffered_io is set to false, does that mean that all Ceph buffering is done in the OSD processes? Or is the Linux buffer cache still used somewhere? If the Linux buffer cache is still used, what would be your advice on setting osd_memory_target vs leaving space for Linux buffers?

> PGs block while peering, so it pays to spread out the peering load.
>
> Scrubs vary as a function of a number of things. Remember that shallow
> scrubs are cheap and frequent, so if you have downtime they'll need to
> catch up when they come back. Especially if you also limit the times of
> day when scrubs can run (which is usually a bad idea). Scrubs are not
> themselves part of peering.

Thank you, we will keep that in mind for the next maintenance.

> Are you using EC? What networking technology?

We are not using EC. The network for the OSDs consists of 2 x 10 Gbps links in a bond. There is no separate cluster network. So far it looks like the network is nowhere near its maximum.

Kind Regards

Marcel
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
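
For reference, the settings discussed above can be inspected and changed at runtime. This is only a minimal sketch, assuming a Nautilus (14.2.x) cluster; the daemon name osd.0 and the 4 GiB target are illustrative examples, not values from this cluster:

    # Check the current value of bluefs_buffered_io (osd.0 is just an example daemon)
    ceph daemon osd.0 config get bluefs_buffered_io
    ceph config get osd bluefs_buffered_io

    # Inspect the current per-OSD memory target
    ceph daemon osd.0 config get osd_memory_target

    # Adjust the target for all OSDs (4294967296 = 4 GiB, an illustrative value only)
    ceph config set osd osd_memory_target 4294967296

Whatever value is chosen for osd_memory_target, it only bounds the OSD process itself, so any room intended for the kernel's own caching has to be left out of that figure.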