Hi Wido,

When you finish updating all OSDs in a cluster to luminous, the last step:

  ceph osd require-osd-release luminous

actually sets the recovery_deletes flag. All our luminous clusters have
this enabled:

  # ceph osd dump | grep recovery
  flags sortbitwise,recovery_deletes

And that super secret Red Hat link explains that recovery_deletes allows
deletes to take place during recovery instead of at peering time, which
was previously the case.

-- Dan

On Tue, Feb 20, 2018 at 2:50 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
> Hi,
>
> I was diffing the OSDMap between a Jewel and a Luminous cluster and
> found the 'recovery_deletes' flag in the OSDMap.
>
> Searching the internet I couldn't find much about this flag, except for
> this Red Hat URL which is subscription-only:
> https://access.redhat.com/solutions/3200572
>
> I looked through the source code and found a few comments:
>
> bool recovery_deletes = false; ///< whether the deletes are performed
> during recovery instead of peering
>
> #define CEPH_OSDMAP_RECOVERY_DELETES (1<<19) /* deletes performed during
> recovery instead of peering */
>
> Setting the flag is easy:
>
> $ ceph osd set recovery_deletes
>
> This means that deletes are performed during recovery and not during
> peering, but what should we expect from this flag? Will it improve the
> peering process and reduce the amount of blocked/slow I/O after an OSD
> boot?
>
> Wido
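
P.S. For anyone curious how that bit is actually consumed: below is a
minimal sketch (plain C++, not actual Ceph code) of the flag test
implied by the #define Wido quoted. The "flags" variable here is a
hypothetical stand-in for the flags word of a decoded OSDMap; on a real
cluster you would read it from the map itself, not hard-code it.

  #include <cstdint>
  #include <iostream>

  // Value copied from the #define quoted above: bit 19 of the
  // OSDMap flags word.
  static const uint32_t CEPH_OSDMAP_RECOVERY_DELETES = 1u << 19;

  int main() {
      // Hypothetical flags word with the recovery_deletes bit set,
      // standing in for the flags of a decoded OSDMap.
      uint32_t flags = CEPH_OSDMAP_RECOVERY_DELETES;

      if (flags & CEPH_OSDMAP_RECOVERY_DELETES)
          std::cout << "deletes are performed during recovery\n";
      else
          std::cout << "deletes are performed at peering time\n";
  }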