On 02/20/2018 03:05 PM, Dan van der Ster wrote:
Hi Wido,
When you finish updating all OSDs in a cluster to luminous, the last step:
ceph osd require-osd-release luminous
actually sets the recovery_deletes flag.
All our luminous clusters have this enabled:
# ceph osd dump | grep recovery
flags sortbitwise,recovery_deletes
Yes, I noticed.
And that super secret Red Hat link explains that recovery_deletes
allows deletes to take place during recovery instead of at peering
time, as was previously the case.
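Roughly, I picture the difference like the sketch below. It is only an
illustrative model, not the real Ceph code paths, and every name in it
is made up:

// Illustrative model only; contrasts where the delete work happens.
// Compile with C++14 or later.
#include <iostream>
#include <queue>
#include <string>
#include <vector>

struct PG {
  std::vector<std::string> pending_deletes;  // objects already gone on the primary
  std::queue<std::string>  recovery_queue;   // background recovery work
  bool active = false;                       // active == serving client I/O
};

// Pre-luminous behaviour: deletes are applied while peering, so the PG
// only goes active (and unblocks clients) after every delete is done.
void peer_deletes_at_peering(PG& pg) {
  for (const auto& obj : pg.pending_deletes)
    std::cout << "peering: deleting " << obj << " (clients blocked)\n";
  pg.pending_deletes.clear();
  pg.active = true;
}

// With recovery_deletes: peering only records the deletes, the PG goes
// active immediately, and the deletes drain from the recovery queue later.
void peer_deletes_in_recovery(PG& pg) {
  for (const auto& obj : pg.pending_deletes)
    pg.recovery_queue.push(obj);
  pg.pending_deletes.clear();
  pg.active = true;                          // clients unblocked right away
  while (!pg.recovery_queue.empty()) {       // would run asynchronously
    std::cout << "recovery: deleting " << pg.recovery_queue.front() << "\n";
    pg.recovery_queue.pop();
  }
}

int main() {
  PG a{{"obj1", "obj2"}}, b{{"obj1", "obj2"}};
  peer_deletes_at_peering(a);
  peer_deletes_in_recovery(b);
}

If that model is right, the win is that client I/O resumes as soon as the
PG activates, and a backlog of deletes becomes ordinary recovery work.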
OK! The source code told me that as well, but can somebody tell me the
exact benefit of this?
Does it improve or smooth out the peering process?
I've heard rumors that it makes peering block less, but I'm not sure. I'd
like to hear facts or experiences :)
Wido
-- Dan
On Tue, Feb 20, 2018 at 2:50 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
Hi,
I was diffing the OSDMap of a Jewel cluster against that of a Luminous
cluster and found the 'recovery_deletes' flag.
Searching the internet, I couldn't find much about this flag, except for
this Red Hat URL, which is subscription-only:
https://access.redhat.com/solutions/3200572
I looked through the source code and found a few comments:
bool recovery_deletes = false;  ///< whether the deletes are performed during recovery instead of peering

#define CEPH_OSDMAP_RECOVERY_DELETES (1<<19)  /* deletes performed during recovery instead of peering */
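For what it's worth, the flag itself is just one bit in the OSDMap's
flags word. A minimal, self-contained sketch (only the #define matches
the real source; FakeOSDMap and main() are mine):

// Sketch of an OSDMap-style flags word; not a verbatim Ceph excerpt.
#include <cstdint>
#include <iostream>

#define CEPH_OSDMAP_RECOVERY_DELETES (1<<19)

struct FakeOSDMap {
  uint32_t flags = 0;
  void set_flag(uint32_t f)        { flags |= f; }  // what 'ceph osd set' toggles
  bool test_flag(uint32_t f) const { return flags & f; }
};

int main() {
  FakeOSDMap m;
  m.set_flag(CEPH_OSDMAP_RECOVERY_DELETES);
  std::cout << "recovery_deletes set: "
            << m.test_flag(CEPH_OSDMAP_RECOVERY_DELETES) << "\n";
}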
Setting the flag is easy:
$ ceph osd set recovery_deletes
This means that deletes are performed during recovery and not during
peering, but what is to be expected from this flag? Will it improve the
peering process and reduce the amount of blocked/slow I/O after an OSD
boots?
Wido