On 10/6/2022 3:16 PM, Stefan Kooman wrote:
On 10/6/22 13:41, Frank Schilder wrote:
Hi Stefan,
thanks for looking at this. The conversion has happened on one host
only. Status is:
- all daemons on all hosts upgraded
- all OSDs on one OSD host were restarted with
bluestore_fsck_quick_fix_on_mount = true in its local ceph.conf; these
OSDs completed the conversion and came back up. I would assume that the
freshly created OMAPs are compacted by default?
As far as I know, it's not.
According to https://tracker.ceph.com/issues/51711, compaction is applied
after the OMAP upgrade starting with v15.2.14.
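For reference, a minimal sketch of the host-local ceph.conf entry that
triggers this quick-fix conversion on the next OSD start could look like
this (placing it in the [osd] section is an assumption here; it can also
be scoped to individual OSD ids):

    [osd]
    bluestore_fsck_quick_fix_on_mount = true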
- unfortunately, the converted SSD OSDs on this host died
- now SSD OSDs on other (unconverted) hosts have also started crashing
randomly and very badly (not possible to restart due to stuck D-state
processes)
Does compaction even work properly on upgraded but unconverted OSDs?
Yes, compaction is available irrespective of the data format the OSD
uses to keep data in the DB. Hence both converted and unconverted OSDs
can benefit from it.
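For example, compaction could be triggered like this (the OSD id 0 and
the data path are placeholders, adjust to your setup):

    # online, via the admin socket on the host running the OSD
    ceph daemon osd.0 compact

    # offline, while the OSD is stopped
    ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 compact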
We have done several measurements based on production data (clones of
data disks from prod.), in this case for the conversion from Octopus to
Pacific (and the resharding as well). We would save half the time by
compacting the OSDs beforehand. In our case a conversion would take many
hours, so it pays off immensely. So yes, you can do this. I am not sure
if I have tested this on the Octopus conversion, but as the conversion
to Pacific involves a similar process, it is safe to assume it will be
the same.
Gr. Stefan
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx