Hi Mark,
most probably you're seeing the per-pool omap statistics update, which is
performed during the first start of an upgraded OSD. One can disable this
behavior by setting the 'bluestore_fsck_quick_fix_on_mount' flag to false,
but please expect an incomplete OMAP usage report as long as the stats are
not updated.
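As a rough sketch (assuming you prefer to skip the conversion at startup and
do it later in a maintenance window), something along these lines in ceph.conf
on the OSD nodes before restarting the upgraded OSDs:

    [osd]
    # skip the per-pool omap stats conversion during OSD startup
    bluestore_fsck_quick_fix_on_mount = false

The conversion can then be performed offline per OSD, e.g. with
'ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-<id>' (the <id> is
just a placeholder for the OSD in question), at a time of your choosing.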
Thanks,
Igor
On 8/19/2020 1:16 PM, Mark Schouten wrote:
Hi,
Last night I upgraded a Luminous cluster to Nautilus. All went well, but there was one sleep-depriving issue that I would like to prevent from happening next week while upgrading another cluster. Maybe you people can help me figure out what actually happened.
So I upgraded the packages and restarted the mons and mgrs. Then I started restarting the OSDs on one of the nodes. Below are the start and 'start_boot' times; in between, the disks were being read at full speed, I think in their entirety.
2020-08-19 02:08:10.568 7fd742b09c80 0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:09:33.591 7fd742b09c80 1 osd.8 2188 start_boot
2020-08-19 02:08:10.592 7fb453887c80 0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:17:40.878 7fb453887c80 1 osd.5 2188 start_boot
2020-08-19 02:08:10.836 7f907bc0cc80 0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 02:19:58.462 7f907bc0cc80 1 osd.3 2188 start_boot
2020-08-19 02:08:10.584 7f1ca892cc80 0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 03:13:24.179 7f1ca892cc80 1 osd.11 2188 start_boot
2020-08-19 02:08:10.568 7f059f80dc80 0 set uid:gid to 64045:64045 (ceph:ceph)
2020-08-19 04:06:55.342 7f059f80dc80 1 osd.14 2188 start_boot
So, while this is not an issue that breaks anything technically, I would like to know how I can have the OSDs do this 'maintenance' beforehand, so I don't have to wait so long. :)
I do see a warning in the logs: "store not yet converted to per-pool stats". Is that related?
Thanks!
--
Mark Schouten <mark@xxxxxxxx>
Tuxis, Ede, https://www.tuxis.nl
T: +31 318 200208
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx