On 11/23/22 19:49, Marc wrote:
We would like to share our experience upgrading one of our clusters from
Nautilus (14.2.22-1bionic) to Pacific (16.2.10-1bionic) a few weeks ago.
To start with, we had to convert our monitors' databases to RocksDB.
Weirdly, I still have just one monitor db on LevelDB. Is it still recommended to remove and re-add the monitor, or can it be converted in place?
cat /var/lib/ceph/mon/ceph-b/kv_backend
Yes. We didn't find any mention that the monitor databases should be on
RocksDB for Pacific, but in practice, after we upgraded the monitors
they would sit idle with steadily increasing memory usage until they
were OOM-killed.
To migrate the databases we simply recreated the monitors, removing
them and adding them back again. It proved to be the easiest way and it
worked smoothly. Before that, I believe we tried modifying the
kv_backend file directly, but that was not successful.
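For what it's worth, the steps were roughly the usual remove/re-add monitor procedure. This is only a sketch with an example mon id of "b" (adapt ids, addresses, and paths, and follow the monitor add/remove docs for your release); new monitors are created with a RocksDB store by default on Nautilus and later:

  # on the node running mon.b
  systemctl stop ceph-mon@b
  ceph mon remove b
  mv /var/lib/ceph/mon/ceph-b /var/lib/ceph/mon/ceph-b.old   # keep the old LevelDB store until the new mon is healthy

  # rebuild the mon with a fresh (RocksDB) store
  ceph mon getmap -o /tmp/monmap
  ceph auth get mon. -o /tmp/mon.keyring
  ceph-mon -i b --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
  chown -R ceph:ceph /var/lib/ceph/mon/ceph-b
  systemctl start ceph-mon@b                 # the mon should rejoin using the address from ceph.conf
  cat /var/lib/ceph/mon/ceph-b/kv_backend    # should now print rocksdb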
We also ran into big performance issues with snaptrims: the I/O of the
cluster was nearly stalled when our regular snaptrim tasks ran. IcePic
pointed us to try compacting the OSDs, and that solved it for us.
How did you do this? Can it be done upfront, or should it be done after the upgrade?
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-15 compact
I tried getting the status, but it failed because the OSD was running. Should I prepare for stopping/starting all the OSD daemons to do this compacting?
We compacted the OSDs after the upgrade, and we did it live, so there
was no need to stop the OSDs:
ceph daemon osd.0 compact
This task increases the CPU usage of the OSD at first. It can last a
while, on average around 10 minutes in our case.
We compacted several OSDs in parallel, but not all OSDs at once as a
precaution. For us it didn't seem to have any impact on cluster
performance.
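For reference, a rough sketch of how the live compaction could be scripted on one OSD host, a few OSDs at a time. The batch size of 4 and the default admin socket paths under /var/run/ceph are assumptions, so adjust for your setup; run it as a user with access to the admin sockets:

  #!/bin/bash
  # Compact every OSD hosted on this node via its admin socket,
  # a few at a time so the extra CPU load stays bounded.
  BATCH=4   # assumed batch size; tune for your hardware
  for id in $(ls /var/run/ceph/ | sed -n 's/^ceph-osd\.\([0-9]\+\)\.asok$/\1/p'); do
      echo "compacting osd.$id"
      ceph daemon "osd.$id" compact &
      while [ "$(jobs -rp | wc -l)" -ge "$BATCH" ]; do
          wait -n   # wait for one running compaction to finish
      done
  done
  wait   # let the last batch finish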