Hi Igor,
On Wed, Nov 15, 2023 at 12:30:57PM +0300, Igor Fedotov wrote:
> Hi Chris,
>
> I haven't checked your actions thoroughly, but migration is to be done
> on a down OSD, which is apparently not the case here.
>
> Maybe that's the culprit and we/you somehow missed the relevant error
> during the migration process?
The migration was done with the container still running, but the osd
process was stopped within the container, like so:
$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
I've confirmed that command indeed stops the ceph-osd process.
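(For what it's worth, I checked that by just eyeballing the process list
on the host, nothing cephadm-specific:)

$ # list any running ceph-osd processes with their full command lines;
$ # after the stop, osd.${osdid} should no longer appear
$ pgrep -af ceph-osd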
I restored the tags on both the db and block LVs (the db LV had all its
tags removed, and the block LV had the db_device and db_uuid tags removed
during the previous "lvm migrate" attempt) and confirmed that
"ceph-volume lvm list" then returned the same output as before that
attempt.
(I'm pretty sure "ceph-volume lvm list" just reads the tags directly from
the LVs and presents them in a formatted output.)
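In case it's useful to anyone following along: restoring the tags was
just "lvchange --addtag" on each LV, along these lines (the VG/LV names
and the UUID are placeholders for the real values I had recorded before
the first attempt; ceph-volume stores these as "ceph."-prefixed LV tags):

$ # re-add the db pointer tags that "lvm migrate" stripped from the block LV
$ lvchange --addtag "ceph.db_device=/dev/ceph-db-vg/db-lv" \
    --addtag "ceph.db_uuid=PLACEHOLDER-UUID" ceph-block-vg/block-lv
$ # confirm the tag state afterwards
$ lvs -o lv_name,lv_tags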
I then tried the migrate again, this time stopping the container before
the migrate:
$ systemctl stop "${osd_service}"
$ cephadm shell --fsid "${fsid}" --name "osd.${osd}" -- \
ceph-volume lvm migrate --osd-id "${osd}" --osd-fsid "${osd_fsid}" \
--from db wal --target "${vg_lv}"
$ systemctl start "${osd_service}"
Unfortunately that had precisely the same result:
- "lsof" shows the new osd process still has the original fast wal/db
device open
- "iostat" shows this device is still getting i/o
- both "ceph-volume lvm list" and "lvs -o tag" show all the tags have been
removed from the db device, and the db_device and db_uuid tags have been
removed from the block device.
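Concretely, the checks behind those observations were along these lines
(the LV path is a placeholder for my actual VG/LV name):

$ # is the fast wal/db LV still held open, and by which process?
$ lsof /dev/ceph-db-vg/db-lv
$ # tag state on both LVs, matching what "ceph-volume lvm list" shows
$ lvs -o lv_name,lv_tags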
Notably, whilst the "lvm migrate" is running, "iostat" on the db device
shows very high read activity (and no write activity), so it's certainly
reading whatever is on there, presumably to copy the data to the block
device.
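(I was watching that with plain iostat in a second terminal, roughly as
below; dm-X is a placeholder for the db LV's device-mapper name as shown
by lsblk:)

$ # extended per-device stats every 2 seconds for the db LV
$ iostat -x 2 dm-X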
However, even after the migrate, something is making the osd start up
with the original db device rather than using the block device for the db.
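In case it points anyone at the culprit, my next step is to check what
the daemon itself thinks it's using; assuming the usual cephadm layout,
the osd's host-side data dir has block/block.db symlinks, and "ceph osd
metadata" reports the bluefs devices:

$ # symlinks the daemon resolves at startup
$ ls -l /var/lib/ceph/${fsid}/osd.${osdid}/block*
$ # what the cluster records for this osd's bluefs devices
$ ceph osd metadata ${osdid} | grep -i bluefs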
Any ideas?
Cheers,
Chris