Oh right, I responded from my mobile phone and missed the examples.
Thanks for the clarification!
The OP did stop the OSD, according to his output:
$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
But there might have been an error anyway, I guess.
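If the node is still at hand, it might be worth checking whether the migrate actually completed cleanly. Something along these lines should show it (a sketch from memory, the log paths and flags may differ slightly on your host):

$ cephadm logs --fsid ${fsid} --name osd.${osdid} | tail -n 50
$ grep -i migrate /var/log/ceph/${fsid}/ceph-volume.log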
Quoting Igor Fedotov <igor.fedotov@xxxxxxxx>:
Hi Eugen,
this scenario is supported, see the last example on the relevant doc page:
Moves BlueFS data from main, DB and WAL devices to main device, WAL
and DB are removed:
ceph-volume lvm migrate --osd-id 1 --osd-fsid <uuid> --from db wal --target vgname/data
Thanks,
Igor
On 11/15/2023 11:20 AM, Eugen Block wrote:
Hi,
AFAIU, you can’t migrate back to the slow device. It’s either
migrating from the slow device to a fast device, or moving between
fast devices. I’m not aware that your scenario was considered in
that tool. The docs don’t specifically say so, but they also
don’t mention going back to the slow device only. Someone please
correct me, but I’d say you’ll have to rebuild that OSD to detach
it from the fast device.
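If it does come down to a rebuild, the usual cephadm route would be roughly this (just a sketch, assuming the OSD is managed by the orchestrator and your service spec will redeploy it once the devices are wiped):

$ ceph orch osd rm ${osdid} --replace
$ ceph orch device zap <host> <device-path> --force   # once draining has finished, for each LV/device backing that OSD

After that the orchestrator should recreate the OSD according to the spec, and you could adjust the spec so the fast device is left out (or added back later).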
Regards,
Eugen
Quoting Chris Dunlop <chris@xxxxxxxxxxxx>:
Hi,
What's the correct way to migrate an OSD wal/db from a fast device
to the (slow) block device?
I have an osd with wal/db on a fast LV device and block on a slow
LV device. I want to move the wal/db onto the block device so I
can reconfigure the fast device before moving the wal/db back to
the fast device.
This link says to use "ceph-volume lvm migrate" (I'm on pacific,
but the quincy and reef docs are the same):
https://docs.ceph.com/en/pacific/ceph-volume/lvm/migrate/
I tried:
$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
$ cephadm shell --fsid ${fsid} --name osd.${osdid} -- \
    ceph-volume lvm migrate --osd-id ${osdid} --osd-fsid ${osd_fsid} \
    --from db wal --target ${block_vglv}
$ systemctl stop ${osd_service}
$ systemctl start ${osd_service}
"cephadm ceph-volume lvm list" now shows only the (slow) block
device whereas before the migrate it was showing both the block
and db devices. However "lsof" shows the new osd process still
has the original fast wal/db device open and "iostat" shows this
device is still getting i/o.
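(For reference, the kind of check I mean is roughly this — the exact pgrep/grep incantation is only an assumption about how the containerised ceph-osd shows up on the host:

$ osd_pid=$(pgrep -af ceph-osd | grep -w "osd.${osdid}" | awk '{print $1}')
$ ls -l /proc/${osd_pid}/fd | grep /dev/
)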
Also:
$ ls -l /var/lib/ceph/${fsid}/osd.${osdid}/block*
...shows both the "block" and "block.db" symlinks to the original
separate devices.
And there are now no lv_tags on the original wal/db LV:
$ lvs -o lv_tags ${original_db_vg_lv}
Now I'm concerned there's a device mismatch for this osd: "cephadm
ceph-volume lvm list" believes there's no separate wal/db, but the
osd is currently *using* the original separate wal/db.
I guess if the server were to restart, this osd would be in all
sorts of trouble.
What's going on there, and what can be done to fix it? Is it a
matter of recreating the tags on the original db device? (But
then what happens to whatever did get migrated to the block device
- e.g. is that space lost?)
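(If it is the tags, I assume it would boil down to putting the ceph.* lv_tags back with plain LVM commands, using a still-intact OSD as the reference for which tags to set — purely illustrative, with ${some_other_db_lv} standing in for any healthy db LV:

$ lvs -o lv_tags ${some_other_db_lv}   # note the ceph.* tags a working db LV carries
$ lvchange --addtag ceph.osd_id=${osdid} ${original_db_vg_lv}
$ lvchange --addtag ceph.type=db ${original_db_vg_lv}
...and so on for the remaining ceph.* tags.)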
Or is it a matter of using ceph-bluestore-tool to do a
bluefs-bdev-migrate, e.g. something like:
$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
$ osddir=/var/lib/ceph/osd/ceph-${osdid}
$ cephadm shell --fsid ${fsid} --name osd.${osdid} -- \
    ceph-bluestore-tool --path ${osddir} --devs-source ${osddir}/block.db \
    --dev-target ${osddir}/block bluefs-bdev-migrate
$ rm /var/lib/ceph/${fsid}/osd.${osdid}/block.db
$ systemctl stop ${osd_service}
$ systemctl start ${osd_service}
Or... something else?
And how *should* moving the wal/db be done?
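Either way, I'd have thought something like "ceph-bluestore-tool show-label" (with the OSD stopped) should confirm which devices the OSD itself records — a sketch:

$ cephadm shell --fsid ${fsid} --name osd.${osdid} -- \
    ceph-bluestore-tool show-label --path ${osddir}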
Cheers,
Chris
--
Igor Fedotov
Ceph Lead Developer
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx