Re: migrate wal/db to block device

Hi Chris,

I haven't checked your actions thoroughly, but migration is to be done on a down OSD, which is apparently not the case here.

Maybe that's the culprit and the relevant error was somehow missed during the migration process?
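
For example, something like this just before invoking migrate (a rough sketch, assuming cephadm's usual ceph-${fsid}@osd.${osdid} systemd unit naming and the variables from your commands below):

$ systemctl is-active ceph-${fsid}@osd.${osdid}.service   # expect "inactive"
$ pgrep -af "ceph-osd.*--id ${osdid}"                     # expect no output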


Thanks,

Igor

On 11/15/2023 5:33 AM, Chris Dunlop wrote:
Hi,

What's the correct way to migrate an OSD wal/db from a fast device to the (slow) block device?

I have an osd with wal/db on a fast LV device and block on a slow LV device. I want to move the wal/db onto the block device so I can reconfigure the fast device before moving the wal/db back to the fast device.

This link says to use "ceph-volume lvm migrate" (I'm on pacific, but the quincy and reef docs are the same):

https://docs.ceph.com/en/pacific/ceph-volume/lvm/migrate/

I tried:

$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
$ cephadm shell --fsid ${fsid} --name osd.${osdid} -- \
  ceph-volume lvm migrate --osd-id ${osdid} --osd-fsid ${osd_fsid} \
  --from db wal --target ${block_vglv}
$ systemctl stop ${osd_service}
$ systemctl start ${osd_service}

"cephadm ceph-volume lvm list" now shows only the (slow) block device whereas before the migrate it was showing both the block and db devices.  However "lsof" shows the new osd process still has the original fast wal/db device open and "iostat" shows this device is still getting i/o.

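(The checks were along these lines - a sketch only, the pgrep pattern being a rough match on the containerized ceph-osd command line:)

$ osd_pid=$(pgrep -f "ceph-osd.*--id ${osdid}")
$ ls -l /proc/${osd_pid}/fd | grep dm-   # the fast wal/db LV's dm device still shows up
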
Also:

$ ls -l /var/lib/ceph/${fsid}/osd.${osdid}/block*

...shows both the "block" and "block.db" symlinks to the original separate devices.
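
(And resolving them, e.g. with "readlink -f", still gives the original underlying devices:)

$ readlink -f /var/lib/ceph/${fsid}/osd.${osdid}/block.db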

And there are now no lv_tags on the original wal/db LV:

$ lvs -o lv_tags ${original_db_vg_lv}
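
...whereas the block LV still carries its tags (a sketch of the comparison, assuming ceph-volume's usual comma-separated ceph.* lv_tags):

$ lvs -o lv_tags ${block_vglv} | tr ',' '\n'   # e.g. ceph.osd_id=..., ceph.type=block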

Now I'm concerned there's a device mismatch for this osd: "cephadm ceph-volume lvm list" believes there's no separate wal/db, but the osd is currently *using* the original separate wal/db.

I guess if the server were to restart, this osd would be in all sorts of trouble.

What's going on there, and what can be done to fix it?  Is it a matter of recreating the tags on the original db device?  (But then what happens to whatever did get migrated to the block device - e.g. is that space lost?) Or is it a matter of using ceph-bluestore-tool to do a bluefs-bdev-migrate, e.g. something like:

$ cephadm unit --fsid ${fsid} --name osd.${osdid} stop
$ osddir=/var/lib/ceph/osd/ceph-${osdid}
$ cephadm shell --fsid ${fsid} --name osd.${osdid} -- \
  ceph-bluestore-tool --path ${osddir} --devs-source ${osddir}/block.db \
  --dev-target ${osddir}/block bluefs-bdev-migrate
$ rm /var/lib/ceph/${fsid}/osd.${osdid}/block.db
$ systemctl stop ${osd_service}
$ systemctl start ${osd_service}

Or... something else?


And how *should* moving the wal/db be done?

Cheers,

Chris

--
Igor Fedotov
Ceph Lead Developer

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Martin Verges - VAT-ID: DE310638492
Com. register: Amtsgericht Munich HRB 231263
Web: https://croit.io | YouTube: https://goo.gl/PGE1Bx
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
