Re: Merge DB/WAL back to the main device?

Hi,

> while in fact the db+wal LV/PV/VG is on a _partition_ of that device, nvme1n1p4.

The OSD metadata ("devices") only contains the main device, not the partition.
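
If you want to double-check which partition actually backs the DB/WAL VG, LVM can show it directly (the VG name here is just the one from your listing below, adjust as needed):

osd# pvs -o pv_name,vg_name | grep ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333
osd# lvs -o lv_name,vg_name,devices ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333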

> which worked. Can ceph-volume zap be used for this as well?

Usually I use 'ceph-volume lvm zap --destroy ...'; have you tried it with the 'lvm' subcommand as well?
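
Something like this should work (untested against your exact layout, the VG/LV names are taken from your listing below):

osd# ceph-volume lvm zap --destroy ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333/osd-db-ce2a1653-3cf3-4eac-b74c-3d49bb4ef170

or, to wipe the whole partition including its PV/VG:

osd# ceph-volume lvm zap --destroy /dev/nvme1n1p4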

I decided to write a short blog post [0] describing both directions: migrating DB/WAL to a new device and migrating it back to the main device.

[0] https://heiterbiswolkig.blogs.nde.ag/2025/02/05/cephadm-migrate-db-wal-to-new-device/
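
For completeness, the opposite direction (giving an OSD a dedicated DB device) looks roughly like this. This is only a sketch: the target VG/LV (ceph-db-vg/osd-db-new) is a placeholder you would have to create on the new device first, and with a cephadm-managed OSD the ceph-volume call has to run inside 'cephadm shell --name osd.8':

osd# systemctl stop ceph-osd@8.service
osd# ceph-volume lvm new-db --osd-id 8 --osd-fsid d38e32da-fc81-4744-ad3a-750ba2cf3bc6 --target ceph-db-vg/osd-db-new
osd# systemctl start ceph-osd@8.service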

Quoting Jan Kasprzak <kas@xxxxxxxxxx>:

Holger, Eugen,

thanks! I tried the ceph-volume approach, and it worked.
The only strange thing was that "ceph osd metadata $ID | grep devices"
reports

    "bluefs_db_devices": "nvme1n1",

while in fact the db+wal LV/PV/VG is on a _partition_ of that device, nvme1n1p4.

Another problem is how to remove the volume group on that device.
I expected "ceph-volume zap /dev/nvme1n1p4" to be able to do it,
but it failed with:

stderr: wipefs: error: /dev/nvme1n1p4: probing initialization failed: Device or resource busy --> failed to wipefs device, will try again to workaround probable race condition

In the end I did

osd# vgchange -a n ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333
osd# vgremove ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333

which worked. Can ceph-volume zap be used for this as well?

FWIW, all the commands I did:

mon# ceph osd metadata 8 | grep devices
    "bluefs_db_devices": "nvme1n1",
    "bluestore_bdev_devices": "sda",
    "devices": "nvme1n1,sda",
    "objectstore_numa_unknown_devices": "sda",
mon# ceph osd set noout

osd# ls -l /var/lib/ceph/osd/ceph-8
lrwxrwxrwx. 1 ceph ceph 93 Feb 3 22:11 block -> /dev/ceph-179dc46e-f620-4b54-b663-24e5be779b3b/osd-block-d38e32da-fc81-4744-ad3a-750ba2cf3bc6
lrwxrwxrwx. 1 ceph ceph 90 Feb 3 22:11 block.db -> /dev/ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333/osd-db-ce2a1653-3cf3-4eac-b74c-3d49bb4ef170
osd# systemctl stop ceph-osd@8.service
osd# ceph-volume lvm migrate --osd-id 8 --osd-fsid d38e32da-fc81-4744-ad3a-750ba2cf3bc6 --from db wal --target ceph-179dc46e-f620-4b54-b663-24e5be779b3b/osd-block-d38e32da-fc81-4744-ad3a-750ba2cf3bc6
osd# vgchange -a n ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333
osd# vgremove ceph-1cbc1f4d-9043-4a10-844b-c7fe28b4a333
osd# systemctl start ceph-osd@8.service

mon# ceph osd unset noout
mon# watch ceph -s # till it reports HEALTH_OK


Thanks,

-Yenya

--
| Jan "Yenya" Kasprzak <kas at {fi.muni.cz - work | yenya.net - private}> |
| https://www.fi.muni.cz/~kas/                        GPG: 4096R/A45477D5 |
    We all agree on the necessity of compromise. We just can't agree on
    when it's necessary to compromise.                     --Larry Wall


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


