Re: cephadm: Move DB/WAL from HDD to SSD

I'm not sure why it fails, but it seems like you deviated a bit from the instructions. If you want to migrate the DB to a new device, you need to specify an existing VG and LV; they are not created for you in this case. I'm also not sure why you run 'ceph-volume lvm activate --all --no-systemd'; that's not necessary. So I'll try to provide a complete list of steps; hopefully it works for you as it does for me:

1. soc9-ceph:~ # vgcreate ceph-db /dev/vdf
2. soc9-ceph:~ # lvcreate -L 5G -n ceph-osd0-db ceph-db (mind the LV size, just a test cluster here)
3. soc9-ceph:~ # ceph orch daemon stop osd.0
4. soc9-ceph:~ # cephadm shell --name osd.0
5. [ceph: root@soc9-ceph /]# ceph-volume lvm new-db --osd-id 0 --osd-fsid fb69ba54-4d56-4c90-a855-6b350d186df5 --target ceph-db/ceph-osd0-db
6. [ceph: root@soc9-ceph /]# ceph-volume lvm migrate --osd-id 0 --osd-fsid fb69ba54-4d56-4c90-a855-6b350d186df5 --from /var/lib/ceph/osd/ceph-0/block --target ceph-db/ceph-osd0-db
7. Exit shell
8. soc9-ceph:~ # ceph orch daemon start osd.0
9. Verify db config: soc9-ceph:~ # ceph tell osd.0 perf dump bluefs | jq -r '.[].db_total_bytes,.[].db_used_bytes'
5368700928
47185920

So as you see, the OSD has picked up the new db device and uses 47 MB (it's an empty test cluster). Also note that this is a single-node cluster, so the orchestrator commands and shell commands are all executed on the same host.
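One more note on steps 5 and 6: the value passed to --osd-fsid is the fsid of that individual OSD, not the cluster fsid. If you're unsure of the value, you can look it up from inside the OSD's container, e.g. by reading the fsid file that sits next to the block symlink used in step 6, or via 'ceph-volume lvm list', which also shows which LVs the OSD currently uses (a quick sketch for osd.0 on my test cluster, your IDs will differ):

[ceph: root@soc9-ceph /]# cat /var/lib/ceph/osd/ceph-0/fsid
fb69ba54-4d56-4c90-a855-6b350d186df5
[ceph: root@soc9-ceph /]# ceph-volume lvm list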

Let me know how it goes.


Zitat von Alan Murrell <Alan@xxxxxxxx>:

Ok, just gave it a try and I am still running into an error. Here is exactly what I did:

I logged on to my host where osd.10 is.

Deleted my current VG and LVs on my NVMe that will hold the WAL/DB, as I kind of liked what you used. My VG is called "cephdb03" and my LVs are called "ceph-osd-dbX", where "X" is 1 through 4.

Ran the command to stop osd.10 service:

systemctl stop ceph-474264fe-b00e-11ee-b586-ac1f6b0ff21a@osd.10

Connected to the general cephadm shell and ran:

ceph-volume lvm activate --all --no-systemd

Exited the general shell and entered the container for OSD 10:

cephadm shell --name osd.10

Ran the ceph-volume command to create the new DB on cephdb03/ceph-osd-db1:

ceph-volume lvm new-db --osd-id 10 --osd-fsid 474264fe-b00e-11ee-b586-ac1f6b0ff21a --target cephdb03/ceph-osd-db1

Got the following error:

--> Unable to find any LV for source OSD: id:10 fsid:474264fe-b00e-11ee-b586-ac1f6b0ff21a
Unexpected error, terminating



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


