I just tried it with 18.2.4:
# ceph-volume lvm new-db --osd-id 0 --osd-fsid fb69ba54-4d56-4c90-a855-6b350d186df5 --target ceph-db/ceph-osd0-db
--> Making new volume at /dev/ceph-db/ceph-osd0-db for OSD: 0 (/var/lib/ceph/osd/ceph-0)
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block.db
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-10
--> New volume attached.
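(If you want to double-check the result afterwards: the block.db symlink that the chown lines above refer to should now point at the new LV. From the same shell, something like

ls -l /var/lib/ceph/osd/ceph-0/block.db
ceph-volume lvm list

should show it, with 'ceph-volume lvm list' reporting a separate [db] device for that OSD.)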
You don't explicitly mention it, but did you just run 'cephadm shell'
or 'cephadm shell --name osd.10'? I'd recommend the latter.
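To spell out what I mean, the difference is roughly:

cephadm shell                  (generic container, cluster config only)
cephadm shell --name osd.10    (container prepared for osd.10 specifically)

As far as I know, the '--name' variant maps in that OSD's config, keyring and data directory, which is what ceph-volume needs for new-db; that difference might be why the target LV is not found from the generic shell.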
Quoting Alan Murrell <Alan@xxxxxxxx>:
Hello,
I posted this on the Reddit sub, but it was suggested I might get
better responses here.
I am running a 5-node Ceph cluster (v18.2.2) installed using "cephadm".
I am trying to migrate the DB/WAL for the OSDs on our slower HDDs to NVMe; I am
following this article:
https://docs.clyso.com/blog/ceph-volume-create-wal-db-on-separate-device-for-existing-osd/
I have a 1 TB NVMe in each node, and there are four HDDs. I have
created the VG ("cephdbX", where "X" is the node number) and four
equal-sized LVs ("cephdb1", "cephdb2", "cephdb3", "cephdb4").
On the node where I am moving the DB/WAL first, I have stopped the
systemd OSD service for the OSD I am starting with.
I have switched into the cephadm shell so I can run the ceph-volume
commands, but when I run:
ceph-volume lvm new-db --osd-id 10 --osd-fsid 474264fe-b00e-11ee-b586-ac1f6b0ff21a --target cephdb03/cephdb1
I get the following error:
--> Target path cephdb03/cephdb1 is not a Logical Volume
Unable to attach new volume : cephdb03/cephdb1
If I run 'lvs' in the cephadm shell, I can see the LVs (sorry about
the formatting; hopefully you can view it in a way that it isn't a
big "jumble"):
LV                                              VG                                         Attr        LSize
osd-block-f85a57a8-e2f5-4bda-bc3b-e99d8b70768b  ceph-341561e6-da91-4678-b6c8-0f0281443945  -wi-ao----  <1.75t
osd-block-f1fd3d53-4ed9-4492-82a0-4686231d57e1  ceph-65ebde73-28ac-4dac-b0cb-4cf8df18bd4b  -wi-ao----  16.37t
osd-block-3571394c-3afa-4177-904a-17550f8e902c  ceph-6c8de2ed-cae3-4dd9-9ea8-49c94b746878  -wi-a-----  16.37t
osd-block-41d44327-3df7-4166-a675-d9630bde4867  ceph-703962c7-6f28-4d8b-b77f-a6eba39da6b2  -wi-ao----  <1.75t
osd-block-438c7681-ee6b-4d29-91f5-d487377c3ac9  ceph-71cc35c4-436d-42b7-a704-b21c2d22b43b  -wi-ao----  16.37t
osd-block-2ebf78e8-1de1-464e-9125-14a8b7e6796f  ceph-7c1fe149-8500-4a41-9052-64f27b2cb70b  -wi-ao----  <1.75t
osd-block-ca347144-eb84-4e9f-bfb5-81d60659f417  ceph-92595dfe-dc70-47c7-bcab-65b26d84448c  -wi-ao----  16.37t
osd-block-2d338a42-83ce-4281-9762-b268e74f83b3  ceph-e9b51fa2-2be1-40f3-b96d-fb0844740afa  -wi-ao----  <1.75t
cephdb1                                         cephdb03                                   -wi-a-----  232.00g
cephdb2                                         cephdb03                                   -wi-a-----  232.00g
cephdb3                                         cephdb03                                   -wi-a-----  232.00g
cephdb4                                         cephdb03                                   -wi-a-----  232.00g
lv_root                                         cephnode03-20240110                        -wi-ao----  <468.36g
lv_swap                                         cephnode03-20240110                        -wi-ao----  <7.63g
All the official docs I have read about this seem to assume the Ceph
components are installed directly on the host, rather than in
containers (which is what 'cephadm' does).
Any advice for migrating the DB/WAL to the SSDs when using 'cephadm'?
(I could probably destroy the OSD and manually re-create it with the
options for pointing the DB/WAL to the SSD, but I would rather do it
without forcing a data migration; otherwise I would have to wait for
the rebalance on each OSD I migrate.)
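For reference, the kind of non-destructive sequence I was hoping would work
(just my reading of the ceph-volume docs, with <osd fsid> as a placeholder)
is roughly this. On the host:

ceph orch daemon stop osd.10
cephadm shell --name osd.10

Inside that shell:

ceph-volume lvm new-db --osd-id 10 --osd-fsid <osd fsid> --target cephdb03/cephdb1
ceph-volume lvm migrate --osd-id 10 --osd-fsid <osd fsid> --from data --target cephdb03/cephdb1

Then back on the host:

ceph orch daemon start osd.10

The new-db step is where I am currently stuck.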
Follow-up question: if I do have to go the route of destroying the
OSD and re-creating it with the flags for pointing the DB/WAL to the
SSD partition, the syntax I have seen is this:
ceph-volume lvm prepare --bluestore --block.db --block.wal --data VOLUME_GROUP/LOGICAL_VOLUME
but that command only seems to specify a location for the data.
Should the syntax be more like:
ceph-volume lvm prepare --bluestore --block.db VG2/LV1 --block.wal VG2/LV1 --data /dev/sda
(assuming my HDD is /dev/sda)?
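In other words, something along these lines (my guess only, using my node-3
names; please correct the flags if I have them wrong):

ceph-volume lvm prepare --bluestore --data /dev/sda --block.db cephdb03/cephdb1

adding --block.wal only if the WAL should live on a different LV than the DB;
my understanding is that without it the WAL simply ends up on the DB device.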
Thanks! :-)
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx