Is there any specific reason to do this manually inside the cephadm
shell? Why not let cephadm handle it for you? There are scenarios
where I need to create VGs/LVs manually as well, but I do that from
the host, not from within the cephadm shell, and then let cephadm
manage the devices.
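Roughly like this (just a sketch; the device, VG and LV names below
are placeholders, and the exact ceph orch syntax for pointing the
orchestrator at pre-built LVs can differ between releases, so check
the cephadm docs for your version):

# on the OSD host itself, outside any container
vgcreate ceph-block-0 /dev/sda
lvcreate -n block-0 -l 100%FREE ceph-block-0
vgcreate ceph-db-0 /dev/nvme0n1
lvcreate -n db-0 -L 60G ceph-db-0

# then let cephadm consume the prepared LVs, e.g.
ceph orch daemon add osd ritcephstrdata09:data_devices=ceph-block-0/block-0,db_devices=ceph-db-0/db-0

That way the LVM metadata and the /dev nodes live in the host's
namespace and the containers only consume them. If your failed
attempts left VGs/LVs behind, clean them up from the host first
(lvremove/vgremove, or ceph-volume lvm zap) before retrying.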
Quoting Ed Krotee <ed.krotee@xxxxxxx>:
We are trying to create the block and block.db devices for BlueStore.
Within the cephadm shell we are able to run vgcreate, but we get the
following errors, and we cannot see the VG device in /dev, so the VG
doesn't seem to actually get created: vgs within the cephadm shell
shows it, but vgs at the OS level does not. FYI - SELinux is
disabled.
stdout: Physical volume "/dev/sda" successfully created.
Not creating system devices file due to existing VGs.
stdout: Volume group "ceph-b80b7206-0c2e-4770-9895-51077b1d59d4" successfully created
Running command: lvcreate --yes -l 5245439 -n osd-block-b992b707-c77a-412d-9286-3b3ec1d8b3e9 ceph-b80b7206-0c2e-4770-9895-51077b1d59d4
stderr:
/dev/ceph-b80b7206-0c2e-4770-9895-51077b1d59d4/osd-block-b992b707-c77a-412d-9286-3b3ec1d8b3e9: not found: device not cleared
Aborting. Failed to wipe start of new LV.
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.0 --yes-i-really-mean-it
stderr: purged osd.0
--> RuntimeError: Unable to find any LV for zapping OSD: 0
[ceph: root@ritcephstrdata09 /]# lvcreate --yes -l 5245439 -n osd-block-b992b707-c77a-412d-9286-3b3ec1d8b3e9 ceph-b80b7206-0c2e-4770-9895-51077b1d59d4
/dev/ceph-b80b7206-0c2e-4770-9895-51077b1d59d4/osd-block-b992b707-c77a-412d-9286-3b3ec1d8b3e9: not found: device not cleared
Aborting. Failed to wipe start of new LV.
Any thoughts?
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx