Hi,
is there any specific reason why you activate the OSDs manually instead
of letting cephadm handle it? I might misremember, but I believe that
for manual LVM activation to work you need to pass the '--no-systemd' flag.
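For example (untested, host/device/fsid taken from your output further
down, so adjust as needed), you could either let cephadm create and
manage the OSD directly:

  ceph orch daemon add osd hvs001:/dev/sda2

or, if you want to keep the manual ceph-volume workflow, skip the
systemd handling:

  ceph-volume lvm activate --no-systemd 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc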
Regards,
Eugen
Quoting Dominique Ramaekers <dominique.ramaekers@xxxxxxxxxx>:
Hi,
I've set up a Ceph cluster using cephadm on three Ubuntu servers.
Everything went great until I tried to activate an OSD prepared on an
LVM volume.
I have prepared 4 volumes with the command:
ceph-volume lvm prepare --data vg/lv
Now I try to activate one of them with the command (followed by the output):
root@hvs001:/# ceph-volume lvm activate 0 25bfe96a-4f7a-47e1-8644-b74a4d104dbc
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph
prime-osd-dir --dev /dev/hvs001_sda2/lvol0 --path
/var/lib/ceph/osd/ceph-0 --no-mon-config
Running command: /usr/bin/ln -snf /dev/hvs001_sda2/lvol0
/var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Running command: /usr/bin/systemctl enable
ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc
stderr: Created symlink
/etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-25bfe96a-4f7a-47e1-8644-b74a4d104dbc.service ->
/usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
stderr: Created symlink
/run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service ->
/usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@0
stderr: Failed to connect to bus: No such file or directory
--> RuntimeError: command returned non-zero exit status: 1
It seems systemd isn't playing along?
Please advise.
Some additional background info:
root@hvs001:/# ceph status
  cluster:
    id:     dd4b0610-b4d2-11ec-bb58-d1b32ae31585
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum hvs001,hvs002,hvs003 (age 23m)
    mgr: hvs001.baejuo(active, since 23m), standbys: hvs002.etijdk
    osd: 4 osds: 0 up, 2 in (since 36m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
root@hvs001:/# ceph-volume lvm list
====== osd.0 =======

  [block]       /dev/hvs001_sda2/lvol0

      block device              /dev/hvs001_sda2/lvol0
      block uuid                6cEw8v-5xIA-K76l-7zIN-V2BK-RNWD-yGwfqp
      cephx lockbox secret
      cluster fsid              dd4b0610-b4d2-11ec-bb58-d1b32ae31585
      cluster name              ceph
      crush device class
      encrypted                 0
      osd fsid                  25bfe96a-4f7a-47e1-8644-b74a4d104dbc
      osd id                    0
      osdspec affinity
      type                      block
      vdo                       0
      devices                   /dev/sda2

====== osd.1 =======

  [block]       /dev/hvs001_sdb3/lvol1

....
Greetings,
Dominique.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx