Yeah, I've seen this happen when replacing OSDs. Like Eugen said, there
are some services that get created for mounting the volumes. You can
disable them like this:

systemctl disable ceph-volume@lvm-{osdid}-{fsid}.service

List the contents of /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-*
to find the enabled ones. If you've got a lot of them, I usually disable
them all:

for service in /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-*; do
  service=`basename $service`
  sudo systemctl disable $service
done

Then re-enable them from the LVM tags, so you end up with just the ones
that actually exist, like this:

for lv in /dev/mapper/ceph*
do
  osd=`sudo lvs -o lv_tags $lv | tail -1 | grep -Po "ceph.osd_id=([0-9]*)" | gawk -F= '{ print $2 }'`
  fsid=`sudo lvs -o lv_tags $lv | tail -1 | grep -Po "ceph.osd_fsid=([a-z\-_0-9]*)" | gawk -F= '{ print $2 }'`
  sudo systemctl enable ceph-volume@lvm-$osd-$fsid
  sudo systemctl enable --runtime ceph-osd@$osd.service
done

Rich

On Thu, 21 Apr 2022 at 20:29, Eugen Block <eblock@xxxxxx> wrote:
>
> These are probably remainders of previous OSDs, I remember having to
> clean up orphaned units from time to time. Compare the UUIDs to your
> actual OSDs and disable the units of the non-existing OSDs.
>
> Quoting Marc <Marc@xxxxxxxxxxxxxxxxx>:
>
> > I added some OSDs which are up and running with:
> >
> > ceph-volume lvm create --data /dev/sdX --dmcrypt
> >
> > But I am still getting messages like these for the newly created OSDs:
> >
> > systemd: Job dev-disk-by\x2duuid-7a8df80d\x2d4a7a\x2d469f\x2d868f\x2d8fd9b7b0f09d.device/start timed out.
> > systemd: Timed out waiting for device dev-disk-by\x2duuid-7a8df80d\x2d4a7a\x2d469f\x2d868f\x2d8fd9b7b0f09d.device.
> > systemd: Dependency failed for /var/lib/ceph/osd/ceph-11.
> > systemd: Job dev-disk-by\x2duuid-864b01aa\x2d1abf\x2d4dc0\x2da532\x2dced7cb321f4a.device/start timed out.
> > systemd: Timed out waiting for device dev-disk-by\x2duuid-864b01aa\x2d1abf\x2d4dc0\x2da532\x2dced7cb321f4a.device.
> > systemd: Dependency failed for /var/lib/ceph/osd/ceph-1.
> >
> > ceph 14.2.22
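
If you'd rather check which units are orphaned before disabling anything
(Eugen's suggestion of comparing the UUIDs against the actual OSDs), here is
a minimal sketch, assuming ceph-volume is installed on the OSD host and its
listing still prints the "osd id" / "osd fsid" lines:

# the ceph-volume units currently enabled for mounting OSD volumes
ls /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-*

# the osd id / osd fsid pairs that actually exist on this host
sudo ceph-volume lvm list | grep -E 'osd id|osd fsid'

Any ceph-volume@lvm-{osdid}-{fsid} unit from the first listing whose id/fsid
pair doesn't show up in the second is a leftover and should be safe to
disable.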