On Mon, 22 Nov 2021 at 09:03, Janne Johansson <icepic.dz@xxxxxxxxx> wrote:
> On Mon, 22 Nov 2021 at 06:52, GHui <ugiwgh@xxxxxx> wrote:
> >
> > I have done "systemctl restart ceph.target", but the osd service is not started.
> > It's strange that osd.2 is shown as up, yet I can't find a running osd service or osd container for it.
> > [root@GHui cephconfig]# ceph osd df
> > ID  CLASS  WEIGHT   REWEIGHT  SIZE  RAW USE  DATA  OMAP  META  AVAIL  %USE  VAR   PGS  STATUS
> >  0    ssd  1.74660   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0    down
> >  1    ssd  1.74660   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0    down
> >  4    ssd  0.36389   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0    down
> >  2    ssd  1.74660   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0      up
> >  3    ssd  1.74660   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0    down
> >  5    ssd  0.36389   1.00000   0 B      0 B   0 B   0 B   0 B    0 B     0  1.00    0    down
> >                       TOTAL    0 B      0 B   0 B   0 B   0 B    0 B     0
> > MIN/MAX VAR: 1.00/1.00  STDDEV: 0
> >
> > I would very much appreciate any advice.
>
> Looks a bit like when you create your OSDs pointing to sda, sdb and so
> on, then reboot and the system assigns new letters (or new numbers for
> dm-1, dm-2..), so the links under /var/lib/ceph/osd/*/ now point to the
> wrong devices.
>
> --
> May the most significant bit of your life be positive.

And to "fix" this you need to repair all such links, e.g.:

lrwxrwxrwx. 1 ceph ceph 48 May 13  2019 block -> /dev/mapper/5785afc3-bfbb-47b8-8343-4a532888b912

Perhaps "ceph-volume lvm list" and/or "ceph-volume inventory" can help
identify which raw drive was related to which OSD number.

--
May the most significant bit of your life be positive.
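A minimal sketch of what that repair could look like, assuming plain
(non-containerized) LVM-based OSDs; the OSD id, VG name and LV uuid below
are placeholders, take the real values from the ceph-volume output on your
node:

    # show which LV / raw device each OSD id was created on
    ceph-volume lvm list
    ceph-volume inventory

    # inspect the block symlink of a down OSD, e.g. osd.0
    ls -l /var/lib/ceph/osd/ceph-0/block

    # if the link is stale, re-point it at the device ceph-volume reported,
    # fix ownership of the link itself, then restart the daemon
    ln -sfn /dev/<vg-name>/osd-block-<uuid> /var/lib/ceph/osd/ceph-0/block
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
    systemctl restart ceph-osd@0

For LVM-based OSDs, "ceph-volume lvm activate --all" should recreate the
mounts and symlinks in one go, which is usually less error-prone than fixing
each link by hand. If the OSDs run containerized under cephadm, the systemd
unit is named ceph-<fsid>@osd.<id> rather than ceph-osd@<id>.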