Removing OSDs from the cluster map

Hello team,

I am testing my Ceph Pacific cluster, which runs on VMs and is integrated
with OpenStack. Suddenly one of the hosts turned off and failed. I built
another host with the same number of OSDs as the first one and redeployed
the cluster. Unfortunately the cluster is still up with only 2 hosts, and
the redeployment of the new one fails at this stage:


TASK [ceph-osd : wait for all osd to be up]
************************************
Tuesday 12 April 2022  15:45:43 +0000 (0:00:01.778)       0:03:26.013
*********
skipping: [ceph-osd1]
skipping: [ceph-osd2]
FAILED - RETRYING: waiting for all osd to be up

root@ceph-mon1:~# ceph osd tree
ID  CLASS  WEIGHT    TYPE NAME           STATUS  REWEIGHT  PRI-AFF
-1         15.62549  root default
-3          7.81274      host ceph-osd1
 0    hdd   0.97659          osd.0         down         0  1.00000
 3    hdd   0.97659          osd.3         down         0  1.00000
 6    hdd   0.97659          osd.6         down         0  1.00000
10    hdd   0.97659          osd.10        down         0  1.00000
12    hdd   0.97659          osd.12          up   1.00000  1.00000
13    hdd   0.97659          osd.13          up   1.00000  1.00000
14    hdd   0.97659          osd.14          up   1.00000  1.00000
15    hdd   0.97659          osd.15          up   1.00000  1.00000
-7          3.90637      host ceph-osd2
 1    hdd   0.97659          osd.1           up   1.00000  1.00000
 4    hdd   0.97659          osd.4           up   1.00000  1.00000
 7    hdd   0.97659          osd.7           up   1.00000  1.00000
 9    hdd   0.97659          osd.9           up   1.00000  1.00000
-5          3.90637      host ceph-osd3
 2    hdd   0.97659          osd.2           up   1.00000  1.00000
 5    hdd   0.97659          osd.5           up   1.00000  1.00000
 8    hdd   0.97659          osd.8           up   1.00000  1.00000
11    hdd   0.97659          osd.11          up   1.00000  1.00000
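
My guess is that the four OSDs from the failed ceph-osd1 (osd.0, osd.3,
osd.6 and osd.10) are still registered in the cluster map, so the new host
cannot bring its OSDs up under those IDs. If I understand the documentation
correctly, something like the following would purge each dead OSD, assuming
the old disks are really gone for good (I have not run this yet, so please
correct me if it is wrong):

# repeat for each dead OSD id (0, 3, 6, 10):
ceph osd out osd.0                          # stop CRUSH from mapping data to it
ceph osd purge 0 --yes-i-really-mean-it     # remove it from the CRUSH map, delete its auth key and its osd map entry

and then re-run the ceph-ansible playbook so the new host can create its
OSDs.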

Any guidance on how I can fix this?

Best regards,

Michel