Re: OSD remains down

Hi,

If the mounted device is not coming up, you can replace it with a new disk and Ceph will handle rebalancing the data.

Here are the steps if you would like to replace the failed disk with a new one (a sketch for watching the recovery follows the list):

1. Mark the OSD out:
     ceph osd out osd.110
2. Remove the failed OSD from the CRUSH map; as soon as it is removed from the CRUSH map, the recovery process will start:
     ceph osd crush remove osd.110
3. Delete the keyring for that OSD and finally remove the OSD:
     ceph auth del osd.110
     ceph osd rm 110
4. Once recovery is done and ceph status shows active+clean, remove the old drive and insert the new drive, say /dev/sdb.
5. Now create the OSD using ceph-deploy (or the way you added OSDs at first):
     ceph-deploy osd create <node>:/dev/sdb --zap-disk
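
Not part of the original reply, but a minimal sketch of how one might watch the recovery mentioned in step 4 and confirm the removal before swapping the drive (osd.110 is the id from this thread):

     # watch cluster events until recovery finishes and status returns to active+clean
     ceph -w
     # or poll the cluster summary instead
     ceph status
     # confirm osd.110 is no longer listed before pulling the old drive
     ceph osd tree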

Thanks
Sahana
    


On Fri, Mar 20, 2015 at 12:10 PM, Jesus Chavez (jeschave) <jeschave@xxxxxxxxx> wrote:
There was a blackout and one of my OSDs remains down. I have noticed that the journal partition and data partition are no longer shown, so the device cannot be mounted…


   8      114    5241856 sdh2
   8      128 3906249728 sdi
   8      129 3901005807 sdi1
   8      130    5241856 sdi2
   8      144 3906249728 sdj
   8      145 3901005807 sdj1
   8      146    5241856 sdj2
   8      192 3906249728 sdm
   8      176 3906249728 sdl
   8      177 3901005807 sdl1
   8      178    5241856 sdl2
   8      160 3906249728 sdk
   8      161 3901005807 sdk1
   8      162    5241856 sdk2
 253        0   52428800 dm-0
 253        1    4194304 dm-1
 253        2   37588992 dm-2


The device is /dev/sdm and the OSD is number 110, so what does that mean? That I have lost everything in OSD 110?
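
(Not part of the original mail: a rough sketch of checks that could be run before concluding the disk itself is dead, assuming parted and smartmontools are installed; /dev/sdm is the device from the listing above.)

     # did the kernel log I/O or link errors for this disk after the blackout?
     dmesg | grep -i sdm
     # is a partition table still readable on the device?
     parted /dev/sdm print
     # quick SMART health check (smartmontools)
     smartctl -H /dev/sdm

Losing a single OSD does not by itself mean the data is gone: with replicated pools of size 2 or more, the other copies live on the remaining OSDs, which can be checked with "ceph osd pool get <pool> size" (<pool> is a placeholder for an actual pool name).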

Thanks



/dev/mapper/rhel-root   50G  4.4G   46G   9% /
devtmpfs               126G     0  126G   0% /dev
tmpfs                  126G   92K  126G   1% /dev/shm
tmpfs                  126G   11M  126G   1% /run
tmpfs                  126G     0  126G   0% /sys/fs/cgroup
/dev/sda1              494M  165M  330M  34% /boot
/dev/sdj1              3.7T  220M  3.7T   1% /var/lib/ceph/osd/ceph-80
/dev/mapper/rhel-home   36G   49M   36G   1% /home
/dev/sdg1              3.7T  256M  3.7T   1% /var/lib/ceph/osd/ceph-50
/dev/sdd1              3.7T  320M  3.7T   1% /var/lib/ceph/osd/ceph-20
/dev/sdc1              3.7T  257M  3.7T   1% /var/lib/ceph/osd/ceph-10
/dev/sdi1              3.7T  252M  3.7T   1% /var/lib/ceph/osd/ceph-70
/dev/sdl1              3.7T  216M  3.7T   1% /var/lib/ceph/osd/ceph-100
/dev/sdh1              3.7T  301M  3.7T   1% /var/lib/ceph/osd/ceph-60
/dev/sde1              3.7T  268M  3.7T   1% /var/lib/ceph/osd/ceph-30
/dev/sdf1              3.7T  299M  3.7T   1% /var/lib/ceph/osd/ceph-40
/dev/sdb1              3.7T  244M  3.7T   1% /var/lib/ceph/osd/ceph-0
/dev/sdk1              3.7T  240M  3.7T   1% /var/lib/ceph/osd/ceph-90
[root@capricornio ~]#


0       3.63                    osd.0   up      1
10      3.63                    osd.10  up      1
20      3.63                    osd.20  up      1
30      3.63                    osd.30  up      1
40      3.63                    osd.40  up      1
50      3.63                    osd.50  up      1
60      3.63                    osd.60  up      1
70      3.63                    osd.70  up      1
80      3.63                    osd.80  up      1
90      3.63                    osd.90  up      1
100     3.63                    osd.100 up      1
110     3.63                    osd.110 down    0



Jesus Chavez

SYSTEMS ENGINEER-C.SALES

jeschave@xxxxxxxxx
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255

CCIE - 44433


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

