Hi,
If the mounted device is not coming back up, you can replace it with a new disk and Ceph will handle rebalancing the data.
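If you first want to confirm which OSD is down and the overall cluster health, the standard ceph CLI run from a monitor or admin node should be enough, for example:

ceph osd tree        # the failed OSD should show as down
ceph health detail   # lists degraded/stuck PGs, if any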
Here are the steps if you would like to replace the failed disk with a new one:
1. Mark the failed OSD out of the cluster:
ceph osd out osd.110
2. Now remove the failed OSD from the CRUSH map; as soon as it is removed from the CRUSH map, the recovery process will start:
ceph osd crush remove osd.110
3. Delete the keyring for that OSD and finally remove the OSD:
ceph auth del osd.110
ceph osd rm 110
4. Once recovery is done and ceph status reports active+clean, remove the old drive and insert the new drive, say /dev/sdb. (Example commands for checking this are shown after the steps.)
5. Now create the OSD using ceph-deploy (or however you added the OSDs originally):
ceph-deploy osd create <node>:/dev/sdb --zap-disk
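To follow the recovery from step 4 and to confirm the replacement afterwards, something along these lines should work (run from an admin node; names and output will vary):

ceph -w          # watch recovery/backfill progress live
ceph -s          # overall status; wait until all PGs are active+clean
ceph osd tree    # osd.110 should be gone and the new OSD up and in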
Thanks
Sahana
On Fri, Mar 20, 2015 at 12:10 PM, Jesus Chavez (jeschave) <jeschave@xxxxxxxxx> wrote:
There was a blackout and one of my OSDs remains down. I have noticed that the journal partition and the data partition are not shown anymore, so the device cannot be mounted…
8   114  5241856     sdh2
8   128  3906249728  sdi
8   129  3901005807  sdi1
8   130  5241856     sdi2
8   144  3906249728  sdj
8   145  3901005807  sdj1
8   146  5241856     sdj2
8   192  3906249728  sdm
8   176  3906249728  sdl
8   177  3901005807  sdl1
8   178  5241856     sdl2
8   160  3906249728  sdk
8   161  3901005807  sdk1
8   162  5241856     sdk2
253 0    52428800    dm-0
253 1    4194304     dm-1
253 2    37588992    dm-2
The device is /dev/sdm and the OSD is number 110, so what does that mean? That I have lost everything in OSD 110?
Thanks
/dev/mapper/rhel-root   50G  4.4G   46G   9% /
devtmpfs               126G     0  126G   0% /dev
tmpfs                  126G   92K  126G   1% /dev/shm
tmpfs                  126G   11M  126G   1% /run
tmpfs                  126G     0  126G   0% /sys/fs/cgroup
/dev/sda1              494M  165M  330M  34% /boot
/dev/sdj1              3.7T  220M  3.7T   1% /var/lib/ceph/osd/ceph-80
/dev/mapper/rhel-home   36G   49M   36G   1% /home
/dev/sdg1              3.7T  256M  3.7T   1% /var/lib/ceph/osd/ceph-50
/dev/sdd1              3.7T  320M  3.7T   1% /var/lib/ceph/osd/ceph-20
/dev/sdc1              3.7T  257M  3.7T   1% /var/lib/ceph/osd/ceph-10
/dev/sdi1              3.7T  252M  3.7T   1% /var/lib/ceph/osd/ceph-70
/dev/sdl1              3.7T  216M  3.7T   1% /var/lib/ceph/osd/ceph-100
/dev/sdh1              3.7T  301M  3.7T   1% /var/lib/ceph/osd/ceph-60
/dev/sde1              3.7T  268M  3.7T   1% /var/lib/ceph/osd/ceph-30
/dev/sdf1              3.7T  299M  3.7T   1% /var/lib/ceph/osd/ceph-40
/dev/sdb1              3.7T  244M  3.7T   1% /var/lib/ceph/osd/ceph-0
/dev/sdk1              3.7T  240M  3.7T   1% /var/lib/ceph/osd/ceph-90
[root@capricornio ~]#
0    3.63  osd.0    up    1
10   3.63  osd.10   up    1
20   3.63  osd.20   up    1
30   3.63  osd.30   up    1
40   3.63  osd.40   up    1
50   3.63  osd.50   up    1
60   3.63  osd.60   up    1
70   3.63  osd.70   up    1
80   3.63  osd.80   up    1
90   3.63  osd.90   up    1
100  3.63  osd.100  up    1
110  3.63  osd.110  down  0
Jesus Chavez
SYSTEMS ENGINEER-C.SALES
jeschave@xxxxxxxxx
Phone: +52 55 5267 3146
Mobile: +51 1 5538883255
CCIE - 44433
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com