Re: Newbie question

I'm actually looking for a similar answer. If 1 OSD = 1 HDD, dumpling will relocate the data for me after the timeout, which is great. But if I just want to replace the OSD with an unformatted new HDD, what is the procedure?

One method that has worked for me is to remove it from the CRUSH map and then re-add the OSD drive to the cluster. This works, but it seems like a lot of overhead just to replace a single drive. Is there a better way to do this?
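For reference, the remove-then-re-add sequence above can be sketched roughly as below. This is a sketch of the dumpling-era CLI, not a definitive procedure; the OSD id (12), hostname (node1), and device (/dev/sdb) are placeholders, and DRY_RUN=1 only prints each command so the steps can be reviewed before running them for real.

```shell
# Placeholder id of the failed OSD -- substitute your own.
OSD_ID=12
DRY_RUN=1

# Helper: echo the command instead of executing it when DRY_RUN=1.
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

# 1. Take the failed OSD out and remove every trace of it.
run ceph osd out "osd.$OSD_ID"
run ceph osd crush remove "osd.$OSD_ID"   # drop it from the CRUSH map
run ceph auth del "osd.$OSD_ID"           # delete its cephx key
run ceph osd rm "osd.$OSD_ID"             # remove it from the OSD map

# 2. After physically swapping the drive, re-create the OSD, e.g. with
#    ceph-deploy (hostname and device are assumptions):
run ceph-deploy osd create node1:/dev/sdb
```

With DRY_RUN=0 the same script executes the commands against the cluster, so the whole swap stays in one reviewable place instead of ad-hoc typing.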


On Wed, Oct 2, 2013 at 8:10 AM, Andy Paluch <andy@xxxxxxxxxxx> wrote:
What happens when a drive goes bad in Ceph and has to be replaced at the physical level? In the RAID world you pop out the bad disk, stick a new one in, and the controller takes care of getting it back into the system. From what I've been reading so far, doing this with Ceph is probably going to be a mess and involve a lot of low-level Linux tweaking to remove and replace the failed disk. I'm not a big Linux guy, so I was wondering if anyone can point me to docs on how to recover from a bad disk in a Ceph node.

Thanks


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



--
Follow Me: @Scottix
http://about.me/scottix
Scottix@xxxxxxxxx
