Re: How to replace a failed OSD

Hi,

Let's say disk /dev/sdb fails on node nodeA. I would hot-remove it, plug in a new one, and run:

ceph-deploy osd create nodeA:/dev/sdb

There is more context about how this is actually managed by Ceph and the operating system in "Fully automated disk life cycle in a Ceph cluster": http://dachary.org/?p=2428
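Before creating the replacement, the failed OSD's entries usually need to be cleaned out of the cluster first. A minimal sketch of that removal sequence, assuming the failed disk hosted osd.12 (substitute your own OSD id, and note the exact service command depends on your init system):

```shell
# Hedged sketch: remove the failed OSD before re-creating it.
# osd.12 is an assumed example id, not from the thread.
ceph osd out osd.12            # mark it out so data rebalances away
# stop the daemon on nodeA first, e.g.:
#   service ceph stop osd.12
ceph osd crush remove osd.12   # remove it from the CRUSH map
ceph auth del osd.12           # delete its authentication key
ceph osd rm osd.12             # remove the OSD from the cluster

# then swap the disk and re-create:
#   ceph-deploy osd create nodeA:/dev/sdb
```

If the old id is fully removed before the new OSD is created, Ceph will typically reuse the lowest free id, which addresses the wish below to keep the current OSD numbers.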

Cheers

On 20/11/2013 10:27, Robert van Leeuwen wrote:
Hi,

What is the easiest way to replace a failed disk / OSD?
It looks like the documentation here is not really compatible with ceph-deploy:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/

It talks about adding entries to ceph.conf, while ceph-deploy works in a different way.
(I've tried it without adding to ceph.conf and that obviously did not work.)

Is there an easy way to replace a single failed OSD that was deployed with ceph-deploy?
You could remove the OSD and add a new one, but I would prefer to just reuse the current config / OSD numbers.
Basically I would like to do a partition/format and run some ceph commands to get things working again...

Thx,
Robert van Leeuwen
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


--
Loïc Dachary, Artisan Logiciel Libre




