Re: Replacing a defective OSD

On 07. sep. 2016 02:51, Vlad Blando wrote:
Hi,

I replaced a failed OSD and was trying to add it back to the pool; my
problem is that the physical disk is not being detected. It looks like I
need to initialize it via the hardware RAID controller before I can see it in the OS.

If I restart that server so I can work on the RAID config
(RAID 0), what will the behavior of the remaining 2 nodes be? Will there
be a slowdown? Will there be backfilling? I want to minimize client impact.

Thanks.

/vlad

If you do not want to do backfilling and recovery, but would rather run slightly degraded while the node is down, you can set the noout flag, as long as the cluster can operate OK with those OSDs missing.
It's kind of Ceph's "maintenance mode":

 # ceph osd set noout

Remember to unset it when you are done and all OSDs are up/in again; you will not get HEALTH_OK while noout is set.
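For completeness, the corresponding commands (the exact warning wording may vary slightly between Ceph versions):

 # ceph -s                    shows HEALTH_WARN with a "noout flag(s) set" warning
 # ceph osd unset noout       clears the flag again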


But a separate question is: what kind of hardware controller do you have? Most controllers let you edit the config and add drives from within the OS, using the controller's often proprietary software that you usually have to download from the vendor's web pages.

Do you find your controller on this list?
https://wiki.debian.org/LinuxRaidForAdmins

This controller software is often needed for troubleshooting, and it can also report drive status and be hooked into monitoring.
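For example, if it is an LSI MegaRAID controller, MegaCli can usually do the whole thing online (the [252:4] enclosure:slot below is just a placeholder, look up the real one with the first command):

 # MegaCli -PDList -aALL              list physical drives, note the enclosure and slot of the new disk
 # MegaCli -CfgLdAdd -r0 [252:4] -a0  create a single-drive RAID 0 logical drive on it

That way the OS should see the new disk without rebooting the node at all.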



kind regards
Ronny Aasen








