Re: Designating an OSD as a spare

On 06/21/2018 03:35 PM, Drew Weaver wrote:
> Yes,
> 
> Eventually, however, you would probably want to replace the physical
> disk that has died, and with remote deployments it is nice not to have
> to do that instantly; that is how enterprise arrays and support
> contracts have worked for decades.
> 

There is no point in doing that with Ceph. You just provision those
disks as OSDs and make sure you don't fill the cluster beyond a safe
level. If you want, you can even set the warning triggers lower than
the defaults.
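
For example, since Luminous the nearfull/full ratios live in the OSDMap
and can be lowered so you get warned earlier. A quick sketch (the
0.75/0.85 values below are purely illustrative, not recommendations):

  # show the current ratios
  ceph osd dump | grep ratio

  # warn and block writes earlier than the defaults (0.85/0.95)
  ceph osd set-nearfull-ratio 0.75
  ceph osd set-full-ratio 0.85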

If a disk dies you don't have to take any action. Ceph rebalances and
we are back to HEALTH_OK. Now you can replace that dead disk whenever
you like.
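
What drives the automatic part is mon_osd_down_out_interval (600
seconds by default): once a dead OSD has been down that long it is
marked out and backfill starts. A rough way to keep an eye on it
(mon.a is just an example id; run this on the monitor host):

  # how long the mons wait before marking a down OSD out
  ceph daemon mon.a config get mon_osd_down_out_interval

  # watch recovery until the cluster returns to HEALTH_OK
  ceph -s
  ceph health detail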

I manage a few clusters with >1000 disks and they have disks dying
weekly. Once every X weeks somebody goes to the datacenter and swaps
the broken disks. In the meantime we run happily without those disks.
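
The swap itself is routine. A minimal sketch, assuming Luminous or
newer (the OSD id and device path here are made up):

  # remove the dead OSD from the CRUSH map, auth keys and the OSD map
  ceph osd purge 12 --yes-i-really-mean-it

  # provision the replacement disk as a fresh OSD
  ceph-volume lvm create --data /dev/sdf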

Wido

> 
> I understand your point from a purely technological standpoint, but I
> was approaching it from more of a logistical standpoint.
> 
> I suppose just leaving a disk in the system that isn’t being used
> would be good enough as well.
> 
> Thanks.
> 
> -Drew
> 
> *From:* Paul Emmerich [mailto:paul.emmerich@xxxxxxxx]
> *Sent:* Thursday, June 21, 2018 9:16 AM
> *To:* Drew Weaver <drew.weaver@xxxxxxxxxx>
> *Cc:* ceph-users <ceph-users@xxxxxxxxxxxxxx>
> *Subject:* Re: Designating an OSD as a spare
> 
> Spare disks are bad design. There is no point in having a disk that
> is not being used.
> 
> Ceph will automatically remove a dead disk from the cluster after 15
> minutes, backfilling the data onto other disks.
> 
> Paul
> 
> 2018-06-21 14:54 GMT+02:00 Drew Weaver <drew.weaver@xxxxxxxxxx>:
> 
>     Does anyone know if it is possible to designate an OSD as a spare so
>     that if a disk dies in a host no administrative action needs to be
>     immediately taken to remedy the situation?
> 
>     Thanks,
> 
>     -Drew
> 
> -- 
> Paul Emmerich
> 
> Looking for help with your Ceph cluster? Contact us at https://croit.io
> 
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
> 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
