Re: Redundant Power Supplies


 



On 10/30/2014 03:36 PM, Nick Fisk wrote:
> What’s everyone’s opinions on having redundant power supplies in your OSD
> nodes?
> 
>  
> 
> One part of me says let Ceph do the redundancy and plan for the hardware to
> fail, the other side says that they are probably worth having as they lessen
> the chance of losing a whole node.
> 
>  
> 
> Considering they can add £200-300 to a server, the cost can add up over a
> number of nodes.
> 
>  
> 
> My worst case scenario is where you have dual power feeds A and B. In this
> scenario if power feed B ever goes down (fuse/breaker maybe)  then suddenly
> half your cluster could disappear and start doing massive recovery
> operations. I guess this could be worked around by setting some sort of sub
> tree limit grouped by power feed.
> 
> 
> Thoughts?
> 

I did a deployment with single power supplies because of the reasoning
you mention.

Each rack (3 in total) is split into 3 zones, and each zone has its own switch.

In each rack there are 6 machines on powerfeed A together with a switch,
a set of machines on powerfeed B with a switch, and a static transfer
switch (STS) which provides "powerfeed C".

Should a breaker trip in a cabinet we'll lose 6 machines at most. If
powerfeed A or B goes down datacenter-wide we'll lose 1/3 of the cluster.

In the CRUSH map we defined powerfeed buckets and place our replicas
across the different powerfeeds.
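
As a minimal sketch, such a CRUSH map could define a custom "powerfeed"
bucket type and a rule that spreads replicas across it. All names, IDs,
and weights below are illustrative, not from the actual deployment:

```
# Custom bucket type between host and root (type IDs are examples)
type 0 osd
type 1 host
type 2 powerfeed
type 3 root

# One bucket per powerfeed, containing its hosts (hosts elided)
powerfeed feed-a {
        id -2
        alg straw
        hash 0
        item host1 weight 1.000
        item host2 weight 1.000
}

root default {
        id -1
        alg straw
        hash 0
        item feed-a weight 2.000
        item feed-b weight 2.000
        item feed-c weight 2.000
}

# Place each replica under a different powerfeed
rule replicated_powerfeed {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type powerfeed
        step emit
}
```

With `chooseleaf firstn 0 type powerfeed`, losing an entire feed costs
at most one replica of any given object.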

mon_osd_down_out_subtree_limit has been set to "powerfeed" to prevent a
whole powerfeed from being marked as "out".
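
In ceph.conf that setting would look roughly like this (assuming the
"powerfeed" bucket type from the CRUSH map above; the option name is
real, the rest of the file is omitted):

```
[mon]
# Don't automatically mark OSDs "out" when a subtree at or above
# the "powerfeed" level fails; wait for operator intervention instead
# of triggering a massive recovery.
mon osd down out subtree limit = powerfeed
```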

This way we saved about EUR 300 per machine. Across 54 machines that was
quite a big saving.



-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




