Re: CEPH failure domain - power considerations

Hello!

On Fri, May 29, 2020 at 09:58:58 AM +0200, pr wrote:

> Hans van den Bogert (hansbogert) writes:
> > I would second that; there's no winning in this case with your requirements
> > and single-PSU nodes. If there were 3 feeds, then yes: you could add an
> > extra layer to your crushmap, much like you would incorporate a rack
> > topology in the crushmap.
> 
> 	I'm not fully up on coffee yet today, so I haven't worked out why
> 	3 feeds would help? To have a 'tie breaker' of sorts, with hosts
> 	spread across 3 rails?

You can split your setup into 3 pieces, so an outage of any one of them
takes down no more than 1/3 of your cluster, and the cluster as a whole
survives: with 3 replicas (one per feed) and monitors spread across the
feeds, losing a single feed still leaves 2 of 3 replicas and a monitor
quorum. Your switches, being connected to 2 PDUs with separate PSUs, will
also survive the loss of one PDU/PSU/ATS.
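
As a rough sketch of what that extra CRUSH layer could look like (the
hostnames node1..node3, the bucket names feed-a/b/c, and the pool name
"mypool" are made up here; Ceph already ships a predefined "pdu" bucket
type, so no custom type is needed):

    # One bucket per power feed, placed under the default root.
    ceph osd crush add-bucket feed-a pdu
    ceph osd crush add-bucket feed-b pdu
    ceph osd crush add-bucket feed-c pdu
    ceph osd crush move feed-a root=default
    ceph osd crush move feed-b root=default
    ceph osd crush move feed-c root=default

    # Move each host under the feed that powers it.
    ceph osd crush move node1 pdu=feed-a
    ceph osd crush move node2 pdu=feed-b
    ceph osd crush move node3 pdu=feed-c

    # Replicated rule that places each replica under a different feed.
    ceph osd crush rule create-replicated rep-by-pdu default pdu
    ceph osd pool set mypool crush_rule rep-by-pdu

With size=3 on that pool, every PG then has exactly one copy per feed,
and one mon per feed keeps quorum through the loss of any single feed.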
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



