Re: [a bit off-topic] Power usage estimation of hardware for Ceph

On Sun, 12 Apr 2015 18:03:52 +0200 Francois Lafont wrote:

> Hi,
> 
> Christian Balzer wrote:
> 
> >> I'm not sure I understand correctly: the model I indicated in the
> >> link above (page 2, model SSG-6027R-OSD040H in the table) already
> >> has hot-swap bays in the back for OS drives.
> >>
> > Yes, but that model is pre-configured:
> >  2x 2.5" 400GB SSDs, 10x 3.5" 4TB SATA3 HDDs
> >  Rear 2.5" Hot-swap OS drives (mirrored 80GB SSD)
> > 
> > What model SSDs and HDDs are those anyway?
> > 
> > Instead you could use basically the same thing:
> > http://www.supermicro.com.tw/products/system/2U/6028/SSG-6028R-E1CR12L.cfm
> > 
> > And put 12 HDDs (of your choice) in the front and 2 fast and durable
> > SSDs for journals (and OS) in the back.
> 
> Ok, thx for the link.
> 
> Sorry for yet another question, but: are there people who use software
> RAID 1 on the dedicated journal SSDs? For instance, put the journals of
> OSDs 1, 2 and 3 on SSD1 and create a software RAID 1 between SSD1 and
> SSD2, so that if SSD1 crashes, OSDs 1, 2 and 3 stay alive. It seems to
> me that few people use software RAID 1 between dedicated journal SSDs.
> Am I wrong? It could be a good way to minimize the risk of losing a
> set of OSDs when an SSD crashes. Of course, I imagine that I should
> decrease the journals-per-SSD ratio in this case.
> 
Simply put, a RAID1 of SSDs requires you to buy twice as many SSDs as
otherwise needed, and most people don't want to spend that money.
In addition to that, DC-level SSDs tend to be very reliable, and your
cluster will have to be able to withstand losses like this anyway.
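
To put rough numbers on that last point, here is a minimal Python
sketch (the node, journal and replication counts below are purely
hypothetical examples, not your actual layout):

# What a single journal SSD failure takes out of a Ceph cluster.
# All numbers are hypothetical examples; adjust to your own setup.

osds_per_node = 12        # e.g. the 12-bay chassis above
journals_per_ssd = 6      # 2 journal SSDs in the rear bays
replication_size = 3      # default pool size, one replica per host

# Losing one journal SSD kills every OSD journaling to it.
osds_lost = journals_per_ssd
print(f"One SSD failure takes down {osds_lost} OSDs "
      f"({osds_lost / osds_per_node:.0%} of the node).")

# With the default CRUSH rule placing replicas on distinct hosts, every
# affected PG still has copies elsewhere, so nothing is lost; the
# cluster just has to backfill.
print(f"Remaining copies per affected PG: {replication_size - 1}")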
Finally, using a RAID1 to protect against an SSD failure caused by the
drive running out of write cycles is ineffective unless you're using
SSDs with differing TBW endurance. If the SSDs are identical and have
seen exactly the same write load from the start, they're likely to fail
at the same time (or, in the case of the Intel DC S ones, are even
guaranteed to switch to read-only together).
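
To see why, here is a quick endurance estimate in Python (the write
rate and TBW rating are made-up example values; read the real ones from
your drive's data sheet and SMART counters):

# Rough journal SSD wear-out estimate; all figures are hypothetical.

tbw_rating_tb = 7300          # endurance rating in TB written
journal_write_rate_mb_s = 50  # sustained journal writes hitting the SSD

seconds_per_year = 365 * 24 * 3600
tb_written_per_year = journal_write_rate_mb_s * seconds_per_year / 1e6

print(f"~{tb_written_per_year:.0f} TB/year -> worn out in "
      f"~{tbw_rating_tb / tb_written_per_year:.1f} years")

# A RAID1 mirror sees the exact same writes, so both drives hit that
# limit in the same week: the mirror protects against random electronic
# failure, not against wear-out.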

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/