Re: Recommendations for building 1PB RadosGW with Erasure Code

> -----Original Message-----
> From: Василий Ангапов [mailto:angapov@xxxxxxxxx]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop <tyler.bishop@xxxxxxxxxxxxxxxxx>
> Cc: Nick Fisk <nick@xxxxxxxxxx>; <ceph-users@xxxxxxxxxxxxxx> <ceph-
> users@xxxxxxxxxxxxxx>
> Subject: Re:  Recommendations for building 1PB RadosGW with
> Erasure Code
> 
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> <tyler.bishop@xxxxxxxxxxxxxxxxx>:
> > With ucs you can run dual server and split the disk.  30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more details?

I think he means that the 60 bays can be zoned, so you end up with one physical JBOD split into two logical 30-bay JBODs, each connected to a different server. What this does to your failure domains is another question.
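If both zoned halves live in one chassis, CRUSH should be told so an erasure-coded pool doesn't land multiple shards in the same physical box. A minimal sketch of how that might look (the profile name, k/m values, pg count, and pool name here are made up for illustration, not a recommendation):

```shell
# Hypothetical EC profile whose failure domain is the physical chassis
# rather than the host, so the two server "halves" of one zoned JBOD
# are never chosen as peers for the same object's shards.
ceph osd erasure-code-profile set ec-demo k=8 m=3 crush-failure-domain=chassis
ceph osd pool create rgw-ec-data 2048 2048 erasure ec-demo
```

This only works if the OSD hosts are actually placed under `chassis` buckets in the CRUSH map; otherwise the placement rule has nothing to distinguish.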

> 
> And again - are dual Xeons powerful enough for a 60-disk node with Erasure Code?

I would imagine yes, but you would most likely need to go for the 12-18 core versions with a high clock, and those are serious $$$$. I don't know at what point this becomes more expensive than 12-disk nodes with "cheap" Xeon-Ds or Xeon E3s.
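As a rough back-of-envelope check (the ~1 core per erasure-coded OSD figure is a commonly cited rule of thumb, not a benchmark):

```shell
# Assumption: ~1 core per OSD under EC load (rule of thumb, not measured).
OSDS_PER_NODE=60
CORES_PER_OSD=1
echo "cores needed: $(( OSDS_PER_NODE * CORES_PER_OSD ))"   # prints "cores needed: 60"
```

Sixty cores per node is exactly why the discussion lands on top-bin 12-18 core dual-socket parts, and why a larger number of small 12-disk nodes with a single modest CPU each may come out cheaper per usable TB.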

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
