Re: Recommendations for building 1PB RadosGW with Erasure Code


 



I'm using 2x replication on that pool for storing RBD volumes. Our workload is pretty heavy; I'd imagine an object workload on EC would be light in comparison.
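For a rough sense of the raw-space difference, here's a back-of-the-envelope sketch in Python; the k=8, m=3 EC profile is just an illustrative assumption, not something we run:

    # Raw-capacity overhead: 2x replication vs. an erasure-coded pool.
    # The EC profile below (k=8, m=3) is an assumed example, not a recommendation.
    usable_tb = 1000.0                 # ~1 PB usable

    replica_overhead = 2.0             # size=2 replicated pool stores every byte twice
    k, m = 8, 3                        # hypothetical EC profile: 8 data + 3 coding chunks
    ec_overhead = (k + m) / float(k)   # 1.375x raw per usable byte

    print("2x replication: %.0f TB raw" % (usable_tb * replica_overhead))
    print("EC %d+%d:        %.0f TB raw" % (k, m, usable_tb * ec_overhead))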

 


Tyler Bishop
Chief Technical Officer
513-299-7108 x10

Tyler.Bishop@xxxxxxxxxxxxxxxxx

If you are not the intended recipient of this transmission you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.

 



From: "John Hogenmiller" <john@xxxxxxxxxxxxxxx>
To: "Tyler Bishop" <tyler.bishop@xxxxxxxxxxxxxxxxx>
Cc: "Nick Fisk" <nick@xxxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Wednesday, February 17, 2016 7:50:11 AM
Subject: Re: Recommendations for building 1PB RadosGW with Erasure Code

Tyler,
The E5-2660 v2 is a 10-core, 2.2 GHz part, so a dual-socket node gives you roughly 44 GHz, or about 0.79 GHz per OSD. That falls in line with Nick's "golden rule" of 0.5-1 GHz per OSD.
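For anyone checking the arithmetic, a quick sketch (node shape taken from your description below):

    # GHz-per-OSD estimate: dual E5-2660 v2 (10 cores @ 2.2 GHz) driving 56 OSDs.
    sockets, cores_per_socket, base_clock_ghz = 2, 10, 2.2
    osds = 56

    total_ghz = sockets * cores_per_socket * base_clock_ghz   # 44.0 GHz
    ghz_per_osd = total_ghz / osds                            # ~0.79 GHz per OSD

    print("total: %.1f GHz, per OSD: %.2f GHz" % (total_ghz, ghz_per_osd))
    print("within the 0.5-1 GHz rule of thumb:", 0.5 <= ghz_per_osd <= 1.0)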

Are you doing EC or replication? If EC, what profile? Could you also provide your average CPU utilization?

I'm still researching, but so far, the ratio seems to be pretty realistic.

-John

On Tue, Feb 16, 2016 at 9:22 AM, Tyler Bishop <tyler.bishop@xxxxxxxxxxxxxxxxx> wrote:
We use dual E5-2660 v2 with 56 6 TB drives and performance has not been an issue. It will easily saturate the 40G interfaces and saturate the spindle I/O.

And yes, you can run dual servers attached to 30 disks each. This gives you lots of density. Your failure domain remains individual servers; the only thing shared is the quad power supplies.

Tyler Bishop
Chief Technical Officer
513-299-7108 x10



Tyler.Bishop@xxxxxxxxxxxxxxxxx


If you are not the intended recipient of this transmission you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.

----- Original Message -----
From: "Nick Fisk" <nick@xxxxxxxxxx>
To: "Василий Ангапов" <angapov@xxxxxxxxx>, "Tyler Bishop" <tyler.bishop@xxxxxxxxxxxxxxxxx>
Cc: ceph-users@xxxxxxxxxxxxxx
Sent: Tuesday, February 16, 2016 8:24:33 AM
Subject: RE: Recommendations for building 1PB RadosGW with Erasure Code

> -----Original Message-----
> From: Василий Ангапов [mailto:angapov@xxxxxxxxx]
> Sent: 16 February 2016 13:15
> To: Tyler Bishop <tyler.bishop@xxxxxxxxxxxxxxxxx>
> Cc: Nick Fisk <nick@xxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Recommendations for building 1PB RadosGW with Erasure Code
>
> 2016-02-16 17:09 GMT+08:00 Tyler Bishop
> <tyler.bishop@xxxxxxxxxxxxxxxxx>:
> > With UCS you can run dual servers and split the disks: 30 drives per node.
> > Better density and easier to manage.
> I don't think I got your point. Can you please explain it in more detail?

I think he means that the 60 bays can be zoned, so you end up with one physical JBOD split into two logical 30-disk JBODs, each connected to a different server. What this does to your failure domains is another question.

>
> And again - are dual Xeons powerful enough for a 60-disk node with erasure code?

I would imagine yes, but you would most likely need to go for the 12-18 core versions with a high clock. These are serious $$$$. I don't know at what point this becomes more expensive than 12-disk nodes with "cheap" Xeon D's or Xeon E3's.
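As a rough sanity check on that trade-off, a quick sketch; the core counts and clocks below are assumptions for illustration, not quotes for specific SKUs:

    # CPU-per-OSD for two hypothetical node shapes; all figures are assumed.
    configs = {
        "60-disk node, dual 14-core 2.6 GHz Xeon (assumed)":    (2, 14, 2.6, 60),
        "12-disk node, single 8-core 2.0 GHz Xeon D (assumed)": (1, 8, 2.0, 12),
    }

    for name, (sockets, cores, clock_ghz, osds) in configs.items():
        per_osd = sockets * cores * clock_ghz / osds
        print("%-55s %.2f GHz/OSD" % (name, per_osd))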

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
