Re: Ceph / Dell hardware recommendation

I guess I joined the mailing list at just the right time: I'm just starting to size out a Ceph cluster, and I was reading up on how best to spec the nodes.

You mention considering less dense nodes for the OSDs...

Assuming you used nodes with similar CPU, RAM, etc., at what point do you think you hit the 'sweet spot'? Would you go with 6 drives per OSD node? 4?

My first thought was some 4-bay servers (similar to what you described from a CPU/RAM standpoint), putting 3 x 4TB SATA drives in each plus one SSD for the journals. But then I was wondering if a higher drive count per chassis might be a better choice, hence my question above.
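
A rough sketch of how I was picturing the journal layout with ceph-deploy, in case that makes the question clearer - the hostnames and device names below are placeholders, not a tested config:

    # one data disk per OSD, journals on partitions of the shared SSD
    ceph-deploy osd create node1:sdb:/dev/sde1
    ceph-deploy osd create node1:sdc:/dev/sde2
    ceph-deploy osd create node1:sdd:/dev/sde3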

And then just to make it interesting...

Another thing I'm considering: I have 20 or so servers that handle various tasks but aren't heavily loaded. They are small 1U units, but each has one open SATA bay - I could just drop a drive into each one, make each an OSD node, and really spread things out. But is that better than building a Ceph-specific cluster? I don't have the faintest idea yet... has anybody out there compared these options? Any thoughts?
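
One thing that makes me think the spread-out option might be reasonable: as far as I understand it, the default CRUSH rule already places each replica on a different host, so 20 single-OSD hosts would give a very wide failure domain. Something like the stock replicated rule below (this is just from my reading of the docs, not from a running cluster):

    rule replicated_rule {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
    }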

Tom


On Jan 17, 2014, at 9:03 AM, Shain Miley <SMiley@xxxxxxx> wrote:

> Just an FYI... we have a Ceph cluster set up for archiving audio and video using the following Dell hardware:
> 
> 6 x Dell R720xd, 64 GB of RAM, for OSD nodes
> 72 x 4TB SAS drives as OSDs
> 3 x Dell R420, 32 GB of RAM, for MON/RADOSGW/MDS nodes
> 2 x Force10 S4810 switches
> 4 x 10 GigE LACP-bonded Intel cards
> 
> This provides us with about 260 TB of usable space. With rados bench we are able to get the following on some of the pools we tested:
> 
> 1 replica - 1175 MB/s
> 2 replicas - 850 MB/s
> 3 replicas - 625 MB/s
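> 
> (A typical invocation would be something along these lines - the pool name, duration and thread count here are illustrative, not necessarily what we used:
> 
>     rados bench -p testpool 60 write -t 16 --no-cleanup
>     rados bench -p testpool 60 seq -t 16
> )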
> 
> If we decide to build a second cluster in the future for rbd-backed VMs, we will either look into the new Ceph 'SSD tiering' options, or use somewhat less dense Dell nodes for the OSDs with SSDs for the journals, in order to maximize performance.
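> 
> (For anyone weighing the same tiering option: as I understand the upcoming feature, the basic setup would look roughly like this - pool names are placeholders and we have not actually run it yet:
> 
>     ceph osd tier add rbd rbd-cache
>     ceph osd tier cache-mode rbd-cache writeback
>     ceph osd tier set-overlay rbd rbd-cache
> )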
> 
> Shain
> 
> 
> Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
> 
> ________________________________________
> From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Lincoln Bryant [lincolnb@xxxxxxxxxxxx]
> Sent: Thursday, January 16, 2014 1:10 PM
> To: Cedric Lemarchand
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  Ceph / Dell hardware recommendation
> 
> For our ~400 TB Ceph deployment, we bought:
>        (2) R720s w/ dual X5660s and 96 GB of RAM
>        (1) 10Gb NIC (2 interfaces per card)
>        (4) MD1200s per machine
>        ...and a boat load of 4TB disks!
> 
> In retrospect, I almost certainly would have gotten more servers. During heavy writes we see the load spiking up to ~50 on Emperor, along with warnings about slow OSDs, but we are clearly on the extreme end with something like 60 OSDs per box :)
> 
> Cheers,
> Lincoln
> 
> On Jan 16, 2014, at 4:09 AM, Cedric Lemarchand wrote:
> 
>> 
>> On 16/01/2014 10:16, NEVEU Stephane wrote:
>>> Thank you all for comments,
>>> 
>>> So to sum up a bit, is it a reasonable compromise to buy:
>>> 2 x R720 with 2 x Intel E5-2660v2 (2.2GHz, 25M cache), 48 GB RAM, 2 x 146GB SAS 6Gbps 2.5-in 15K RPM hot-plug drives in the Flex Bay for the OS, 24 x 1.2TB SAS 6Gbps 2.5-in 10K RPM drives for OSDs (journal located on each OSD), and a PERC H710P integrated RAID controller with 1GB NV cache
>>> ?
>>> Or is it a better idea to buy 4 less powerful servers instead of 2?
>> I think you are facing the well-known trade-off between price, performance, and usable storage size.
>> 
>> More, less powerful servers will give you more compute power and better IOPS per usable TB, but will be more expensive. An extreme extrapolation of that would be to use one blade per TB => very powerful / very expensive.
>> 
>> 
>> The choice really depends on the workload you need to handle, which is not an easy thing to estimate.
>> 
>> Cheers
>> 
>> --
>> Cédric
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




