Thanks for the numbers, Shain. I'm new to Ceph and I definitely like the technology, but I'm not sure how to judge whether the transfer rates you mentioned should be considered "good". For example, assuming a single disk sustains barely 50 MB/s, the 1175 MB/s figure is merely the aggregate bandwidth of 24 disks. Since Ceph writes everything twice for journaling, I'm willing to accept that we're effectively utilizing 48 drives, but that is still only two thirds of the available 72-disk bandwidth. I'd like to better understand why we're seeing these numbers, and whether they are typical/good. Thanks!
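To make my reasoning concrete, here is the back-of-the-envelope as a quick Python sketch. The 50 MB/s per-disk rate is only my assumption, not something I've measured on this hardware:

PER_DISK_MB_S = 50      # assumed sustained write rate of one 4TB SAS drive (a guess, not measured)
TOTAL_DISKS = 72        # 6 OSD nodes x 12 drives
OBSERVED_MB_S = 1175    # rados bench result for the 1-replica pool

raw_aggregate = PER_DISK_MB_S * TOTAL_DISKS    # 3600 MB/s of raw spindle bandwidth
effective_ceiling = raw_aggregate / 2          # journals live on the data disks, so every byte is written twice

print(f"effective ceiling ~{effective_ceiling:.0f} MB/s")
print(f"observed {OBSERVED_MB_S} MB/s = {OBSERVED_MB_S / effective_ceiling:.0%} of that ceiling")
# -> ~1800 MB/s ceiling vs 1175 MB/s observed, i.e. roughly two thirds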
On Fri, Jan 17, 2014 at 7:03 PM, Shain Miley <SMiley@xxxxxxx> wrote:
Just an FYI...we have a Ceph cluster setup for archiving audio and video using the following Dell hardware:
6 x Dell R720xd, 64 GB of RAM, for OSD nodes
72 x 4TB SAS drives as OSDs
3 x Dell R420, 32 GB of RAM, for MON/RADOSGW/MDS nodes
2 x Force10 S4810 switches
4 x 10 GigE Intel cards, LACP bonded
This provides us with about 260 TB of usable space. With rados bench we are able to get the following on some of the pools we tested:
1 replica - 1175 MB/s
2 replicas - 850 MB/s
3 replicas - 625 MB/s
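As a rough way to read these numbers, here is a small Python sketch that backs out the implied per-spindle write rate, assuming each client byte hits disk (2 x replicas) times: once for the journal and once for the data, per replica. It is a rough model only and ignores network and CPU effects:

TOTAL_DISKS = 72
results_mb_s = {1: 1175, 2: 850, 3: 625}   # replicas -> rados bench write throughput

for replicas, client_mb_s in results_mb_s.items():
    disk_mb_s = client_mb_s * 2 * replicas        # journal + data write for every replica
    per_disk = disk_mb_s / TOTAL_DISKS
    print(f"{replicas} replica(s): ~{per_disk:.0f} MB/s written per spindle")
# -> roughly 33, 47 and 52 MB/s per spindle for 1, 2 and 3 replicas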
If we decide to build a second cluster in the future for RBD-backed VMs, we will either look into the new Ceph 'SSD tiering' options, or use somewhat less dense Dell OSD nodes with SSDs for the journals, in order to maximize performance.
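For the SSD-journal option, a rough per-OSD sizing sketch following the usual guideline of roughly twice the expected throughput times the filestore max sync interval; both input values below are assumptions, not measurements from our hardware:

expected_throughput_mb_s = 150       # assumed sustained write rate of one backing SAS spinner
filestore_max_sync_interval_s = 5    # Ceph's default sync interval; adjust if tuned
journal_mb = 2 * expected_throughput_mb_s * filestore_max_sync_interval_s
print(f"~{journal_mb} MB per OSD journal (in practice rounded up to 5-10 GB)")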
Shain
Shain Miley | Manager of Systems and Infrastructure, Digital Media | smiley@xxxxxxx | 202.513.3649
________________________________________
From: ceph-users-bounces@xxxxxxxxxxxxxx [ceph-users-bounces@xxxxxxxxxxxxxx] on behalf of Lincoln Bryant [lincolnb@xxxxxxxxxxxx]
Sent: Thursday, January 16, 2014 1:10 PM
To: Cedric Lemarchand
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Ceph / Dell hardware recommendation
For our ~400 TB Ceph deployment, we bought:
(2) R720s w/ dual X5660s and 96 GB of RAM
(1) 10Gb NIC (2 interfaces per card)
(4) MD1200s per machine
...and a boat load of 4TB disks!
In retrospect, I almost certainly would have gotten more servers. During heavy writes we see the load spiking up to ~50 on Emperor, along with warnings about slow OSDs, but we are clearly on the extreme end with something like 60 OSDs per box :)
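To put some numbers on "extreme", here is a quick sketch comparing one of our boxes against the commonly cited rules of thumb (~1 GB of RAM per TB of OSD data, and roughly 1 GHz of CPU per OSD daemon). The rules themselves are rough guidance, not hard requirements:

osds_per_box = 60
tb_per_osd = 4
ram_gb = 96
cores = 2 * 6        # dual X5660, 6 cores each (hyper-threading ignored)
core_ghz = 2.8

ram_suggested_gb = osds_per_box * tb_per_osd      # ~1 GB of RAM per TB of OSD data
ghz_suggested = osds_per_box * 1.0                # ~1 GHz of CPU per OSD daemon
print(f"RAM: {ram_gb} GB installed vs ~{ram_suggested_gb} GB suggested")
print(f"CPU: {cores * core_ghz:.0f} GHz installed vs ~{ghz_suggested:.0f} GHz suggested")
# -> 96 GB vs ~240 GB and ~34 GHz vs ~60 GHz, consistent with the strain we see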
Cheers,
Lincoln
On Jan 16, 2014, at 4:09 AM, Cedric Lemarchand wrote:
>
> On 16/01/2014 10:16, NEVEU Stephane wrote:
>> Thank you all for comments,
>>
>> So to sum up a bit, would it be a reasonable compromise to buy:
>> 2 x R720 with 2 x Intel E5-2660v2 (2.2 GHz, 25M cache), 48 GB RAM, 2 x 146 GB 15K RPM 2.5-in SAS 6Gbps hot-plug drives (Flex Bay) for the OS, 24 x 1.2 TB 10K RPM 2.5-in SAS 6Gbps drives for the OSDs (journal located on each OSD), and a PERC H710p integrated RAID controller with 1 GB NV cache?
>> Or is it a better idea to buy 4 less powerful servers instead of 2?
> I think you are facing the well-known trade-off between price, performance and usable storage size.
>
> More, less powerful servers will give you more compute power and better IOPS per usable TB, but it will be more expensive. Extrapolating from that, you would end up with one blade per TB => very powerful / very expensive.
>
>
> The choice really depends on the workload you need to handle, which is not an easy thing to estimate.
>
> Cheers
>
> --
> Cédric
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com