Re: Interested in Ceph, but have performance questions

We easily see line-rate sequential I/O from most disks.

I would say that 150GB/s with 40G networking and a minimum of 20 hosts is no problem.
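
A quick sanity check on those numbers (plain arithmetic, nothing Ceph-specific): 150GB/s spread across 20 hosts works out to more than one 40G link per host, so that floor presumably assumes multiple or bonded 40G NICs per node, or a somewhat larger host count.

    # Back-of-the-envelope check on the 150 GB/s / 20-host / 40G claim above
    aggregate_gbytes = 150                         # GB/s, aggregate claim
    hosts = 20                                     # minimum host count claimed
    per_host_gbits = aggregate_gbytes / hosts * 8
    print(per_host_gbits)                          # 60.0 Gb/s per host, i.e. more than one 40G link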

 


Tyler Bishop
Chief Technical Officer
513-299-7108 x10

Tyler.Bishop@xxxxxxxxxxxxxxxxx

If you are not the intended recipient of this transmission you are notified that disclosing, copying, distributing or taking any action in reliance on the contents of this information is strictly prohibited.

 



From: "Nick Fisk" <nick@xxxxxxxxxx>
To: "Gerald Spencer" <ger.spencer3@xxxxxxxxx>, ceph-users@xxxxxxxxxxxxxx
Sent: Thursday, September 29, 2016 11:04:45 AM
Subject: Re: Interested in Ceph, but have performance questions

Hi Gerald,

 

I would say it's definitely possible. I would make sure you invest in the networking so that you have enough bandwidth, and choose disks based on performance rather than capacity; either lots of lower-capacity disks or SSDs would be best. The biggest challenge may be around the client interface (i.e. block, object, file) and whether you can get it to create the parallelism required to drive the underlying RADOS cluster.
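
For reference, "creating the parallelism" from a client means keeping many operations in flight rather than writing one object at a time. A minimal sketch using the python-rados bindings (the conffile path, pool name, object size and queue depth here are placeholders, not recommendations):

    import rados

    # Connect using the standard config file (path is an assumption).
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('testpool')    # placeholder pool name

    data = b'\0' * (4 * 1024 * 1024)          # 4 MiB per object
    completions = []
    for i in range(64):                       # keep 64 writes in flight at once
        completions.append(ioctx.aio_write_full('obj-%d' % i, data))
    for c in completions:
        c.wait_for_complete()                 # block until every write lands

    ioctx.close()
    cluster.shutdown()

The same idea applies whichever interface you end up on (RBD, RGW, CephFS): aggregate throughput comes from many concurrent operations spread across the OSDs.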

 

With my 60-disk cluster I can max out a 10G NIC with both reads and writes. Ceph's performance increases with scale, so I don't see why those figures wouldn't be achievable with 40G networking.
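
Extrapolating from that data point (assuming roughly linear scaling; and since that cluster is apparently NIC-limited rather than disk-limited, the per-disk figure is a floor and the disk count an upper bound):

    # Rough scaling estimate from the 60-disk / 10G data point above
    nic_gbits, disks = 10, 60
    per_disk_mbytes = nic_gbits / 8 / disks * 1000   # ~20.8 MB/s per disk, at least
    target_gbits = 30                                # per-machine figure from the OP
    disks_needed = target_gbits / nic_gbits * disks  # ~180 disks per 30Gb/s stream, at most
    print(per_disk_mbytes, disks_needed)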

 

Nick

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Gerald Spencer
Sent: 29 September 2016 15:38
To: ceph-users@xxxxxxxxxxxxxx
Subject: Interested in Ceph, but have performance questions

 

Greetings new world of Ceph,

 

Long story short, at work we perform high-throughput volumetric imaging and create a decent chunk of data per machine. We are about to bring the next generation of our system online, and its I/O requirements will outpace our current storage solution (JBOD using ZFS on Linux). We are searching for a templateable scale-out solution that we can grow as we bring each new system online, starting in a few months. There are several quotes floating around from all of the big players, but the buy-in on hardware and software is unsettling, as it is a hefty chunk of change.

 

The performance we are currently estimating per machine is:

- simultaneous 30Gbps read and 30Gbps write

- 180 TB capacity (roughly a two-day buffer into a public cloud)
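
Reading those two figures together (simple arithmetic on the numbers above, assuming the buffer fills and drains over the same two-day window): offloading 180 TB to a public cloud in two days implies a sustained rate of only ~8Gb/s, so the 30Gbps figure is peak ingest rather than steady state.

    # What "roughly a two-day buffer" implies about the average data rate
    capacity_bytes = 180e12                  # 180 TB
    window_s = 2 * 24 * 3600                 # two days in seconds
    avg_gbits = capacity_bytes * 8 / window_s / 1e9
    print(avg_gbits)                         # ~8.3 Gb/s average vs 30 Gb/s peak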

 

 

So our question is: is this kind of performance possible using Ceph? I haven't found any benchmarks of this nature beyond one which claims 150GB/s. I think perhaps they meant 150Gb/s (150 1Gbps clients).

 

Cheers,

Gerald Spencer



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
