Re: Large storage nodes - best practices

On 8/5/2013 12:51 PM, Brian Candler wrote:
On 05/08/2013 17:15, Mike Dawson wrote:

Short answer: Ceph is generally used with multiple OSDs per node. One
OSD per storage drive with no RAID is the most common setup. At 24 or
36 drives per chassis, there are several potential bottlenecks to
consider.
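
To put rough numbers on that, here is a back-of-the-envelope sketch in
Python. Every figure in it is an assumption of mine for illustration
(drive speed, NIC, HBA), not a measurement from Mark's tests:

# Back-of-the-envelope bottleneck check for a dense OSD node.
# All figures below are illustrative assumptions, not benchmark results.

drives          = 36        # spinners in the chassis
mb_per_drive    = 120       # assumed sequential MB/s per 7200rpm disk
journal_penalty = 2         # FileStore writes each object to the journal, then the data disk

aggregate_disk_mb = drives * mb_per_drive                 # raw spindle bandwidth
effective_write   = aggregate_disk_mb / journal_penalty   # journals co-located on the spinners

nic_mb = 10 * 1000 / 8      # a single 10GbE link, ~1250 MB/s
hba_mb = 8 * 500            # e.g. an 8-lane PCIe 2.0 HBA, ~4000 MB/s

print("aggregate spindle write (journal on same disk): %d MB/s" % effective_write)
print("single 10GbE NIC:                               %d MB/s" % nic_mb)
print("assumed HBA/expander ceiling:                   %d MB/s" % hba_mb)
# Even the halved spindle figure (~2160 MB/s) exceeds one 10GbE link, so on a
# 36-drive box the network (plus replication traffic) is often the first wall.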

Mark Nelson, the Ceph performance guy at Inktank, has published
several articles you should consider reading. A few of interest are
[0], [1], and [2]. The last link is a 5-part series.

Yep, I saw [0] and [1]. He tries a 6-disk RAID0 array and generally gets
lower throughput than 6 x JBOD disks (although I think he's using the
controller RAID0 functionality, rather than mdraid).

AFAICS he has a 36-disk chassis but only runs tests with 6 disks, which
is a shame, as it would be nice to know which bottleneck you'd hit
first with this type of setup.

The third link I sent shows Mark's results with 24 spinners and 8 SSDs for journals. Specifically read:

http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-1-introduction-and-rados-bench/#setup

Florian Haas has also published some thoughts on bottlenecks:

http://www.hastexo.com/resources/hints-and-kinks/solid-state-drives-and-ceph-osd-journals
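
To make the journal point concrete, a quick sketch of why moving
FileStore journals off the spinners roughly doubles their write
throughput, and why the SSD:spinner ratio matters. The numbers are my
own assumptions, not Mark's or Florian's results:

# Sketch of the FileStore journal trade-off; all numbers are assumptions.

spinners       = 24
mb_per_spinner = 120     # assumed sequential MB/s per data disk
ssds           = 8
mb_per_ssd     = 400     # assumed sequential write MB/s per journal SSD

# Journal co-located on each spinner: every write lands twice on the same disk.
colocated_write = spinners * mb_per_spinner / 2

# Journals moved to SSDs: spinners only take the data write, but the SSDs
# must absorb the full journal stream for the spinners they front.
ssd_journal_cap  = ssds * mb_per_ssd
ssd_backed_write = min(spinners * mb_per_spinner, ssd_journal_cap)

print("journals on spinners: ~%d MB/s" % colocated_write)    # ~1440 MB/s
print("journals on SSDs:     ~%d MB/s" % ssd_backed_write)   # ~2880 MB/s
# At 3 spinners per SSD (24:8), each SSD has to sustain roughly
# 3 x 120 = 360 MB/s of journal writes, or it becomes the new bottleneck.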


Also, note that there is ongoing work to add erasure coding as an
optional backend (as opposed to the current replication scheme). If
you prioritize bulk storage over performance, you may be interested in
following the progress [3], [4], and [5].

[0]: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/

[1]: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/

[2]: http://ceph.com/performance-2/ceph-cuttlefish-vs-bobtail-part-1-introduction-and-rados-bench/

[3]: http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend

[4]: http://wiki.ceph.com/01Planning/02Blueprints/Dumpling/Erasure_encoding_as_a_storage_backend

[5]: http://www.inktank.com/about-inktank/roadmap/

Thank you - erasure coding is very much of interest for the
archival-type storage I'm looking at. However, your links [3] and [4]
are identical; did you mean to link to another one?

Oops.

http://wiki.ceph.com/01Planning/02Blueprints/Emperor/Erasure_coded_storage_backend_%28step_2%29
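
For the archival case, the appeal is mostly storage overhead. A quick
sketch comparing replication against a couple of hypothetical
erasure-coding profiles (the k/m values are just examples, not what the
backend will actually ship with):

# Storage overhead: replication vs. erasure coding (illustrative only).

def replication_overhead(copies):
    return float(copies)          # raw bytes stored per usable byte

def ec_overhead(k, m):
    return (k + m) / float(k)     # k data chunks + m coding chunks

print("3x replication:      %.2fx raw per usable byte" % replication_overhead(3))
print("EC k=8,  m=4 profile: %.2fx" % ec_overhead(8, 4))    # survives losing 4 chunks
print("EC k=10, m=3 profile: %.2fx" % ec_overhead(10, 3))   # survives losing 3 chunks
# 1.3-1.5x raw-to-usable instead of 3x is why erasure coding is attractive for
# bulk/archival pools, at the cost of extra CPU and reconstruction reads.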



Cheers,

Brian.




