Re: Ceph benchmarks

On 08/27/2012 03:47 PM, Sébastien Han wrote:
Hi community,


Hi!

For those of you who are interested, I performed several benchmarks of
RADOS and RBD on different types of hardware and use cases.
You can find my results here:
http://www.sebastien-han.fr/blog/2012/08/26/ceph-benchmarks/

Hope it helps :)

Feel free to comment, critique... :)

A couple of thoughts:

1) With so few OSDs, going from 1,000 to 10,000 PGs shouldn't make much of a difference. It would be concerning if it did!
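For reference, the usual rule of thumb from the Ceph documentation is on the order of 100 PGs per OSD divided by the replication factor, rounded up to a power of two. A quick sketch of that heuristic (the function name and example numbers here are illustrative, not from the benchmarks in the post):

```python
def suggested_pg_count(num_osds, replicas=3, pgs_per_osd=100):
    """Rough PG-count heuristic: ~100 PGs per OSD divided by the
    replication factor, rounded up to the next power of two."""
    target = (num_osds * pgs_per_osd) / replicas
    pgs = 1
    while pgs < target:
        pgs *= 2
    return pgs

# e.g. a small 6-OSD cluster with 3x replication:
print(suggested_pg_count(6))  # -> 256
```

With only a handful of OSDs, both 1,000 and 10,000 PGs are already far above this target, which is why the difference between them shouldn't matter much.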

2) Were the commodity results with SSDs using a replication factor of 3? Also, was that test run with the flusher on or off? I'd hope that with 15k drives you'd see somewhat better throughput with the journals on the SSDs.

3) It would be interesting to run these tests without the RAID 1 and see whether you can max out the bonded interface.

4) I think the R520 backplane uses SAS expanders like the ones in the R515s we have. We've had performance problems caused either by them or by something goofy going on with our H700 controllers.

5) rados bench tests with smaller request sizes could be interesting on the 15k drives. I typically see about 1-2MB/s per OSD for 4k requests with 7200rpm SATA disks.
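To put that figure in perspective, 1-2MB/s at a 4k request size is only a few hundred IOPS per OSD, which is about what you'd expect from a 7200rpm disk's seek latency. A quick back-of-the-envelope conversion (the helper name is just for illustration):

```python
def throughput_to_iops(mb_per_sec, request_bytes=4096):
    """Convert a throughput figure (MB/s) into IOPS for a
    given request size."""
    return mb_per_sec * 1024 * 1024 / request_bytes

# 1-2 MB/s per OSD at 4k requests:
print(throughput_to_iops(1))  # -> 256.0
print(throughput_to_iops(2))  # -> 512.0
```

15k SAS drives should cut rotational and seek latency roughly in half versus 7200rpm SATA, so one would hope to see a correspondingly higher small-request number.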

Mark


Cheers!
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

