Re: Ceph read & write performance benchmark

Hello Jiangang,

Thank you for the links; they are very helpful. I am wondering whether your Ceph tuning configuration is safe for a production environment.

Thanks

-- 
Howie C.

On Thursday, December 12, 2013 at 11:07 PM, jiangang duan wrote:



On Thu, Dec 12, 2013 at 4:22 PM, Wido den Hollander <wido@xxxxxxxx> wrote:
On 12/11/2013 09:13 PM, German Anders wrote:
Hi to all,

       I'm new to Ceph and I want to create a cluster for production
with HP ProLiant DL380p Gen8 servers. The idea is to use 4 of these
servers as OSDs, and 3 x HP ProLiant DL320e Gen8 servers for MON. The
data network would be on 10GbE switches, with management on 1Gb. Below
is the description of each of the servers:


In this case I would run the monitors on the same machines, since a DL320 is very overpowered for a monitor in this cluster setup.

*HP ProLiant DL380p Gen8*:
2 x Intel Xeon E5-2630v2 @ 2.6GHz (6 cores)
2 x 64GB RAM
2 x 450GB SAS 15K in RAID-1 configuration for the OS
2 x 100GB SSD in RAID-1 configuration for the Journals

I wouldn't use RAID-1. Short version: SSDs rarely fail, and when they do it is usually because they have worn out. In RAID-1 both SSDs receive identical writes, so they wear at the same rate and will fail at the same moment.

You'd be better off using one SSD per 4 OSDs; that gives you better performance and reliability.
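For example, a minimal ceph.conf sketch of that layout (device names and partition numbers here are placeholders, not a tested layout):

    # One SSD (/dev/sdb in this example) carved into 4 journal
    # partitions, one per OSD on this host.
    [osd.0]
    osd journal = /dev/sdb1
    [osd.1]
    osd journal = /dev/sdb2
    [osd.2]
    osd journal = /dev/sdb3
    [osd.3]
    osd journal = /dev/sdb4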

8 x 4TB SATA 7.2K to use as 8 x OSDs (32TB raw)
1 x HP Ethernet 10GbE 2-port 530SFP+ Adapter
1 x HP Ethernet 1Gb 2-port 332T Adapter

*HP ProLiant DL320e Gen8*:
1 x Intel Xeon E3-1240v2 @ 3.4GHz (4 cores)
1 x 32GB RAM

Way too much memory for a monitor. 4GB ~ 8GB is more than enough.

2 x 450GB SAS 15K in RAID-1 configuration for the OS
2 x 1.2TB SAS 10K for Logs
1 x HP Ethernet 10GbE 2-port 530SFP+ Adapter

10Gbit isn't required, but that's up to you.

1 x HP Ethernet 1Gb 2-port 332T Adapter


I want to know if anyone has run a more or less similar configuration,
and what performance numbers (benchmarks) they saw for reads and
writes; iozone or bonnie++ output with several processes (1..10) and
different block sizes would also be useful.
I'd also welcome any recommendations or tips on tuning the
configuration for performance. The filesystem to be used is XFS.


I assume you are going for 3x replication, so for writes you'll get about 1/3 of the aggregate I/O performance of all the disks.

A 7200RPM disk is capable of about 100 IOPS, so that's the figure to calculate with.
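For the cluster above, that gives a rough back-of-the-envelope upper bound (real numbers will come in lower once journal and filesystem overhead bite):

    4 servers x 8 disks x 100 IOPS = 3200 IOPS raw
    3200 / 3 (replicas)            ~ 1065 write IOPS cluster-wide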

Ceph performance is very complex, so one bonnie++ or iozone benchmark won't reflect the performance of another Ceph setup.
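If you want a Ceph-level number rather than a filesystem benchmark, rados bench against a scratch pool is a common starting point; a sketch (pool name, runtime and parameters are just examples):

    # 60s of 4MB writes with 16 concurrent ops; keep the objects
    # around so the read test below has something to read
    rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup
    # sequential reads of those objects
    rados bench -p testpool 60 seq -t 16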

Wido

I really appreciated the help.

Thanks in advance,

Best regards,

*German Anders*





--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
