Re: ceph cluster performance

Mark,

The SSDs are http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/ssd/enterprise-sata-ssd/?sku=ST240FN0021 and the HDDs are http://www.seagate.com/internal-hard-drives/enterprise-hard-drives/hdd/constellation/?sku=ST91000640SS. 

The chassis is a "SiliconMechanics C602" - but I don't have the exact model. It's based on a Supermicro design, with 24 drive slots in front, 2 in the back, and a SAS expander. 

I did a fio test (raw partitions, 4M block size, iodepth maxed out according to what the driver reports in dmesg).
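For reference, the invocation was along these lines (a reconstructed sketch rather than the exact command; the iodepth value and /dev/sd* names are placeholders, with one --name/--filename pair per drive under test):

# fio --ioengine=libaio --direct=1 --rw=write --bs=4M --iodepth=64 --runtime=60 \
     --name=sda --filename=/dev/sda --name=sdb --filename=/dev/sdb

The random pass used --rw=randwrite instead. Here are the results (filtered):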

Sequential: 
Run status group 0 (all jobs):
  WRITE: io=176952MB, aggrb=2879.0MB/s, minb=106306KB/s, maxb=191165KB/s, mint=60444msec, maxt=61463msec

Individually, the HDDs ranged from 103 to 109 MB/s, while the SSDs ranged from 153 to 189 MB/s. 

Random: 
Run status group 0 (all jobs):
  WRITE: io=106868MB, aggrb=1727.2MB/s, minb=67674KB/s, maxb=106493KB/s, mint=60404msec, maxt=61875msec

Individually, the HDDs ranged from 71 to 73 MB/s and the SSDs from 68 to 101 MB/s (with only one of the 6 SSDs reaching 101).

This was on just one of the OSD servers.

Thanks,
Dinu


On Oct 30, 2013, at 6:38 PM, Mark Nelson <mark.nelson@xxxxxxxxxxx> wrote:

> On 10/30/2013 09:05 AM, Dinu Vlad wrote:
>> Hello,
>> 
>> I've been doing some tests on a newly installed ceph cluster:
>> 
>> # ceph osd pool create bench1 2048 2048
>> # ceph osd pool create bench2 2048 2048
>> # rbd -p bench1 create test
>> # rbd -p bench1 bench-write test --io-pattern rand
>> elapsed:   483  ops:   396579  ops/sec:   820.23  bytes/sec: 2220781.36
>> 
>> # rados -p bench2 bench 300 write --show-time
>> # (run 1)
>> Total writes made:      20665
>> Write size:             4194304
>> Bandwidth (MB/sec):     274.923
>> 
>> Stddev Bandwidth:       96.3316
>> Max bandwidth (MB/sec): 748
>> Min bandwidth (MB/sec): 0
>> Average Latency:        0.23273
>> Stddev Latency:         0.262043
>> Max latency:            1.69475
>> Min latency:            0.057293
>> 
>> These results seem to be quite poor for the configuration:
>> 
>> MON: dual-CPU Xeon E5-2407 2.2 GHz, 48 GB RAM, 2x SSD for the OS
>> OSD: dual-CPU Xeon E5-2620 2.0 GHz, 64 GB RAM, 2x SSD for the OS (on-board controller), 18x 1 TB 7.2K RPM SAS HDDs as OSD drives and 6 SATA SSDs for journals, attached to an LSI 9207-8i controller.
>> All servers have dual 10GbE network cards, connected to a pair of dedicated switches. Each SSD has three 10 GB partitions for journals.
> 
> Agreed, you should see much higher throughput with that kind of storage setup.  What brand/model SSDs are these?  Also, what brand and model of chassis?  With 24 drives and 8 SSDs I could push 2GB/s (no replication though) with a couple of concurrent rados bench processes going on our SC847A chassis, so ~550MB/s aggregate throughput for 18 drives and 6 SSDs is definitely on the low side.
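> For reference, driving a couple of them in parallel is as simple as something along these lines (a sketch; the pool names, runtime, and -t concurrency are just examples):
> 
> # rados -p bench1 bench 60 write -t 32 &
> # rados -p bench2 bench 60 write -t 32 &
> # wait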
> 
> I'm actually not too familiar with what the RBD benchmarking commands are doing behind the scenes.  Typically I've tested fio on top of a filesystem on RBD.
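> For example, something along these lines (a sketch; the pool/image names reuse yours, and the /dev/rbd0 device node is an assumption):
> 
> # rbd map bench1/test
> # mkfs.xfs /dev/rbd0
> # mount /dev/rbd0 /mnt
> # fio --name=rbdtest --directory=/mnt --size=4G --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=32 --runtime=60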
> 
>> 
>> Using Ubuntu 13.04, Ceph 0.67.4, and XFS for the backend storage. The cluster was installed with ceph-deploy; ceph.conf is pretty much out of the box (the diff from defaults follows):
>> 
>> osd_journal_size = 10240
>> osd mount options xfs = "rw,noatime,nobarrier,inode64"
>> osd mkfs options xfs = "-f -i size=2048"
>> 
>> [osd]
>> public network = 10.4.0.0/24
>> cluster network = 10.254.254.0/24
>> 
>> All tests were run from a server outside the cluster, connected to the storage network with 2x 10GbE NICs.
>> 
>> I've done a few other tests of the individual components:
>> - network: avg. 7.6 Gbit/s (iperf, mtu=1500), 9.6 Gbit/s (mtu=9000)
>> - md raid0 write across all 18 HDDs - 1.4 GB/s sustained throughput
>> - fio SSD write (xfs, 4k blocks, directio): ~ 250 MB/s, ~55K IOPS
> 
> What you might want to try is doing 4M direct IO writes using libaio and a high iodepth to all drives (spinning disks and SSDs) concurrently, and see what both the per-drive and aggregate throughput look like.
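> Something like this fio job file is what I have in mind (a sketch; the /dev/sd* names are placeholders, and you'd add one section per HDD and SSD):
> 
> [global]
> ioengine=libaio
> direct=1
> rw=write
> bs=4M
> iodepth=64
> runtime=60
> 
> [sdb]
> filename=/dev/sdb
> 
> [sdc]
> filename=/dev/sdc
> 
> ; ...and so on, one section per drive
> 
> Run it without group_reporting so fio prints per-job (i.e. per-drive) numbers in addition to the aggregate line; that makes it easy to spot a slow drive or an oversubscribed expander link.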
> 
> With just SSDs, I've been able to push the 9207-8i up to around 3GB/s with Ceph writes (1.5GB/s if you don't count journal writes), but perhaps there is something interesting about the way the hardware is set up on your system.
> 
>> 
>> I'd appreciate any suggestions that might help improve performance or identify a bottleneck.
>> 
>> Thanks
>> Dinu
>> 
>> 
>> 
>> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



