Ceph luminous - throughput performance issue

Hi,

Is there anyone using Dell servers with PERC controllers willing to provide advice on configuring them for good throughput performance?

I have 3 servers, each with 1 SSD and 3 HDDs.
All drives are enterprise grade:

                Connector          : 00<Internal><Encl Pos 1 >: Slot 0
                Vendor Id          : TOSHIBA
                Product Id         : PX04SHB040
                State              : Online
                Disk Type          : SAS,Solid State Device
                Capacity           : 372.0 GB
                Power State        : Active

                Connector          : 00<Internal><Encl Pos 1 >: Slot 1
                Vendor Id          : TOSHIBA
                Product Id         : AL13SEB600
                State              : Online
                Disk Type          : SAS,Hard Disk Device
                Capacity           : 558.375 GB
                Power State        : Active


Created the OSDs with separate WAL (1 GB) and DB (15 GB) partitions on the SSD.
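
For reference, a luminous BlueStore OSD with this layout can be created with ceph-volume roughly as below (device names are placeholders: /dev/sdb for one of the HDDs, /dev/sda1 and /dev/sda2 for the WAL and DB partitions on the SSD; ceph-disk prepare --bluestore offers equivalent options):

    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.wal /dev/sda1 \
        --block.db /dev/sda2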

rados bench results are abysmal.

The interesting part is that testing the drives directly with fio is also pretty bad, which is why I suspect the controller configuration might be the culprit.

See the results below for the various configurations.

Commands used:

 megacli -LDInfo -LALL -a0

fio --filename=/dev/sd[a-b]  --direct=1 --sync=1 --rw=write --bs=4k --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test



SSD drive
Current Cache Policy: WriteThrough, ReadAheadNone, Cached, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/125.2MB/0KB /s] [0/32.5K/0 iops] [eta 00m:00s]

Current Cache Policy: WriteThrough, ReadAheadNone, Direct, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/224.8MB/0KB /s] [0/57.6K/0 iops] [eta 00m:00s]



HDD drive

Current Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/77684KB/0KB /s] [0/19.5K/0 iops] [eta 00m:00s]


Current Cache Policy: WriteBack, ReadAdaptive, Cached, No Write Cache if Bad BBU
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/89036KB/0KB /s] [0/22.3K/0 iops] [eta 00m:00s]
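
In case it helps with reproducing, cache policies like the ones above can be switched per logical drive with megacli -LDSetProp, roughly as follows (the -L/-a targets are placeholders for the logical drive and adapter, and flag spelling can vary slightly between MegaCli versions):

    megacli -LDSetProp WB -L1 -a0        # write back
    megacli -LDSetProp WT -L1 -a0        # write through
    megacli -LDSetProp Cached -L1 -a0    # cached I/O
    megacli -LDSetProp Direct -L1 -a0    # direct I/O
    megacli -LDSetProp NORA -L1 -a0      # disable read-ahead
    megacli -LDInfo -LALL -a0            # confirm the resulting policy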

rados bench -p rbd 120 write -t 64 -b 4096 --no-cleanup && rados bench -p rbd 120 -t 64 seq

Total time run:         120.009091
Total writes made:      630542
Write size:             4096
Object size:            4096
Bandwidth (MB/sec):     20.5239
Stddev Bandwidth:       2.43418
Max bandwidth (MB/sec): 37.0391
Min bandwidth (MB/sec): 15.9336
Average IOPS:           5254
Stddev IOPS:            623
Max IOPS:               9482
Min IOPS:               4079
Average Latency(s):     0.0121797
Stddev Latency(s):      0.0208528
Max latency(s):         0.428262
Min latency(s):         0.000859286


Total time run:       88.954502
Total reads made:     630542
Read size:            4096
Object size:          4096
Bandwidth (MB/sec):   27.6889
Average IOPS:         7088
Stddev IOPS:          1701
Max IOPS:             8923
Min IOPS:             1413
Average Latency(s):   0.00901481
Max latency(s):       0.946848
Min latency(s):       0.000286236
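
For comparison, with -b 4096 the run above mostly measures small-write IOPS; the equivalent large-object run (rados bench defaults to 4 MB objects, which is closer to a pure throughput test) would be:

    rados bench -p rbd 120 write -t 64 --no-cleanup && rados bench -p rbd 120 -t 64 seq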

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
