SSD randwrite performance

Hello!

I have a cluster with 5 SSD drives as OSDs, backed by SSD journals, one journal
per OSD. There is one OSD per node.

The data drives are Samsung 850 EVO 1TB and the journals are Samsung 850 EVO
250GB; the journal partition is 24GB and the data partition is 790GB. The OSD
nodes are connected with 2x10Gbps Linux bonding for the data/cluster network.
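
For context, the journal layout was set up roughly along these lines; the
device names and exact commands are illustrative, not my actual deployment
notes:

    # ceph.conf on the OSD nodes: 24 GB journal, value is in MB
    [osd]
    osd journal size = 24576

    # per node: data SSD plus journal SSD (ceph-disk carves the
    # journal partition out of the second device)
    ceph-disk prepare /dev/sdb /dev/sdc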

When doing random writes with 4k blocks (direct=1, buffered=0,
iodepth=32..1024, ioengine=libaio) from a Nova QEMU virtualization host, I can
get no more than 9 kIOPS. Random reads are about 13-15 kIOPS.
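
For reference, the fio job looks roughly like this; randwrite.fio, the target
path, and the environment variables are illustrative, not my exact file:

    ; randwrite.fio -- 4k random writes through libaio, O_DIRECT
    [randwrite]
    ioengine=libaio
    direct=1
    buffered=0
    rw=randwrite
    bs=4k
    iodepth=${IODEPTH}
    filename=${FIO_TARGET}
    runtime=60
    time_based
    numjobs=1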

The trouble is that randwrite does not depend on iodepth at all: sequential
read and write can reach up to 140 kIOPS and randread up to 15 kIOPS, but
randwrite is always 2-9 kIOPS.
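
The iodepth sweep is just a shell loop over the same job, e.g. (the target
device is an example):

    for qd in 32 64 128 256 512 1024; do
        FIO_TARGET=/dev/vdb IODEPTH=$qd fio randwrite.fio
    done

Results are flat across the whole range.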

The Ceph cluster is a mix of Jewel and Hammer and is currently being upgraded
to Jewel. On Hammer I got the same results.

All journal drives can do up to 32 kIOPS with the same fio config.
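
That was the same job file pointed at the raw journal partition, roughly
(the partition path is an example; this is destructive to the journal):

    FIO_TARGET=/dev/sdc2 IODEPTH=32 fio randwrite.fio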

I am confused, because EMC ScaleIO can do many more IOPS, and that is bothering
my boss :)

-- 
WBR, Max A. Krasilnikov
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


