Low IO with enterprise SSDs on Ceph Luminous - can we expect more?


Hi,


we're experimenting with Ceph but are not happy with the IOPS we're getting.


3-node Ceph/Proxmox cluster, each node with:


LSI SAS 3008 HBA controller

4 x Samsung MZILT960HAHQ/007 SSDs

Transport protocol:   SAS (SPL-3)

40G fibre Intel 520 network controller, connected to a UniFi switch

Average ping round trip to a partner node is 0.040 ms.
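(Measured with plain ICMP ping between the nodes; the hostname below is just a placeholder:)

ping -c 100 node2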




Inside a virtual machine, fio with the following parameters


fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75


reports on average 5,000 IOPS write and 13,000 IOPS read.
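For comparison, the same job could be pointed at one raw SSD outside of Ceph to get a device baseline (a sketch; /dev/sdX is a placeholder for an unused disk, and random writes to a raw device are destructive):

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=baseline --filename=/dev/sdX --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75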



We were expecting more. :( Any ideas, or is that all we can expect?


Money is not a problem for this test bed; any ideas on how to gain more IOPS are greatly appreciated.
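For cross-checking below the VM layer, the cluster itself could also be benchmarked with rados bench (a sketch; "testpool" is a placeholder pool name, and --no-cleanup keeps the written objects so the random-read pass has data to hit):

rados -p testpool bench 60 write -b 4096 -t 64 --no-cleanup
rados -p testpool bench 60 rand -t 64
rados -p testpool cleanup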


Thank you.


Stefan

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
