Hi Stefan,

On 13.01.20 at 17:09, Stefan Bauer wrote:
> Hi,
>
> we're playing around with Ceph but are not quite happy with the IOPS.
>
> 3-node Ceph / Proxmox cluster, each node with:
>
> LSI HBA 3008 controller
> 4 x Samsung MZILT960HAHQ/007 SSDs
> Transport protocol: SAS (SPL-3)
> 40G fibre Intel 520 network controller on a Unifi switch
>
> Ping round trip to the partner node is 0.040 ms on average.
>
> fio on a virtual machine with
>
> --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test
> --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw
> --rwmixread=75
>
> reports on average 5000 write IOPS and 13000 read IOPS.
>
> We're expecting more. :( Any ideas, or is that all we can expect?
>
> Money is *not* a problem for this test bed; any idea on how to gain
> more IOPS is greatly appreciated.
>
> Thank you.
>
> Stefan

This has something to do with the firmware and how the manufacturer
handles syncs / flushes. Intel simply ignores sync / flush commands on
drives that have a capacitor; Samsung does not. The problem is that
Ceph sends a lot of flush commands, which slows down drives that honor
them. You can make Linux ignore those userspace flush requests with the
following command:

echo "temporary write through" > /sys/block/sdX/device/scsi_disk/*/cache_type

(See the P.S. at the end of this mail for a sketch of applying this to
all drives at boot.)

Greets,
Stefan Priebe
Profihost AG
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
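P.S.: Note the "temporary" prefix: the setting lives only in the kernel
and is lost on reboot, so it has to be reapplied on every boot. A
minimal sketch for doing that on all OSD drives at once, assuming your
four Samsung SSDs show up as sdb through sde (the device names are just
an example, adjust them for your hosts):

#!/bin/bash
# Must run as root. Tells the kernel to treat the disk cache as
# write-through and to drop userspace flush requests instead of passing
# them to the drive. The "temporary" prefix keeps the change in the
# kernel only; nothing is written to the drive itself.
for dev in sdb sdc sdd sde; do
    # the scsi_disk/* glob expands to exactly one entry per disk,
    # so the redirect is unambiguous
    echo "temporary write through" > /sys/block/"$dev"/device/scsi_disk/*/cache_type
done

# verify the current setting on all four drives
grep . /sys/block/sd[b-e]/device/scsi_disk/*/cache_type

You could hook this into a systemd oneshot unit or a udev rule so it is
applied before the OSDs start. And only do this on drives that really
have power-loss protection; on drives without it you risk losing data
on power failure.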