Thank you for your reply, answers below.

On 23 Jun 2015, at 13:15, Christian Balzer <chibi@xxxxxxx> wrote:

0.67.12 dumpling (newest git). I know it's ancient :-)

> CFQ with what kernel, with what filesystem, on what type of OSD (HDD, HDD …

My test was done on the block device, not on a filesystem, on an SSD. I tested several scenarios, but the simplest one is to run

fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting --name=test --ioengine=aio

and

fio --filename=/dev/sda --direct=1 --sync=1 --rw=randread --bs=32k --numjobs=1 --iodepth=8 --runtime=60 --time_based --group_reporting --name=test --ioengine=aio

You will see the first fio's IOPS drop to ~10. This of course depends on the drive, and it also saturates the SATA 2 link I have on my test machine (which might be the real cause). I am still testing various combinations; different drives have different thresholds (some fall to the bottom only with a 128k block size, which is larger than my average IO on the drives - not accounting for backfills). There's a point, though, where it just hits the bottom and no amount of cfq-tuning magic can help.

>> If I run a fio job for synchronous writes and at the same time run a fio
> Setting the priority to what (level and type) on which process?

ionice -c3 fio-for-read-test - this sets the class to idle. Setting the priority to 7 but leaving it on best-effort helps, but not much (10 x 30 IOPS).
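To be concrete, the two variants I mean look roughly like this, using the randread job above as the read test (illustrative only, not a paste of my exact test script):

# class idle: the read job only gets disk time when the device is otherwise idle
ionice -c3 fio --filename=/dev/sda --direct=1 --sync=1 --rw=randread --bs=32k --numjobs=1 --iodepth=8 --runtime=60 --time_based --group_reporting --name=test --ioengine=aio

# class best-effort, lowest priority (7)
ionice -c2 -n7 fio --filename=/dev/sda --direct=1 --sync=1 --rw=randread --bs=32k --numjobs=1 --iodepth=8 --runtime=60 --time_based --group_reporting --name=test --ioengine=aio

(The write-side fio job runs without ionice at the same time. Note the io classes only take effect when the device is actually on CFQ.)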
Thanks Jan