Re: All SSD Pool - Odd Performance

Hi,
I have redone the test in a cleaner way.

Same pool, same VM, different hosts (qemu 2.4 and qemu 2.2) but the same hardware.
Note that each configuration was run only once!

The biggest difference is due to the cache settings:

qemu2.4 cache=writethrough  iops=3823 bw=15294KB/s
qemu2.4 cache=writeback  iops=8837 bw=35348KB/s
qemu2.2 cache=writethrough  iops=2996 bw=11988KB/s
qemu2.2 cache=writeback  iops=7980 bw=31921KB/s
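
For reference, the cache mode is selected per disk on the qemu command line. A minimal sketch (the rbd pool/image name and drive layout are placeholders, not from the tests above):

```shell
# Hypothetical invocation; adjust pool/image and device options to your setup.
qemu-system-x86_64 \
  -drive file=rbd:rbd/vm-disk-1,format=raw,if=virtio,cache=writeback
```

Note that cache=writeback enables the RBD client cache, which explains the large iops difference versus cache=writethrough; it is only safe if the guest flushes correctly.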

iothread doesn't change anything here, because only one disk is used.
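For a multi-disk VM, where iothread could matter, an iothread can be attached to a virtio-blk device roughly like this (all IDs and the disk image are hypothetical):

```shell
# Sketch only; object/drive IDs and file paths are placeholders.
qemu-system-x86_64 \
  -object iothread,id=iothread0 \
  -drive file=disk.img,format=raw,if=none,id=drive0 \
  -device virtio-blk-pci,drive=drive0,iothread=iothread0
```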

Test:
fio --time_based --name=benchmark --size=4G --filename=test.bin --ioengine=libaio --randrepeat=0 --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
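Given the --filename mistake in the earlier run (the file accidentally landed on the root filesystem instead of ceph), a quick sanity check before starting fio is worthwhile. A sketch, assuming the intended mount point is /mnt:

```shell
# Confirm the benchmark directory really sits on the filesystem you
# mean to measure, not on the host's root filesystem.
stat -f -c %T /mnt    # prints the filesystem type of /mnt
df -T /mnt            # shows the backing device and type for /mnt
```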


Udo

On 22.11.2015 23:59, Udo Lembke wrote:
Hi Zoltan,
you are right (but these were two running systems...).

I also see a big mistake: "--filename=/mnt/test.bin" (I simply copied and pasted it without thinking too much :-( )
The root filesystem is not on ceph (on both servers).
So my measurements are not valid!!

I will redo the measurements cleanly tomorrow.


Udo


On 22.11.2015 14:29, Zoltan Arnold Nagy wrote:
It would have been more interesting if you had tweaked only one option, as now we can't be sure which change had what impact... :-)


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
