Re: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test

Hi,

Thank you, that makes sense for testing, but I'm afraid it is not the case here.
Even if I test on a volume that has already been tested many times, the IOPS do not go
back up. Yes, I mean this VM is broken: the IOPS of the VM never recover.

Thanks!


hzwulibin@xxxxxxxxx
 
From: Chen, Xiaoxi
Date: 2015-11-02 14:11
To: hzwulibin; ceph-devel; ceph-users
Subject: RE: [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test
Pre-allocate the volume by writing across the entire RBD with "dd" before you do any performance test :).

In this case, you may want to re-create the RBD, pre-allocate it, and try again.
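
A minimal sketch of that pre-allocation step, assuming the volume shows up inside the VM
as /dev/vdb (the device name and block size are only examples):

# write once across the whole device so every backing RADOS object gets allocated
# (dd exits with a "no space left on device" message once it reaches the end of the
#  device, which is expected here)
dd if=/dev/zero of=/dev/vdb bs=4M oflag=direct
sync

After that, fio measures the steady-state performance of a fully allocated image rather
than the different behaviour you see while objects are still being created, so results
between runs should be comparable.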
 
> -----Original Message-----
> From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-
> owner@xxxxxxxxxxxxxxx] On Behalf Of hzwulibin
> Sent: Monday, November 2, 2015 1:24 PM
> To: ceph-devel; ceph-users
> Subject: [performance] why rbd_aio_write latency increase from 4ms to
> 7.3ms after the same test
>
> Hi,
> Same environment: after running a test script, the IO latency (taken from "sudo ceph
> --admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump") increased from about
> 4ms to 7.3ms.
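>
> A minimal sketch of pulling the average write latency out of that dump, assuming the
> librbd section of the perf dump exposes a "wr_latency" counter with "avgcount"/"sum"
> fields and that only one guest socket matches the glob (check the field names against
> your own output):
>
> sudo ceph --admin-daemon /run/ceph/guests/ceph-client.*.asok perf dump > /tmp/perf.json
> python -c 'import json; d = json.load(open("/tmp/perf.json")); w = [v for k, v in d.items() if k.startswith("librbd")][0]["wr_latency"]; print(1000.0 * w["sum"] / max(w["avgcount"], 1))'
>
> This prints the average write latency in milliseconds accumulated since the librbd
> client started.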
>
> qemu version: debian 2.1.2
> kernel:3.10.45-openstack-amd64
> system: debian 7.8
> ceph: 0.94.5
> VM CPU number: 4  (cpu MHz : 2599.998)
> VM memory size: 16GB
> 9 OSD storage servers, each with 4 SSD OSDs, 36 OSDs in total.
>
> Test scripts in VM:
> # cat reproduce.sh
> #!/bin/bash
>
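> # run $times fio passes, alternating between /dev/vdb and /dev/vdc
> # (randwrite, 180s each per the configs below), then finish with one pass on /dev/vde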
> times=20
> for((i=1;i<=$times;i++))
> do
>     tmpdate=`date "+%F-%T"`
>     echo "=======================$tmpdate($i/$times)======================="
>     tmp=$((i%2))
>     if [[ $tmp -eq 0 ]];then
>         echo "############### fio /root/vdb.cfg ###############"
>         fio /root/vdb.cfg
>     else
>         echo "############### fio /root/vdc.cfg ###############"
>         fio /root/vdc.cfg
>     fi
> done
>
>
> tmpdate=`date "+%F-%T"`
> echo "############### [$tmpdate] fio /root/vde.cfg ###############"
> fio /root/vde.cfg
>
>
> # cat vdb.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
>
> [disk01]
> filename=/dev/vdb
>
>
> # cat vdc.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
>
> [disk01]
> filename=/dev/vdc
>
> # cat vdd.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
>
> [disk01]
> filename=/dev/vdd
>
> # cat vde.cfg
> [global]
> rw=randwrite
> direct=1
> numjobs=64
> ioengine=sync
> bsrange=4k-4k
> runtime=180
> group_reporting
>
> [disk01]
> filename=/dev/vde
>
> After running the reproduce.sh script, the IOPS of the disks in the VM dropped from
> 12k to 5k and the latency increased from 4ms to 7.3ms.
>
> Run steps:
> 1. create a VM
> 2. create four volumes and attach them to the VM
> 3. sh reproduce.sh
> 4. while reproduce.sh is running, run "fio vdd.cfg" or "fio vde.cfg" to check the
>    performance
>
> After reproduce.sh finished, the performance stayed down.
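>
> A hedged way to double-check how much of an image was actually allocated by such a run
> (the pool name "volumes" and image name "vol-vdb" below are hypothetical, substitute
> your own, and <block_name_prefix> comes from the rbd info output):
>
> rbd -p volumes info vol-vdb                               # note the block_name_prefix line
> rados -p volumes ls | grep <block_name_prefix> | wc -l    # count of backing objects that exist
>
> Comparing that count before and after reproduce.sh shows whether the slowdown lines up
> with the image becoming fully allocated.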
>
>
> Has anyone hit the same problem, or does anyone have ideas about it?
>
> Thanks!
> --------------
> hzwulibin
> 2015-11-02
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
