Re: RBD read-ahead didn't improve 4K read performance

Hi,
This might be of interest to you:

[Qemu-devel] [RFC PATCH 3/3] virtio-blk: introduce multiread
https://www.mail-archive.com/qemu-devel@xxxxxxxxxx/msg268718.html


Currently virtio-blk doesn't support merging requests on reads (I think virtio-scsi already does this).


With that patch, sequential 4K IOs get aggregated, so fewer, bigger IOs go to Ceph.

So performance should improve.
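
As a quick check, the fio disk stats quoted below show merge=0 on the read side, which is consistent with this. To watch merging live inside the guest while fio runs, something like the following should work (the device name sda is taken from the stats below; adjust as needed):

# rrqm/s = read requests merged per second; with virtio-blk today it should stay near 0 here
iostat -x 1 /dev/sda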




----- Mail original ----- 

De: "duan xufeng" <duan.xufeng@xxxxxxxxxx> 
À: "Alexandre DERUMIER" <aderumier@xxxxxxxxx> 
Cc: "ceph-users" <ceph-users@xxxxxxxx>, "si dawei" <si.dawei@xxxxxxxxxx> 
Envoyé: Vendredi 21 Novembre 2014 09:21:49 
Objet: 答复: Re:  RBD read-ahead didn't improve 4K read performance 


Hi, 

I tested in the VM with fio; here is the config: 

[global] 
direct=1 
ioengine=aio 
iodepth=1 

[sequence read 4K] 
rw=read 
bs=4K 
size=1024m 
directory=/mnt 
filename=test 


sequence read 4K: (g=0): rw=read, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=1 
fio-2.1.3 
Starting 1 process 
sequence read 4K: Laying out IO file(s) (1 file(s) / 1024MB) 
^Cbs: 1 (f=1): [R] [18.0% done] [1994KB/0KB/0KB /s] [498/0/0 iops] [eta 07m:14s] 
fio: terminating on signal 2 

sequence read 4K: (groupid=0, jobs=1): err= 0: pid=1156: Fri Nov 21 12:32:53 2014 
read : io=187408KB, bw=1984.1KB/s, iops=496, runt= 94417msec 
slat (usec): min=22, max=878, avg=48.36, stdev=22.63 
clat (usec): min=1335, max=17618, avg=1956.45, stdev=247.26 
lat (usec): min=1371, max=17680, avg=2006.97, stdev=248.47 
clat percentiles (usec): 
| 1.00th=[ 1560], 5.00th=[ 1640], 10.00th=[ 1704], 20.00th=[ 1784], 
| 30.00th=[ 1848], 40.00th=[ 1896], 50.00th=[ 1944], 60.00th=[ 1992], 
| 70.00th=[ 2064], 80.00th=[ 2128], 90.00th=[ 2192], 95.00th=[ 2288], 
| 99.00th=[ 2448], 99.50th=[ 2640], 99.90th=[ 3856], 99.95th=[ 4256], 
| 99.99th=[ 9408] 
bw (KB /s): min= 1772, max= 2248, per=100.00%, avg=1986.55, stdev=85.76 
lat (msec) : 2=60.69%, 4=39.23%, 10=0.07%, 20=0.01% 
cpu : usr=1.92%, sys=2.98%, ctx=47125, majf=0, minf=28 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
issued : total=r=46852/w=0/d=0, short=r=0/w=0/d=0 

Run status group 0 (all jobs): 
READ: io=187408KB, aggrb=1984KB/s, minb=1984KB/s, maxb=1984KB/s, mint=94417msec, maxt=94417msec 

Disk stats (read/write): 
sda: ios=46754/11, merge=0/10, ticks=91144/40, in_queue=91124, util=96.73% 
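
(A sanity check on these numbers: with iodepth=1, each read must complete before the next is issued, so throughput is bounded by completion latency alone:

1 / 2.007 ms avg lat ≈ 498 IOPS; 498 x 4 KB ≈ 1992 KB/s

which matches the measured 496 iops / ~1984 KB/s almost exactly. The run is purely latency-bound, so only fewer round trips, via readahead or request merging, can speed it up at this queue depth.)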


The rados benchmark: 

# rados -p volumes bench 60 seq -b 4096 -t 1 
Total time run: 44.922178 
Total reads made: 24507 
Read size: 4096 
Bandwidth (MB/sec): 2.131 

Average Latency: 0.00183069 
Max latency: 0.004598 
Min latency: 0.001224 
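
Note for anyone reproducing this: rados bench seq reads back objects left by an earlier write pass, so a run like the above is normally preceded by something like the first line below (--no-cleanup keeps the objects around for the read pass; the object size is fixed at write time):

rados -p volumes bench 60 write -b 4096 -t 1 --no-cleanup
rados -p volumes bench 60 seq -t 1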




Re: RBD read-ahead didn't improve 4K read performance 
From: Alexandre DERUMIER 
To: duan xufeng 
Cc: si dawei, ceph-users 
Date: 2014/11/21 14:51 





Hi, 

I haven't tested rbd readahead yet, but maybe you are hitting a qemu limit (by default, qemu can use only one thread/one core to manage IOs; check your qemu CPU usage). 
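
If that is the limit, dedicated IO threads can help. A minimal libvirt sketch, assuming a libvirt/qemu recent enough to support iothreads and a virtio-blk disk (both assumptions on my side; the rbd image name is a placeholder):

<iothreads>2</iothreads>
...
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback' iothread='1'/>
  <source protocol='rbd' name='volumes/test'/>
  <target dev='vda' bus='virtio'/>
</disk>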

Do you have any performance results? How many IOPS? 


But I have seen a 4x improvement in qemu-kvm with virtio-scsi + num_queues + recent kernels (4K sequential reads get coalesced in qemu, so bigger IOs are sent to Ceph). 

libvirt : <controller type='scsi' index='0' model='virtio-scsi' num_queues='8'/> 
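
and the disks have to sit behind that controller via the scsi bus, e.g. (the pool/image name volumes/test is a placeholder):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <source protocol='rbd' name='volumes/test'/>
  <target dev='sda' bus='scsi'/>
</disk>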


Regards, 

Alexandre 
----- Mail original ----- 

De: "duan xufeng" <duan.xufeng@xxxxxxxxxx> 
À: "ceph-users" <ceph-users@xxxxxxxx> 
Cc: "si dawei" <si.dawei@xxxxxxxxxx> 
Envoyé: Vendredi 21 Novembre 2014 03:58:38 
Objet:  RBD read-ahead didn't improve 4K read performance 


Hi, 

I upgraded Ceph to 0.87 for rbd readahead, but I can't see any performance improvement in 4K sequential reads in the VM. 
How can I tell whether the readahead is taking effect? 

Thanks. 

ceph.conf 
[client] 
rbd_cache = true 
rbd_cache_size = 335544320 
rbd_cache_max_dirty = 251658240 
rbd_cache_target_dirty = 167772160 

rbd readahead trigger requests = 1 
rbd readahead max bytes = 4194304 
rbd readahead disable after bytes = 0 
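
One way to check whether readahead (and the cache) is doing anything, as a sketch on my side rather than something from this thread: give the librbd client an admin socket and dump its perf counters while the 4K read test runs. The socket path and exact counter names vary by version:

[client]
admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

# then, with the VM running:
ceph --admin-daemon /var/run/ceph/<socket-name>.asok perf dump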



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




