Re: KVM/QEMU rbd read latency

Hi,

Currently I can reduce the latency by:

- compiling QEMU to use jemalloc
- disabling rbd_cache (or setting qemu cache=none)
- disabling debug logging in /etc/ceph/ceph.conf on the client node:


[global]
 debug asok = 0/0
 debug auth = 0/0
 debug buffer = 0/0
 debug client = 0/0
 debug context = 0/0
 debug crush = 0/0
 debug filer = 0/0
 debug filestore = 0/0
 debug finisher = 0/0
 debug heartbeatmap = 0/0
 debug journal = 0/0
 debug journaler = 0/0
 debug lockdep = 0/0
 debug mds = 0/0
 debug mds balancer = 0/0
 debug mds locker = 0/0
 debug mds log = 0/0
 debug mds log expire = 0/0
 debug mds migrator = 0/0
 debug mon = 0/0
 debug monc = 0/0
 debug ms = 0/0
 debug objclass = 0/0
 debug objectcacher = 0/0
 debug objecter = 0/0
 debug optracker = 0/0
 debug osd = 0/0
 debug paxos = 0/0
 debug perfcounter = 0/0
 debug rados = 0/0
 debug rbd = 0/0
 debug rgw = 0/0
 debug throttle = 0/0
 debug timer = 0/0
 debug tp = 0/0


With this, I can reach around 50-60k 4k IOPS with one disk and iothread enabled.
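A minimal sketch of the first two tips above: QEMU built from source takes a --enable-jemalloc configure flag (available since QEMU 2.6), and the rbd cache can be turned off per client in ceph.conf:

```
# /etc/ceph/ceph.conf on the client node
[client]
rbd cache = false
```

Equivalently, setting cache='none' on the libvirt <driver> element should disable the rbd cache for that disk, since QEMU's rbd driver derives rbd_cache from the cache mode.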


I also have good hope that this new feature,
"RBD: Add support readv,writev for rbd"
http://marc.info/?l=ceph-devel&m=148726026914033&w=2

will help too by reducing data copies (which is also why I'm using jemalloc).




----- Original Message -----
From: "Phil Lacroute" <lacroute@xxxxxxxxxxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, February 16, 2017 19:53:47
Subject: KVM/QEMU rbd read latency

Hi, 

I am doing some performance characterization experiments for ceph with KVM guests, and I’m observing significantly higher read latency when using the QEMU rbd client compared to krbd. Is that expected or have I missed some tuning knobs to improve this? 

Cluster details: 
Note that this cluster was built for evaluation purposes, not production, hence the choice of small SSDs with low endurance specs. 
Client host OS: Debian, 4.7.0 kernel 
QEMU version 2.7.0 
Ceph version Jewel 10.2.3 
Client and OSD CPU: Xeon D-1541 2.1 GHz 
OSDs: 5 nodes, 3 SSDs each, one journal partition and one data partition per SSD, XFS data file system (15 OSDs total) 
Disks: DC S3510 240GB 
Network: 10 GbE, dedicated switch for storage traffic 
Guest OS: Debian, virtio drivers 

Performance testing was done with fio on raw disk devices using this config: 
ioengine=libaio 
iodepth=128 
direct=1 
size=100% 
rw=randread 
bs=4k 
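Collected into a complete fio job file, the settings above might look like this (the job name and the filename target are illustrative; the tests ran against the raw rbd or virtio device in each case):

```
[randread-test]
ioengine=libaio
iodepth=128
direct=1
size=100%
rw=randread
bs=4k
filename=/dev/vdb
```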

Case 1: krbd, fio running on the raw rbd device on the client host (no guest) 
IOPS: 142k 
Average latency: 0.9 msec 

Case 2: krbd, fio running in a guest (libvirt config below) 
<disk type='file' device='disk'> 
<driver name='qemu' type='raw' cache='none'/> 
<source file='/dev/rbd0'/> 
<backingStore/> 
<target dev='vdb' bus='virtio'/> 
</disk> 
IOPS: 119k 
Average Latency: 1.1 msec 

Case 3: QEMU RBD client, fio running in a guest (libvirt config below) 
<disk type='network' device='disk'> 
<driver name='qemu'/> 
<auth username='app1'> 
<secret type='ceph' usage='app_pool'/> 
</auth> 
<source protocol='rbd' name='app/image1'/> 
<target dev='vdc' bus='virtio'/> 
</disk> 
IOPS: 25k 
Average Latency: 5.2 msec 
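The reply above suggests enabling an iothread; a hedged libvirt sketch of what that could look like for the Case 3 disk (the iothread count and id are illustrative; this applies to virtio-blk, not virtio-scsi):

```
<domain type='kvm'>
  <!-- allocate one dedicated iothread for the domain -->
  <iothreads>1</iothreads>
  <devices>
    <disk type='network' device='disk'>
      <!-- pin this virtio disk's I/O to iothread 1 -->
      <driver name='qemu' iothread='1'/>
      <source protocol='rbd' name='app/image1'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
  </devices>
</domain>
```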

The question is why the test with the QEMU RBD client (case 3) shows 4 msec of additional latency compared to the guest using the krbd-mapped image (case 2). 

Note that the IOPS bottleneck for all of these cases is the rate at which the client issues requests, which is limited by the average latency and the maximum number of outstanding requests (128). Since the latency is the dominant factor in average read throughput for these small accesses, we would really like to understand the source of the additional latency. 
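The relationship described above is Little's Law: with a fixed queue depth, achievable IOPS is bounded by depth divided by average latency. A quick sketch checking the reported numbers:

```python
# Little's Law for a closed-loop benchmark: concurrency = throughput * latency,
# so with a fixed queue depth the IOPS ceiling is depth / latency.

QUEUE_DEPTH = 128  # fio iodepth from the job config above

def expected_iops(avg_latency_ms: float, depth: int = QUEUE_DEPTH) -> float:
    """IOPS ceiling at the given average latency and queue depth."""
    return depth / (avg_latency_ms / 1000.0)

cases = [
    ("Case 1: krbd on host", 0.9),       # reported 142k IOPS
    ("Case 2: krbd in guest", 1.1),      # reported 119k IOPS
    ("Case 3: QEMU rbd in guest", 5.2),  # reported 25k IOPS
]
for name, lat_ms in cases:
    print(f"{name}: ~{expected_iops(lat_ms):,.0f} IOPS")
```

The predicted ceilings (about 142k, 116k, and 25k) closely match the measured results, confirming that latency, not some other bottleneck, determines throughput in these tests.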

Thanks, 
Phil 




_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
