Re: 10/14/2014 Weekly Ceph Performance Meeting

> 2) In qemu, it's impossible to reach more than around 7000 IOPS with 1 disk. (Maybe it is also related to the CPU or the number of threads.)
> I have also tried the new qemu iothread/dataplane feature, but it doesn't help.
>>If it's 1 volume, does adding another volume on the same VM help?

>>>As far as I remember, yes. I'll test again to confirm.

I have done the test: it scales with multiple virtio disks on multiple rbd volumes.

(Not sure, but maybe it's related to the iodepth bug shown in the Intel slides at this meeting?)


----- Original Message -----

From: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>
To: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx
Sent: Wednesday, October 15, 2014 16:42:04
Subject: Re: 10/14/2014 Weekly Ceph Performance Meeting

>>Sure! Please feel free to add this or other topics that are 
>>useful/interesting to the etherpad. Please include your name though so 
>>we know who's brought it up. Even if we don't get to everything it will 
>>provide useful topics for the subsequent weeks. 

Ok, great, I'll do it.

> 
> Currently I see 2 performance problems with librbd: 
> 
> 1) The CPU usage is quite high. (I'm CPU bound with 8 cores, CPU E5-2603 v2 @ 1.80GHz, at 40000 IOPS of 4k reads using fio-rbd.)

>>Interesting. Have you taken a look with perf or other tools to see 
>>where time is being spent? 

Not yet, but I can try to do it; I'll have time next week.
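
For reference, something like the following should be enough to see where the time goes (just a rough sketch; attaching to the fio process by name is an assumption):

# record call graphs for the running fio-rbd benchmark for 30 seconds
perf record -g -p $(pidof fio) -- sleep 30
# then look at the hottest librbd/librados symbols
perf report --sort symbol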



> 
> 2) In qemu, it's impossible to reach more than around 7000 IOPS with 1 disk. (Maybe it is also related to the CPU or the number of threads.)
> I have also tried the new qemu iothread/dataplane feature, but it doesn't help.

>>1 disk meaning 1 OSD, or 1 disk meaning 1 volume on a VM? 
Yes, 1 disk = 1 volume on the VM.

>>If it's 1 volume, does adding another volume on the same VM help?

As far as I remember, yes. I'll test again to confirm.

Note that when benchmarking with fio-rbd, I need to increase the number of clients too.

(1 client - queue depth 32: ~8000 IOPS
2 clients - queue depth 32: ~16000 IOPS
...
)
So maybe it's related.
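
For context, a minimal fio-rbd invocation of this kind could look like the following (a sketch only; the pool name "rbd", the pre-created image "fio-test" and the client "admin" are placeholders, not from the thread):

# 4k random reads at queue depth 32 against one RBD image;
# raise --numjobs to add more client connections
fio --name=rbd-4k-randread --ioengine=rbd --clientname=admin \
    --pool=rbd --rbdname=fio-test --rw=randread --bs=4k \
    --iodepth=32 --direct=1 --numjobs=2 --group_reporting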



>>I'm not familiar with the new qemu options, so that would be good to discuss at
>>the meeting too!

The dataplane/iothread feature allows a virtio disk to reach around 1,000,000 IOPS, vs around 100,000 IOPS without dataplane:
http://www.linux-kvm.org/wiki/images/1/17/Kvm-forum-2013-Effective-multithreading-in-QEMU.pdf 

Syntax to enable it: 
qemu -object iothread,id=iothread0 -device virtio-blk-pci,iothread=iothread0,.... 
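
To make that concrete, a fuller sketch with an RBD-backed disk attached to the iothread could look like this (the pool/image names, client id and drive id are placeholders, not from my setup):

# one dedicated iothread serving a virtio-blk disk backed by RBD
qemu-system-x86_64 \
    -object iothread,id=iothread0 \
    -drive file=rbd:rbd/vm-disk1:id=admin,format=raw,if=none,id=drive0,cache=none \
    -device virtio-blk-pci,drive=drive0,iothread=iothread0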



Regards, 

Alexandre 

----- Original Message -----

From: "Mark Nelson" <mark.nelson@xxxxxxxxxxx>
To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>, "Mark Nelson" <mark.nelson@xxxxxxxxxxx>
Cc: ceph-devel@xxxxxxxxxxxxxxx
Sent: Wednesday, October 15, 2014 15:26:20
Subject: Re: 10/14/2014 Weekly Ceph Performance Meeting

On 10/15/2014 01:22 AM, Alexandre DERUMIER wrote: 
> Hi, 
> 
> About performance: maybe it would be great to also include client-side performance?

Sure! Please feel free to add this or other topics that are 
useful/interesting to the etherpad. Please include your name though so 
we know who's brought it up. Even if we don't get to everything it will 
provide useful topics for the subsequent weeks. 

> 
> Currently I see 2 performance problems with librbd: 
> 
> 1) The CPU usage is quite high. (I'm CPU bound with 8 cores, CPU E5-2603 v2 @ 1.80GHz, at 40000 IOPS of 4k reads using fio-rbd.)

Interesting. Have you taken a look with perf or other tools to see 
where time is being spent? 

> 
> 2) In qemu, it's impossible to reach more than around 7000 IOPS with 1 disk. (Maybe it is also related to the CPU or the number of threads.)
> I have also tried the new qemu iothread/dataplane feature, but it doesn't help.

1 disk meaning 1 OSD, or 1 disk meaning 1 volume on a VM? If it's 1
volume, does adding another volume on the same VM help? I'm not 
familiar with the new qemu options, so that would be good to discuss at 
the meeting too! 

> 
> 
> 
> 
> 



