Re: ceph 0.86 : rbd_cache=true, iops a lot slower on randread 4K


 



>>Could you please check whether the following commit is there in your branch or not ? 
>>
>>82175ec94acc89dc75da0154f86187fb2e4dbf5e 

I'm using this Debian repository:
http://gitbuilder.ceph.com/ceph-deb-wheezy-x86_64-basic/ref/v0.86

I don't know how to check whether the commit is in it or not.
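
(For reference, one way to check is against a clone of the ceph git tree; the sketch below assumes the upstream GitHub mirror as the clone URL:)

    git clone https://github.com/ceph/ceph.git && cd ceph    # assumed mirror; any full clone works
    # exit status 0 means the commit is an ancestor of the v0.86 tag
    git merge-base --is-ancestor 82175ec94acc89dc75da0154f86187fb2e4dbf5e v0.86 \
        && echo "commit is in v0.86" || echo "commit is not in v0.86"
    # or simply list the tags that contain it
    git tag --contains 82175ec94acc89dc75da0154f86187fb2e4dbf5e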


But the good news:
I just tried the latest master
http://gitbuilder.ceph.com/ceph-deb-wheezy-x86_64-basic/ref/master


and indeed, the problem is fixed!


Thanks for the help,

Alexandre


----- Original Message -----

From: "Somnath Roy" <Somnath.Roy@xxxxxxxxxxx>
To: "Alexandre DERUMIER" <aderumier@xxxxxxxxx>, ceph-devel@xxxxxxxxxxxxxxx
Sent: Monday, October 20, 2014 08:18:48
Subject: RE: ceph 0.86 : rbd_cache=true, iops a lot slower on randread 4K

Hi Alexandre, 

Yes, it is related to the defect I filed. 
I think with the librbd engine, direct=1 is a no-op: the I/O goes through librbd, not through the kernel, so whether it uses the cache or not depends on the rbd_cache flag.
This is supposed to be fixed in the latest code, but I haven't had time to test it out.
It seems the fix is not in 0.86.
Could you please check whether the following commit is there in your branch or not ? 

82175ec94acc89dc75da0154f86187fb2e4dbf5e 
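
(As a side note, the cache flag for these fio librbd runs is controlled on the ceph client side; a minimal sketch, assuming fio's rbd engine picks the setting up from the usual [client] section of ceph.conf:)

    [client]
    # librbd cache toggle; with ioengine=rbd this setting, not direct=1, decides caching
    rbd cache = false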

Thanks & Regards 
Somnath 

-----Original Message----- 
From: ceph-devel-owner@xxxxxxxxxxxxxxx [mailto:ceph-devel-owner@xxxxxxxxxxxxxxx] On Behalf Of Alexandre DERUMIER 
Sent: Sunday, October 19, 2014 11:02 PM 
To: ceph-devel@xxxxxxxxxxxxxxx 
Subject: ceph 0.86 : rbd_cache=true, iops a lot slower on randread 4K 

Hi, 

I'm not sure it's related to this bug 
http://tracker.ceph.com/issues/9513 

But with this fio-rbd benchmark:

[global] 
ioengine=rbd 
clientname=admin 
pool=test 
rbdname=test 
invalidate=0 
rw=randread 
bs=4k 
direct=1 
numjobs=8 
group_reporting=1 
size=10G 

[rbd_iodepth32] 
iodepth=32 



I get around

40000 iops (CPU-bound on the client) with rbd_cache=false

vs

13000 iops (40% CPU usage on the client) with rbd_cache=true


(Note that these should be direct I/Os, so they should bypass the cache.)
It seems to be a lock or something like that, as the CPU usage is a lot lower too.

Is this the expected behavior?




