Re: luminous vs jewel rbd performance

Ok, thanks.

I'll try to run the same benchmarks in the coming week and will keep you posted on the results.


----- Original Message -----
From: "Rafael Lopez" <rafael.lopez@xxxxxxxxxx>
To: "aderumier" <aderumier@xxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 20 September 2017 08:51:22
Subject: Re: luminous vs jewel rbd performance

Hi Alexandre, 
Yeah, we are using filestore for the moment with luminous. As for the client, I tried both the jewel and luminous librbd versions against the luminous cluster, with similar results.

I am running fio on a physical machine with the fio rbd engine. This is a snippet of the fio config for the runs (the complete jobfile adds variations of read/write, block size, and iodepth).

[global] 
ioengine=rbd 
clientname=cinder-volume 
pool=rbd-bronze 
invalidate=1 
ramp_time=5 
runtime=30 
time_based 
direct=1 

[write-rbd1-4k-depth1] 
rbdname=rbd-tester-fio 
bs=4k 
iodepth=1 
rw=write 
stonewall 

[write-rbd2-4k-depth16] 
rbdname=rbd-tester-fio-2 
bs=4k 
iodepth=16 
rw=write 
stonewall 
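
For anyone wanting to repeat this, a jobfile of that shape is launched with plain fio on the client host; the file and output names below are only examples, not our actual paths:

# runs every job in the file in sequence (stonewall between jobs);
# needs librbd and a readable keyring for client.cinder-volume
fio rbd-bench.fio --output=rbd-bench.log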

Raf 

On 20 September 2017 at 16:43, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:


Hi 

So, are you also using filestore on luminous?

Have you also upgraded librbd on the client? (Are you benchmarking inside a qemu machine, or directly with fio-rbd?)
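
(To check which librbd a client host actually has installed, something like this should work; package names assumed for RPM vs Debian systems:)

# RPM-based distros
rpm -q librbd1
# Debian/Ubuntu
dpkg -l librbd1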



(I'm going to run a lot of benchmarks in the coming week; I'll post the results to the list soon.)



----- Original Message -----
From: "Rafael Lopez" <rafael.lopez@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 20 September 2017 08:17:23
Subject: luminous vs jewel rbd performance

Hey guys,
Wondering if anyone else has done some solid benchmarking of jewel vs luminous, in particular on the same cluster before and after an upgrade (same cluster, client, and config).

We recently upgraded a cluster from 10.2.9 to 12.2.0, and unfortunately I only captured results from a single fio (librbd) run with a few jobs in it before upgrading. I have run the same fio jobfile many times at different times of the day since upgrading, and have been unable to produce a close match to the pre-upgrade (jewel) run from the same client. One particular job is significantly slower (4M block size, iodepth=1, sequential read), up to 10x slower in one run.
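
For reference, the slow job is roughly of this shape in fio terms (not the exact jobfile, just a sketch; pool and image names are placeholders):

[read-rbd1-4M-depth1]
ioengine=rbd
# placeholder CephX user, pool, and image
clientname=cinder-volume
pool=rbd-bronze
rbdname=rbd-tester-fio
bs=4M
iodepth=1
rw=read
direct=1
runtime=30
time_based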

I realise I haven't supplied much detail and it could be dozens of things, but I just wanted to see if anyone else had done more quantitative benchmarking or had similar experiences. Keep in mind that all we changed was restarting the daemons on the luminous code; everything else is exactly the same. Granted, it is possible that some or all OSDs had runtime config injected that differs from the current settings, but I'm fairly confident this is not the case, as they were recently restarted (on jewel code) after OS upgrades.
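
If anyone wants to rule that out on their own cluster, the running config can be compared against the defaults per OSD via the admin socket; something like the following should do it (osd.0 is just an example ID, run on the host that owns that OSD):

# settings that differ from the built-in defaults
ceph daemon osd.0 config diff
# or dump the full running config and diff it against ceph.conf by hand
ceph daemon osd.0 config show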

cheers, 
Raf 

-- 
Rafael Lopez 
Research Devops Engineer 
Monash University eResearch Centre 

T: +61 3 9905 9118
M: +61 (0)427682670
E: rafael.lopez@xxxxxxxxxx


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



