Hi Rafael,
In the original email you mentioned 4M block size, sequential reads, but here it
looks like you are doing 4k writes? Can you clarify? If you are doing
4k direct sequential writes with iodepth=1 and are also using the librbd
cache, please make sure that librbd is set to writeback mode in both
cases. By default, RBD will not switch into writeback mode until it sees a
flush request, and the librbd engine in fio doesn't issue one before a test
starts. If you aren't careful, it's easy to end up with the writeback cache
active on some tests but not others; i.e., if one of your tests was run
after a flush and the other was not, you'd likely see a dramatic difference
in performance between them.
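If in doubt, you can check the value a running client actually picked up
via its admin socket (assuming one is configured; the socket path below is
just an example):

ceph --admin-daemon /var/run/ceph/client.cinder-volume.asok config get rbd_cache_writethrough_until_flush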
You can avoid this by telling librbd to always use WB mode (at least
when benchmarking):
rbd cache writethrough until flush = false
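For example, in ceph.conf on the client side (a minimal sketch):

[client]
rbd cache = true
rbd cache writethrough until flush = false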
Mark
On 09/20/2017 01:51 AM, Rafael Lopez wrote:
Hi Alexandre,
Yeah, we are using filestore for the moment with luminous. With regard
to the client, I tried both the jewel and luminous librbd versions against
the luminous cluster, with similar results.
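(If it helps anyone reproducing this: on an RPM-based client you can confirm
which librbd fio is actually loading with something like the following; the
package name varies by distro.)

rpm -q librbd1
ldd $(which fio) | grep librbd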
I am running fio on a physical machine with the fio rbd engine. This is a
snippet of the fio config for the runs (the complete jobfile adds
variations of read/write, block size, and iodepth).
[global]
ioengine=rbd             # drive the image through librbd, no kernel mapping
clientname=cinder-volume
pool=rbd-bronze
invalidate=1             # invalidate the buffer/page cache before each run
ramp_time=5              # exclude the first 5s from the reported results
runtime=30
time_based
direct=1                 # non-buffered I/O
[write-rbd1-4k-depth1]
rbdname=rbd-tester-fio
bs=4k
iodepth=1
rw=write
stonewall                # finish this job before starting the next one
[write-rbd2-4k-depth16]
rbdname=rbd-tester-fio-2
bs=4k
iodepth=16
rw=write
stonewall
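The whole jobfile is then run in a single invocation, e.g. (filename made up):

fio rbd-bench.fio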
Raf
On 20 September 2017 at 16:43, Alexandre DERUMIER <aderumier@xxxxxxxxx> wrote:
Hi,
So, are you also using filestore on luminous?
Have you also upgraded librbd on the client? (Are you benchmarking
inside a qemu machine, or directly with fio-rbd?)
(I'm going to do a lot of benchmarks in the coming weeks; I'll post
results to the mailing list soon.)
----- Original message -----
From: "Rafael Lopez" <rafael.lopez@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, 20 September 2017 08:17:23
Subject: luminous vs jewel rbd performance
Hey guys,
Wondering if anyone else has done some solid benchmarking of jewel
vs luminous, in particular on the same cluster that has been
upgraded (same cluster, client, and config).
We recently upgraded a cluster from 10.2.9 to 12.2.0, and
unfortunately I only captured results from a single fio (librbd) run
with a few jobs in it before upgrading. I have run the same fio
jobfile many times at different times of day since upgrading,
and have been unable to produce a close match to the pre-upgrade (jewel)
run from the same client. One particular job is significantly slower
(4M block size, iodepth=1, seq read), up to 10x in one run.
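For reference, that job looks roughly like this in the jobfile (job name
illustrative, same global options as the snippet earlier in the thread):

[read-rbd1-4m-depth1]
rbdname=rbd-tester-fio
bs=4M
iodepth=1
rw=read
stonewall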
I realise I haven't supplied much detail and it could be dozens of
things, but I just wanted to see if anyone else had done more
quantitative benchmarking or had similar experiences. Keep in mind
that all we changed was restarting the daemons to run the luminous code;
everything else is exactly the same. Granted, it is possible that
some or all OSDs had some runtime config injected that differs from
now, but I'm fairly confident this is not the case, as they were
recently restarted (on jewel code) after OS upgrades.
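One way to rule that out is to dump any OSD settings that differ from the
compiled-in defaults via the admin socket (osd id is just an example):

ceph daemon osd.0 config diff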
cheers,
Raf
--
*Rafael Lopez*
Research Devops Engineer
Monash University eResearch Centre
T: +61 3 9905 9118
M: +61 (0)427682670
E: rafael.lopez@xxxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com