On Wed, Jul 18, 2018 at 1:08 PM Nikola Ciprich <nikola.ciprich@xxxxxxxxxxx> wrote:
> Care to share your "bench-rbd" script (on pastebin or similar)?
> sure, no problem.. it's so short I hope nobody will get offended if I paste it right
> here :)
>
> #!/bin/bash
> #export LD_PRELOAD="/usr/lib64/libtcmalloc.so.4"
> numjobs=8
> pool=nvme
> vol=xxx
> time=30
> opts="--randrepeat=1 --ioengine=rbd --direct=1 --numjobs=${numjobs} --gtod_reduce=1 --name=test --pool=${pool} --rbdname=${vol} --invalidate=0 --bs=4k --iodepth=64 --time_based --runtime=$time --group_reporting"
So that "--numjobs" parameter is what I was referring to when I said multiple jobs will cause a huge performance hit. It causes fio to open the same image X times, so with (nearly) every write operation the exclusive lock has to be transferred from client to client. Instead of running multiple jobs against the same image, you should use multiple images.
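To illustrate, a rough sketch of the multiple-images approach: one single-job fio process per image, run in parallel, so each client holds its own exclusive lock and nothing ping-pongs. The image names (xxx1..xxx4) are placeholders of my own, not from the script above, and are assumed to already exist in the pool; it defaults to just printing the commands so you can check them first.

```shell
#!/bin/bash
# Sketch: one single-job fio process per image, so the exclusive lock
# never has to move between clients.
# Assumption: images xxx1..xxx4 already exist in pool 'nvme' (placeholder
# names, not from the original script).
pool=nvme
runtime=30
DRY_RUN=${DRY_RUN:-1}   # default: print the commands instead of running fio

run_one() {
    local img=$1
    local cmd="fio --randrepeat=1 --ioengine=rbd --direct=1 --numjobs=1 \
--gtod_reduce=1 --name=test-${img} --pool=${pool} --rbdname=${img} \
--invalidate=0 --bs=4k --iodepth=64 --time_based --runtime=${runtime} \
--group_reporting --readwrite=randwrite --output=rbd-fio-${img}.log"
    if [ "$DRY_RUN" = "1" ]; then
        echo "$cmd"
    else
        $cmd &     # run each fio instance in the background
    fi
}

for img in xxx1 xxx2 xxx3 xxx4; do
    run_one "$img"
done
[ "$DRY_RUN" = "1" ] || wait   # wait for all parallel fio processes
```

Set DRY_RUN=0 to actually launch the four fio instances in parallel; each one opens only its own image with numjobs=1, which is the point of the advice above.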
> sopts="--randrepeat=1 --ioengine=rbd --direct=1 --numjobs=1 --gtod_reduce=1 --name=test --pool=${pool} --rbdname=${vol} --invalidate=0 --bs=256k --iodepth=64 --time_based --runtime=$time --group_reporting"
>
> #fio $sopts --readwrite=read --output=rbd-fio-seqread.log
> echo
> #fio $sopts --readwrite=write --output=rbd-fio-seqwrite.log
> echo
> fio $opts --readwrite=randread --output=rbd-fio-randread.log
> echo
> fio $opts --readwrite=randwrite --output=rbd-fio-randwrite.log
> echo
>
> hope it's of some use..
> n.
> --
> -------------------------------------
> Ing. Nikola CIPRICH
> LinuxBox.cz, s.r.o.
> 28. rijna 168, 709 00 Ostrava
>
> tel.: +420 591 166 214
> fax: +420 596 621 273
> mobil: +420 777 093 799
> www.linuxbox.cz
> mobil servis: +420 737 238 656
> email servis: servis@xxxxxxxxxxx
> -------------------------------------
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com