Fio rbd stalls during 4M reads

I'm doing some fio tests on Giant using the fio rbd driver to measure performance on a new Ceph cluster.

However, with block sizes > 1M (initially noticed with 4M) I am seeing absolutely no IOPS for *reads*, and the fio process becomes non-interruptible (it needs kill -9):

$ ceph -v
ceph version 0.86-467-g317b83d (317b83dddd1a917f70838870b31931a79bdd4dd0)

$ fio --version
fio-2.1.11-20-g9a44

$ fio read-busted.fio
env-read-4M: (g=0): rw=read, bs=4M-4M/4M-4M/4M-4M, ioengine=rbd, iodepth=32
fio-2.1.11-20-g9a44
Starting 1 process
rbd engine: RBD version: 0.1.8
Jobs: 1 (f=1): [R(1)] [inf% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1158050441d:06h:58m:03s]
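
In case it's useful, here's roughly how I'd grab backtraces from the wedged process to see where it's stuck (just a sketch; it assumes a single fio process and gdb installed):

$ gdb -p $(pidof fio) -batch -ex 'thread apply all bt'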

This appears to be a pure fio rbd driver issue, as I can attach the relevant rbd volume to a VM and dd from it using 4M blocks with no problem.
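
(The dd check inside the VM was essentially the following; the device name depends on how the volume is attached, /dev/vdb here is just an example:

$ dd if=/dev/vdb of=/dev/null bs=4M iflag=direct count=256
)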

Any ideas?

Cheers

Mark
read-busted.fio:

[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=rbd-fio-test
invalidate=0

iodepth=32
nrfiles=1
runtime=120
direct=1
sync=1
unlink=1
numjobs=1
thread=0
disk_util=0

[env-read-4M]
bs=4M
rw=read
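
For comparison, a variant job that does make progress here (per the behaviour above, block sizes at or below 1M read fine; same [global] section as the job file):

[env-read-1M]
bs=1M
rw=read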
