Re: Fio rbd stalls during 4M reads


There's a temporary issue in the master branch that makes rbd reads
larger than the cache size hang (when the cache is enabled). That might
be what you're hitting. (Jason is working on it:
http://tracker.ceph.com/issues/9854)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
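
If you want to confirm whether the cache bug is the culprit, one option
is to rerun the test with the RBD cache disabled on the client side. A
sketch, assuming the fio client reads the usual /etc/ceph/ceph.conf:

```ini
; Hypothetical client-side ceph.conf fragment: disables the RBD cache,
; which should sidestep the reads-larger-than-cache hang tracked in #9854.
[client]
    rbd cache = false
```

(Disabling the cache changes read performance, so treat this only as a
way to narrow down the bug, not as a fix.)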


On Thu, Oct 23, 2014 at 5:09 PM, Mark Kirkwood
<mark.kirkwood@xxxxxxxxxxxxxxx> wrote:
> I'm doing some fio tests on Giant, using the fio rbd driver to measure
> performance on a new Ceph cluster.
>
> However, with block sizes > 1M (first noticed with 4M) I see
> absolutely no IOPS for *reads*, and the fio process becomes
> non-interruptible (needs kill -9):
>
> $ ceph -v
> ceph version 0.86-467-g317b83d (317b83dddd1a917f70838870b31931a79bdd4dd0)
>
> $ fio --version
> fio-2.1.11-20-g9a44
>
> $ fio read-busted.fio
> env-read-4M: (g=0): rw=read, bs=4M-4M/4M-4M/4M-4M, ioengine=rbd, iodepth=32
> fio-2.1.11-20-g9a44
> Starting 1 process
> rbd engine: RBD version: 0.1.8
> Jobs: 1 (f=1): [R(1)] [inf% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta
> 1158050441d:06h:58m:03s]
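>
> For reference, the job file being run is roughly the following. This is
> a reconstruction from the output line above (rw, bs, ioengine, iodepth
> come from that line); the pool, image, and client values are
> placeholders, not the actual ones used:
>
> ```ini
> ; read-busted.fio (reconstructed sketch; adjust pool/image/client
> ; to match your cluster)
> [env-read-4M]
> rw=read
> bs=4M
> ioengine=rbd
> iodepth=32
> pool=rbd
> rbdname=test-image
> clientname=admin
> ```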
>
> This appears to be a pure fio rbd driver issue: I can attach the
> relevant rbd volume to a VM and dd from it with 4M blocks without any
> problem.
>
> Any ideas?
>
> Cheers
>
> Mark
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



