hanging nfsd requests on an RBD to NFS gateway

Hi,

Has anyone else experienced a problem with RBD-to-NFS gateways blocking
nfsd server requests when their ceph cluster has a placement group that
is not servicing I/O for some reason, e.g. too few replicas or an OSD
with slow request warnings?
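
(For concreteness, by "not servicing I/O" I mean anything that shows up
in 'ceph health detail' as stuck PGs or slow/blocked requests.  A
minimal sketch of what I mean, assuming the ceph CLI and an admin
keyring are present on the node; the substring match is only a rough
heuristic, not an exact parse of the output:)

# check_io_blockers.py: minimal sketch, flags stuck PGs / slow requests
# Assumes the 'ceph' CLI can reach the cluster from this node.
import subprocess

def health_detail():
    # 'ceph health detail' lists stuck PGs and blocked/slow request warnings
    out = subprocess.check_output(['ceph', 'health', 'detail'])
    return out.decode('utf-8', 'replace')

if __name__ == '__main__':
    blockers = [line for line in health_detail().splitlines()
                if 'stuck' in line or 'slow request' in line or 'blocked' in line]
    if blockers:
        print('possible I/O blockers:')
        for line in blockers:
            print('  ' + line)
    else:
        print('no stuck PGs or slow/blocked requests reported')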

We have an RBD-NFS gateway that stops responding to NFS clients
(interaction with RBD-backed NFS shares hangs on the NFS client)
whenever some part of our ceph cluster is in an I/O-blocked condition.
The issue only affects the nfsd processes' ability to serve requests to
clients: on the gateway I can look at and access the underlying mounted
RBD containers without issue, although they appear hung from the NFS
client side.  The gateway's load average spikes to roughly the number
of nfsd processes, but the system is otherwise untaxed (unlike a
genuinely high OS load; i.e., I can type and run commands with normal
responsiveness).
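
(The load spike itself looks like it is just the nfsd kernel threads
sitting in uninterruptible sleep, the "D" state, which Linux counts
toward the load average even though the CPUs are idle.  A quick sketch
of how that can be confirmed by reading /proc; nothing here is
RBD-specific:)

# count_nfsd_states.py: count nfsd kernel threads by process state
import os

def nfsd_states():
    states = {}
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            with open('/proc/%s/stat' % pid) as f:
                stat = f.read()
        except IOError:
            continue  # task exited while we were scanning
        # /proc/<pid>/stat is "pid (comm) state ...", so take the name
        # between the parentheses and the single character after them
        name = stat[stat.index('(') + 1:stat.rindex(')')]
        state = stat[stat.rindex(')') + 2]
        if name == 'nfsd':
            states[state] = states.get(state, 0) + 1
    return states

if __name__ == '__main__':
    print(nfsd_states())   # e.g. {'D': 8} when every thread is blocked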

The behavior comes across as if there is some global nfsd lock that an
nfsd sets before requesting I/O from a backend device.  In the case
above, the I/O request hangs on the one RBD image affected by the I/O
block caused by the problematic PG or OSD.  That nfsd request blocks on
the ceph I/O and, because it holds the global lock, all other nfsd
processes are prevented from servicing requests to their clients.  The
nfsd processes all end up in the wait queue, causing the load number on
the gateway system to spike.  Once the Ceph I/O issue is resolved, the
nfsd I/O request completes and all service returns to normal: the load
on the gateway drops immediately and all NFS clients can again interact
with the nfsd processes.  Throughout this time, unaffected ceph objects
remain available to other clients, e.g. OpenStack volumes.
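
(One way to test this theory would be to dump the kernel stacks of the
nfsd threads during an incident and see whether they are all parked in
rbd/libceph request paths, or waiting behind whatever the first blocked
thread holds.  A rough sketch, assuming /proc/<pid>/stack is enabled in
the kernel and run as root; the 'rbd'/'ceph' substring match is just a
heuristic:)

# dump_nfsd_stacks.py: print kernel stacks of nfsd threads (run as root)
import os

def dump_nfsd_stacks():
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            with open('/proc/%s/comm' % pid) as f:
                if f.read().strip() != 'nfsd':
                    continue
            with open('/proc/%s/stack' % pid) as f:
                stack = f.read()
        except IOError:
            continue
        print('--- nfsd pid %s ---' % pid)
        print(stack)
        # frames mentioning rbd/libceph suggest the thread is stuck on ceph I/O
        if 'rbd' in stack or 'ceph' in stack:
            print('(appears to be waiting on ceph/rbd)')

if __name__ == '__main__':
    dump_nfsd_stacks()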

Our RBD-NFS gateway is running Ubuntu 12.04.5 with kernel
3.11.0-15-generic.  The ceph version installed on this client is 0.72.2,
though I assume only the kernel-resident RBD module matters here.

Any thoughts or pointers appreciated.

~jpr
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


