Mount of CephFS hangs

Hi,
I am confronted with a persistent problem when mounting CephFS. I am using Ubuntu 16.04 and exclusively ceph-fuse. The CephFS is mounted by multiple machines, and very often (not always, but in most cases) the mount process hangs and never completes. "df -h" then also hangs and nothing happens. Everything seems to be up and running. On the MDS side, I have the following in the log:

2016-11-30 07:09:57.208736 7f680bc65700  0 -- 10.30.200.141:6800/1358 >> 10.30.216.130:0/3257360291 pipe(0x5572ceaa0800 sd=22 :6800 s=2 pgs=2 cs=1 l=0 c=0x5572c4843500).fault with nothing to send, going to standby
2016-11-30 07:10:27.833523 7f6812394700  0 log_channel(cluster) log [WRN] : 1 slow requests, 1 included below; oldest blocked for > 30.532515 secs
2016-11-30 07:10:27.833631 7f6812394700  0 log_channel(cluster) log [WRN] : slow request 30.532515 seconds old, received at 2016-11-30 07:09:57.300940: client_request(client.588036:6 setattr uid=3001 gid=3001 #10000000851 2016-11-30 07:09:57.302543) currently failed to xlock, waiting
2016-11-30 07:14:52.841056 7f6812394700  0 log_channel(cluster) log [INF] : closing stale session client.641683 10.30.216.130:0/3257360291 after 300.119493
2016-11-30 07:17:02.844691 7f6812394700  0 log_channel(cluster) log [INF] : closing stale session client.588036 10.30.216.130:0/1984817088 after 304.588537
2016-11-30 07:17:02.859557 7f6809740700  0 -- 10.30.200.141:6800/1358 >> 10.30.216.130:0/1984817088 pipe(0x5572ce891400 sd=23 :6800 s=0 pgs=0 cs=0 l=0 c=0x5572c4843080).accept we reset (peer sent cseq 2), sending RESETSESSION
2016-11-30 07:17:18.344852 7f6809740700  0 -- 10.30.200.141:6800/1358 >> 10.30.216.130:0/1984817088 pipe(0x5572ce891400 sd=23 :6800 s=2 pgs=4 cs=1 l=0 c=0x5572c4843080).reader missed message?  skipped from seq 0 to 114931623
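
For context, each client mounts the filesystem with a plain ceph-fuse call, roughly like the following (the monitor address and mount point here are illustrative placeholders, not my exact values):

$ sudo ceph-fuse -m 10.30.200.141:6789 /mnt/cephfs

The hang happens during this call itself: ceph-fuse starts, but the mount never becomes usable.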

The log suggests a network issue, but the network itself is up and running without any failures.
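
I can pull more detail from the admin sockets if that helps; for example (the daemon name and socket path below are placeholders for my setup):

# On the MDS host ("mds.<id>" is the local MDS daemon name):
$ sudo ceph daemon mds.<id> session ls

# On a hanging client, through the ceph-fuse admin socket (path may differ):
$ sudo ceph daemon /var/run/ceph/ceph-client.admin.asok mds_sessions

"session ls" on the MDS lists the client sessions and their state, while "mds_sessions" shows what the client believes its MDS session looks like.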

What can I do to solve this issue and to make the mount process more reliable?

Thanks a lot,
Jens


