Re: Have you ever encountered a similar CephFS deadlock stack?

> On Oct 22, 2018, at 23:06, ? ? <Mr.liuxuan@xxxxxxxxxxx> wrote:
> 
>  
> Hello:
> Have you ever encountered a similar CephFS deadlock stack?
>  
> [Sat Oct 20 15:11:40 2018] INFO: task nfsd:27191 blocked for more than 120 seconds.
> [Sat Oct 20 15:11:40 2018]       Tainted: G           OE  ------------   4.14.0-49.el7.centos.x86_64 #1
> [Sat Oct 20 15:11:40 2018] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> [Sat Oct 20 15:11:40 2018] nfsd            D    0 27191      2 0x80000080
> [Sat Oct 20 15:11:40 2018] Call Trace:
> [Sat Oct 20 15:11:40 2018]  __schedule+0x28d/0x880
> [Sat Oct 20 15:11:40 2018]  schedule+0x36/0x80
> [Sat Oct 20 15:11:40 2018]  rwsem_down_write_failed+0x20d/0x380
> [Sat Oct 20 15:11:40 2018]  ? ip_finish_output2+0x15d/0x390
> [Sat Oct 20 15:11:40 2018]  call_rwsem_down_write_failed+0x17/0x30
> [Sat Oct 20 15:11:40 2018]  down_write+0x2d/0x40
> [Sat Oct 20 15:11:40 2018]  ceph_write_iter+0x101/0xf00 [ceph]
> [Sat Oct 20 15:11:40 2018]  ? __ceph_caps_issued_mask+0x1ed/0x200 [ceph]
> [Sat Oct 20 15:11:40 2018]  ? nfsd_acceptable+0xa3/0xe0 [nfsd]
> [Sat Oct 20 15:11:40 2018]  ? exportfs_decode_fh+0xd2/0x3e0
> [Sat Oct 20 15:11:40 2018]  ? nfsd_proc_read+0x1a0/0x1a0 [nfsd]
> [Sat Oct 20 15:11:40 2018]  do_iter_readv_writev+0x10b/0x170
> [Sat Oct 20 15:11:40 2018]  do_iter_write+0x7f/0x190
> [Sat Oct 20 15:11:40 2018]  vfs_iter_write+0x19/0x30
> [Sat Oct 20 15:11:40 2018]  nfsd_vfs_write+0xc6/0x360 [nfsd]
> [Sat Oct 20 15:11:40 2018]  nfsd4_write+0x1b8/0x260 [nfsd]
> [Sat Oct 20 15:11:40 2018]  ? nfsd4_encode_operation+0x13f/0x1c0 [nfsd]
> [Sat Oct 20 15:11:40 2018]  nfsd4_proc_compound+0x3e0/0x810 [nfsd]
> [Sat Oct 20 15:11:40 2018]  nfsd_dispatch+0xc9/0x2f0 [nfsd]
> [Sat Oct 20 15:11:40 2018]  svc_process_common+0x385/0x710 [sunrpc]
> [Sat Oct 20 15:11:40 2018]  svc_process+0xfd/0x1c0 [sunrpc]
> [Sat Oct 20 15:11:40 2018]  nfsd+0xf3/0x190 [nfsd]
> [Sat Oct 20 15:11:40 2018]  kthread+0x109/0x140
> [Sat Oct 20 15:11:40 2018]  ? nfsd_destroy+0x60/0x60 [nfsd]
> [Sat Oct 20 15:11:40 2018]  ? kthread_park+0x60/0x60
> [Sat Oct 20 15:11:40 2018]  ret_from_fork+0x25/0x30

I have seen this before. If you hit this again, please run 'echo t > /proc/sysrq-trigger' and send us the kernel log.
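
For reference, a minimal sketch of how to capture that dump, assuming your kernel has sysrq support (CONFIG_MAGIC_SYSRQ) and you are running as root; the output file name here is only an example:

  # make sure the sysrq interface is enabled (1 enables all sysrq functions)
  echo 1 > /proc/sys/kernel/sysrq
  # 't' dumps the stack of every task into the kernel ring buffer
  echo t > /proc/sysrq-trigger
  # save the ring buffer so it can be attached to a reply
  dmesg > /tmp/sysrq-task-dump.txt

If the ring buffer is too small to hold the whole dump, the same lines should also reach syslog (e.g. /var/log/messages), depending on how logging is configured.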

Yan, Zheng

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





