Re: NFS regression between 5.17 and 5.18

On 5/13/22 10:59 AM, Chuck Lever III wrote:
>>>
>>> Ran a test with -rc6 and this time see a hung task trace on the console as well as an NFS RPC error.
>>>
>>> [32719.991175] nfs: RPC call returned error 512
>>> .
>>> .
>>> .
>>> [32933.285126] INFO: task kworker/u145:23:886141 blocked for more than 122 seconds.
>>> [32933.293543]       Tainted: G S                5.18.0-rc6 #1
>>> [32933.299869] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>> [32933.308740] task:kworker/u145:23 state:D stack:    0 pid:886141 ppid:     2 flags:0x00004000
>>> [32933.318321] Workqueue: rpciod rpc_async_schedule [sunrpc]
>>> [32933.324524] Call Trace:
>>> [32933.327347]  <TASK>
>>> [32933.329785]  __schedule+0x3dd/0x970
>>> [32933.333783]  schedule+0x41/0xa0
>>> [32933.337388]  xprt_request_dequeue_xprt+0xd1/0x140 [sunrpc]
>>> [32933.343639]  ? prepare_to_wait+0xd0/0xd0
>>> [32933.348123]  ? rpc_destroy_wait_queue+0x10/0x10 [sunrpc]
>>> [32933.354183]  xprt_release+0x26/0x140 [sunrpc]
>>> [32933.359168]  ? rpc_destroy_wait_queue+0x10/0x10 [sunrpc]
>>> [32933.365225]  rpc_release_resources_task+0xe/0x50 [sunrpc]
>>> [32933.371381]  __rpc_execute+0x2c5/0x4e0 [sunrpc]
>>> [32933.376564]  ? __switch_to_asm+0x42/0x70
>>> [32933.381046]  ? finish_task_switch+0xb2/0x2c0
>>> [32933.385918]  rpc_async_schedule+0x29/0x40 [sunrpc]
>>> [32933.391391]  process_one_work+0x1c8/0x390
>>> [32933.395975]  worker_thread+0x30/0x360
>>> [32933.400162]  ? process_one_work+0x390/0x390
>>> [32933.404931]  kthread+0xd9/0x100
>>> [32933.408536]  ? kthread_complete_and_exit+0x20/0x20
>>> [32933.413984]  ret_from_fork+0x22/0x30
>>> [32933.418074]  </TASK>
>>>
>>> The call trace shows up again at 245, 368, and 491 seconds. Same
>>> task, same trace.
>>>
>>
>> That's very helpful. The above trace suggests that the RDMA code is
>> leaking a call to xprt_unpin_rqst().
> 
> IMHO this is unlikely to be related to the performance
> regression -- none of this code has changed in the past 5
> kernel releases. Could be a different issue, though.
> 
> As is often the case in these situations, the INFO trace
> above happens long after the issue that caused the missing
> unpin. So... unless Dennis has a reproducer that can trigger
> the issue frequently, I don't think there's much that can
> be extracted from that.

To be fair, I've only seen this trace once, while the performance regression
has been there since -rc1.
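
For my own reference, the pairing Trond means looks roughly like this (a
simplified sketch of the receive-side pattern around xprt_lookup_rqst(),
based on my reading of net/sunrpc/xprt.c users; the real rpcrdma reply
handler is more involved):

#include <linux/sunrpc/xprt.h>

/* Sketch: look up the rqst by xid, pin it so it stays alive while the
 * reply data is copied without holding the lock, then complete the
 * request and drop the pin. */
static void example_reply_handler(struct rpc_xprt *xprt, __be32 xid,
				  int copied)
{
	struct rpc_rqst *req;

	spin_lock(&xprt->queue_lock);
	req = xprt_lookup_rqst(xprt, xid);
	if (!req) {
		spin_unlock(&xprt->queue_lock);
		return;
	}
	xprt_pin_rqst(req);	/* keep req alive after dropping the lock */
	spin_unlock(&xprt->queue_lock);

	/* ... copy the reply into req->rq_rcv_buf ... */

	spin_lock(&xprt->queue_lock);
	xprt_complete_rqst(req->rq_task, copied);
	xprt_unpin_rqst(req);	/* every exit path needs the unpin */
	spin_unlock(&xprt->queue_lock);
}

If an error path ever returns after the pin without the matching unpin,
xprt_release() later sleeps forever waiting for the pin count to drop,
which is exactly where xprt_request_dequeue_xprt() sits in the trace above.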

> Also "nfs: RPC call returned error 512" suggests someone
> hit ^C at some point. It's always possible that the
> xprt_rdma_free() path is missing an unpin. But again,
> that's not likely to be related to performance.

I've checked our test code: after 10 minutes it gives up trying to do the
NFS copies and aborts the test with SIGINT.
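
That also lines up with the 512 itself: as far as I can tell it is the
kernel-internal ERESTARTSYS, i.e. the RPC wait was interrupted by a
signal. The value is from include/linux/errno.h (the comment is mine):

/* Kernel-internal errno, never returned to userspace in normal
 * operation; an RPC call interrupted by a signal (like our SIGINT)
 * shows up in dmesg as "RPC call returned error 512". */
#define ERESTARTSYS	512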

So across all my tests and bisect attempts, it has apparently been possible
for quite some time to hit a slow NFS operation that hangs for minutes.
In 5.18, however, it gets much worse.

Are there any likely places where I should add traces to find out what's
stuck or taking so long?
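
In the meantime, this is the kind of throwaway debug hack I had in mind,
called just before the wait in xprt_request_dequeue_xprt() (hypothetical,
against my reading of 5.18-rc6's net/sunrpc/xprt.c, not for merging):

#include <linux/printk.h>
#include <linux/sunrpc/xprt.h>

/* Shout before xprt_release() goes to sleep on a still-pinned
 * request, so dmesg records which xid never saw its matching
 * xprt_unpin_rqst(). */
static void xprt_warn_if_pinned(struct rpc_rqst *req)
{
	if (xprt_is_pinned_rqst(req))
		pr_warn("xprt: waiting on pinned rqst, xid=0x%08x\n",
			be32_to_cpu(req->rq_xid));
}

That would at least distinguish a missing unpin from an RPC that is
genuinely slow on the wire.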

-Denny


