>>>-----Original Message-----
>>>From: Pavel Shilovsky <piastryyy@xxxxxxxxx>
>>>Sent: Thursday, May 9, 2019 11:01 AM
>>>To: Long Li <longli@xxxxxxxxxxxxx>
>>>Cc: Steve French <sfrench@xxxxxxxxx>; linux-cifs <linux-cifs@xxxxxxxxxxxxxxx>;
>>>samba-technical <samba-technical@xxxxxxxxxxxxxxx>; Kernel Mailing List
>>><linux-kernel@xxxxxxxxxxxxxxx>
>>>Subject: Re: [Patch (resend) 5/5] cifs: Call MID callback before destroying transport
>>>
>>>Fri, 5 Apr 2019 at 14:39, Long Li <longli@xxxxxxxxxxxxxxxxx>:
>>>>
>>>> From: Long Li <longli@xxxxxxxxxxxxx>
>>>>
>>>> When transport is being destroyed, it's possible that some processes
>>>> may hold memory registrations that need to be deregistered.
>>>>
>>>> Call them first so nobody is using transport resources, and it can be
>>>> destroyed.
>>>>
>>>> Signed-off-by: Long Li <longli@xxxxxxxxxxxxx>
>>>> ---
>>>>  fs/cifs/connect.c | 36 +++++++++++++++++++-----------------
>>>>  1 file changed, 19 insertions(+), 17 deletions(-)
>>>>
>>>> diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
>>>> index 33e4d98..084756cf 100644
>>>> --- a/fs/cifs/connect.c
>>>> +++ b/fs/cifs/connect.c
>>>> @@ -528,22 +528,6 @@ cifs_reconnect(struct TCP_Server_Info *server)
>>>>         /* do not want to be sending data on a socket we are freeing */
>>>>         cifs_dbg(FYI, "%s: tearing down socket\n", __func__);
>>>>         mutex_lock(&server->srv_mutex);
>>>> -       if (server->ssocket) {
>>>> -               cifs_dbg(FYI, "State: 0x%x Flags: 0x%lx\n",
>>>> -                        server->ssocket->state, server->ssocket->flags);
>>>> -               kernel_sock_shutdown(server->ssocket, SHUT_WR);
>>>> -               cifs_dbg(FYI, "Post shutdown state: 0x%x Flags: 0x%lx\n",
>>>> -                        server->ssocket->state, server->ssocket->flags);
>>>> -               sock_release(server->ssocket);
>>>> -               server->ssocket = NULL;
>>>> -       } else if (cifs_rdma_enabled(server))
>>>> -               smbd_destroy(server);
>>>> -       server->sequence_number = 0;
>>>> -       server->session_estab = false;
>>>> -       kfree(server->session_key.response);
>>>> -       server->session_key.response = NULL;
>>>> -       server->session_key.len = 0;
>>>> -       server->lstrp = jiffies;
>>>>
>>>>         /* mark submitted MIDs for retry and issue callback */
>>>>         INIT_LIST_HEAD(&retry_list);
>>>> @@ -556,7 +540,6 @@ cifs_reconnect(struct TCP_Server_Info *server)
>>>>                         list_move(&mid_entry->qhead, &retry_list);
>>>>         }
>>>>         spin_unlock(&GlobalMid_Lock);
>>>> -       mutex_unlock(&server->srv_mutex);
>>>>
>>>>         cifs_dbg(FYI, "%s: issuing mid callbacks\n", __func__);
>>>>         list_for_each_safe(tmp, tmp2, &retry_list) {
>>>> @@ -565,6 +548,25 @@ cifs_reconnect(struct TCP_Server_Info *server)
>>>>                 mid_entry->callback(mid_entry);
>>>>         }
>>>
>>>The original call was issuing callbacks without holding srv_mutex - callbacks
>>>may take this mutex for its internal needs. With the proposed patch the
>>>code will deadlock.
>>>
>>>Also the idea of destroying the socket first is to allow possible retries (from
>>>callbacks) to return a proper error instead of trying to send anything through
>>>the reconnecting socket.

I will send a patch to revert this one and follow your suggestion of moving smbd_destroy() to after all the MID callbacks have been issued. Your suggestion tested well.
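Roughly, the follow-up I have in mind on top of the revert is to leave the socket teardown and session state reset where they were and only move the RDMA teardown to after the callback loop. A rough sketch of the direction, not the final patch (taking srv_mutex around smbd_destroy() here is my assumption and still needs checking):

        cifs_dbg(FYI, "%s: issuing mid callbacks\n", __func__);
        list_for_each_safe(tmp, tmp2, &retry_list) {
                mid_entry = list_entry(tmp, struct mid_q_entry, qhead);
                list_del_init(&mid_entry->qhead);
                mid_entry->callback(mid_entry);
        }

        /*
         * Tear down the RDMA transport only after every MID callback has
         * run, so nobody is still holding a memory registration on it.
         */
        if (cifs_rdma_enabled(server)) {
                mutex_lock(&server->srv_mutex);
                smbd_destroy(server);
                mutex_unlock(&server->srv_mutex);
        }

The callbacks themselves still run with srv_mutex released, as you pointed out, and the socket shutdown stays under the mutex exactly as before this patch.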
Thanks,
Long

>>>
>>>>
>>>> +       if (server->ssocket) {
>>>> +               cifs_dbg(FYI, "State: 0x%x Flags: 0x%lx\n",
>>>> +                        server->ssocket->state, server->ssocket->flags);
>>>> +               kernel_sock_shutdown(server->ssocket, SHUT_WR);
>>>> +               cifs_dbg(FYI, "Post shutdown state: 0x%x Flags: 0x%lx\n",
>>>> +                        server->ssocket->state, server->ssocket->flags);
>>>> +               sock_release(server->ssocket);
>>>> +               server->ssocket = NULL;
>>>> +       } else if (cifs_rdma_enabled(server))
>>>> +               smbd_destroy(server);
>>>
>>>If we need to call smbd_destroy() *after* callbacks, let's just move it alone
>>>without the rest of the code.
>>>
>>>
>>>> +       server->sequence_number = 0;
>>>> +       server->session_estab = false;
>>>> +       kfree(server->session_key.response);
>>>> +       server->session_key.response = NULL;
>>>> +       server->session_key.len = 0;
>>>> +       server->lstrp = jiffies;
>>>> +
>>>> +       mutex_unlock(&server->srv_mutex);
>>>> +
>>>>         do {
>>>>                 try_to_freeze();
>>>>
>>>> --
>>>> 2.7.4
>>>>
>>>
>>>
>>>--
>>>Best regards,
>>>Pavel Shilovsky