Hi Jason,

On Tue, Jan 12, 2021 at 8:13 PM Jason Gunthorpe <jgg@xxxxxxxx> wrote:
>
> On Thu, Dec 17, 2020 at 03:18:58PM +0100, Jack Wang wrote:
> > If there are many establishments/teardowns, we need to make sure
> > we do not consume too much system memory. Thus let on going
> > session closing to finish before accepting new connection.
>
> Then just limit it, why this scheme?
Will think about it, thanks for the suggestion.

> > In cma_ib_req_handler, the conn_id is newly created holding
> > handler_mutex when call this function, and flush_workqueue
> > wait for close_work to finish, in close_work rdma_destroy_id
> > will be called, which will hold the handler_mutex, but they
> > are mutex for different rdma_cm_id.
>
> No, there are multiple handler locks held here, and the new one is
> already marked nested, so isn't even the thing triggering lockdep.
>
> The locking for CM is already bonkers, I don't want to see drivers
> turning off lockdep. How are you sure that work queue doesn't become
> (and won't ever in the future) become entangled with the listening
> handler_mutex?
IIUC, only the newly created conn_id is passed to and saved in the ULP.
But I understand your concern; I will drop this patch and think about
another solution.

Do you need a resend/rebase for the rest of the patchset?

> Jason
Thanks!