On Fri, Dec 11, 2020 at 3:35 AM Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx> wrote:
>
> On 12/10/20 15:56, Jinpu Wang wrote:
> > On Wed, Dec 9, 2020 at 5:45 PM Jack Wang <jinpu.wang@xxxxxxxxxxxxxxx> wrote:
> >>
> >> If there are many establishments/teardowns, we need to make sure
> >> we do not consume too much system memory. Thus let ongoing
> >> session closing finish before accepting new connections.
> >>
> >> Inspired by commit 777dc82395de ("nvmet-rdma: occasionally flush ongoing controller teardown")
> >> Signed-off-by: Jack Wang <jinpu.wang@xxxxxxxxxxxxxxx>
> >> Reviewed-by: Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx>
> >
> > Please ignore this one, it could lead to a deadlock:
> > cma_ib_req_handler holds mutex_lock(&listen_id->handler_mutex) when
> > calling into rtrs_rdma_connect; there we call close_work, which
> > calls rdma_destroy_id, which could try to take the same
> > handler_mutex, hence the deadlock.
>
> I am wondering if nvmet-rdma has a similar issue or not; if so, maybe
> introduce a locked version of rdma_destroy_id.
>
> Thanks,
> Guoqing

No, I was wrong. I rechecked the code, and it is not a valid deadlock:
in cma_ib_req_handler, the conn_id is newly created in
https://elixir.bootlin.com/linux/latest/source/drivers/infiniband/core/cma.c#L2185.
flush_workqueue() will only flush close_work for other cm_ids, not for
the newly created conn_id, which has not been associated with anything
yet. The same applies to nvme-rdma.

So it's a false alarm from lockdep; see the sketch below for the
pattern in question.

Regards!
Jack
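P.S. For the record, the pattern being discussed looks roughly like
this (a minimal sketch modeled on commit 777dc82395de; the placement
at the top of rtrs_rdma_connect and the rtrs_wq name are my
assumptions here, not the final patch):

#include <linux/workqueue.h>
#include <rdma/rdma_cm.h>

/* rtrs-srv queues close_work for dying sessions on a module-wide
 * workqueue; "rtrs_wq" stands in for that queue (assumed name). */
static struct workqueue_struct *rtrs_wq;

static int rtrs_rdma_connect(struct rdma_cm_id *cm_id,
			     const void *pdata, size_t pdata_len)
{
	/*
	 * We are called from cma_ib_req_handler() with
	 * listen_id->handler_mutex held, but cm_id is the freshly
	 * created conn_id that no session owns yet.  Flushing therefore
	 * only waits on close_work of *other* sessions, whose
	 * rdma_destroy_id() takes *their* handler_mutex, never the one
	 * held here, which is why lockdep's report is a false alarm.
	 */
	flush_workqueue(rtrs_wq);

	/* ... the usual connect-request handling follows ... */
	return 0;
}

nvmet-rdma does the equivalent on its connect path: before setting up
a new controller it flushes the workqueue its release work is
scheduled on, so inflight controller teardown completes first.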