On Fri, Dec 11, 2020 at 08:58:09AM +0100, Jinpu Wang wrote:
> On Fri, Dec 11, 2020 at 8:53 AM Jinpu Wang <jinpu.wang@xxxxxxxxxxxxxxx> wrote:
> >
> > On Fri, Dec 11, 2020 at 8:26 AM Leon Romanovsky <leon@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Dec 11, 2020 at 07:50:13AM +0100, Jinpu Wang wrote:
> > > > On Fri, Dec 11, 2020 at 3:35 AM Guoqing Jiang
> > > > <guoqing.jiang@xxxxxxxxxxxxxxx> wrote:
> > > > >
> > > > > On 12/10/20 15:56, Jinpu Wang wrote:
> > > > > > On Wed, Dec 9, 2020 at 5:45 PM Jack Wang <jinpu.wang@xxxxxxxxxxxxxxx> wrote:
> > > > > >>
> > > > > >> If there are many session establishments/teardowns, we need to
> > > > > >> make sure we do not consume too much system memory. Thus let
> > > > > >> ongoing session teardown finish before accepting new connections.
> > > > > >>
> > > > > >> Inspired by commit 777dc82395de ("nvmet-rdma: occasionally flush ongoing controller teardown")
> > > > > >> Signed-off-by: Jack Wang <jinpu.wang@xxxxxxxxxxxxxxx>
> > > > > >> Reviewed-by: Guoqing Jiang <guoqing.jiang@xxxxxxxxxxxxxxx>
> > > > > >
> > > > > > Please ignore this one, it could lead to a deadlock:
> > > > > > cma_ib_req_handler holds mutex_lock(&listen_id->handler_mutex)
> > > > > > when calling into rtrs_rdma_connect, where we flush close_work,
> > > > > > which calls rdma_destroy_id, which could try to take the same
> > > > > > handler_mutex, hence the deadlock.
> > > > >
> > > > > I am wondering if nvmet-rdma has a similar issue or not; if so,
> > > > > maybe introduce a locked version of rdma_destroy_id.
> > > > >
> > > > > Thanks,
> > > > > Guoqing
> > > >
> > > > No, I was wrong. I rechecked the code, and it's not a valid deadlock:
> > > > in cma_ib_req_handler, the conn_id is newly created in
> > > > https://elixir.bootlin.com/linux/latest/source/drivers/infiniband/core/cma.c#L2185
> > > >
> > > > flush_workqueue will only flush close_work for other cm_ids, but not
> > > > for the newly created conn_id, which has not been associated with
> > > > anything yet.
> > > >
> > > > The same applies to nvme-rdma, so it's a false alarm from lockdep.
> > >
> > > Leaving this without a fix (proper lock annotation) is not the right
> > > thing to do, because everyone who runs the rtrs code with LOCKDEP on
> > > will hit the same "false alarm".
> > >
> > > So I recommend either not taking this patch, or rewriting it so it
> > > does not trigger the LOCKDEP warning.
> >
> > Hi Leon,
> >
> > I'm thinking about the same; do you have a suggestion on how to teach
> > LOCKDEP that this is not really a deadlock? I do not know LOCKDEP well.
>
> Found it myself: we can use lockdep_off, as in
> https://elixir.bootlin.com/linux/latest/source/drivers/virtio/virtio_mem.c#L699

My personal experience from internal/external reviews shows that claims
about false alarms in LOCKDEP warnings are almost always wrong.

Thanks

> > Thanks
> > >
> > > Thanks
> > > >
> > > > Regards!
> > > > Jack
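For reference, the shape of the change being discussed could be sketched as
below. This is only an illustration pieced together from the thread, not the
actual patch: the rtrs_rdma_connect() signature and the rtrs_wq workqueue name
are assumptions, and note that Leon explicitly pushes back on silencing lockdep
rather than annotating the locks properly.

    /*
     * Sketch only; function signature and the rtrs_wq workqueue name
     * are assumed from context, not taken from the patch.
     *
     * cma_ib_req_handler() calls this with the listening cm_id's
     * handler_mutex held. close_work for *other* sessions takes the
     * handler_mutex of *their own* cm_ids, so the flush cannot
     * deadlock, but lockdep treats all handler_mutex instances as one
     * class and reports a false positive. lockdep_off()/lockdep_on()
     * silences it, as virtio_mem does; per Leon's reply, a proper lock
     * annotation would be preferable.
     */
    static int rtrs_rdma_connect(struct rdma_cm_id *cm_id,
                                 const void *info, size_t len)
    {
            /*
             * Let ongoing session teardown finish before accepting a
             * new connection, so repeated connect/teardown cycles do
             * not pile up and exhaust memory (cf. commit 777dc82395de).
             */
            lockdep_off();
            flush_workqueue(rtrs_wq);
            lockdep_on();

            /* ... existing connection-request handling ... */
            return 0;
    }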