On 4/14/23 13:37, Zhijian Li (Fujitsu) wrote:
On 14/04/2023 11:40, Guoqing Jiang wrote:
On 4/13/23 16:12, Zhijian Li (Fujitsu) wrote:
On 13/04/2023 15:35, Guoqing Jiang wrote:
Hi,
I take a closer look today.
On 4/12/23 09:15, Zhijian Li (Fujitsu) wrote:
On 11/04/2023 20:26, Leon Romanovsky wrote:
On Tue, Apr 11, 2023 at 02:43:46AM +0000, Zhijian Li (Fujitsu) wrote:
On 10/04/2023 21:10, Guoqing Jiang wrote:
On 4/10/23 20:08, Leon Romanovsky wrote:
On Mon, Apr 10, 2023 at 06:43:03AM +0000, Li Zhijian wrote:
The warning occurs when destroying a PD whose reference count is not zero.
Precondition: clt_path->s.con_num is 2.
So two CM connections will be created as below:
CPU0                                  CPU1
init_conns {                          |
  create_cm() // a. con[0] created    |
                                      | a'. rtrs_clt_rdma_cm_handler() {
                                      |       rtrs_rdma_addr_resolved()
                                      |         create_con_cq_qp(con); << con[0]
                                      |     }
                                      | at this point, the refcnt of the PD has been increased to 2+
What do you mean by "refcnt of PD": the usecnt in struct ib_pd, or dev_ref?
I mean the usecnt in struct ib_pd.
                                      |
  create_cm() // b. cid = 1, failed   |
    destroy_con_cq_qp()               |
      rtrs_ib_dev_put()               |
        dev_free()                    |
          ib_dealloc_pd(dev->ib_pd)   |
          << PD is destroyed, but     |
             refcnt is still          |
             greater than 0           |
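(For reference, the warning being reported comes from the usecnt check on the PD dealloc path, ib_dealloc_pd_user() in drivers/infiniband/core/verbs.c. Below is a paraphrase of that check, simplified and renamed, not the verbatim kernel source:)

#include <rdma/ib_verbs.h>

/* Paraphrase of the usecnt check in ib_dealloc_pd_user(); simplified
 * and renamed, not the verbatim source. The reported warning fires
 * because the PD still has users (here, con[0]'s QP) at dealloc time. */
static void paraphrased_dealloc_pd(struct ib_pd *pd)
{
        WARN_ON(atomic_read(&pd->usecnt));   /* <-- the reported warning */
        /* ... the driver's dealloc_pd() op then runs and the PD is freed ... */
}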
Assuming you mean "pd->usecnt": we only allocate the PD in con[0] via rtrs_ib_dev_find_or_add, and if con[1] fails to create its cm, then alloc_path_reqs -> ib_alloc_mr -> atomic_inc(&pd->usecnt)
can't be triggered. So the above can't be invoked, right? Are there other places that could increase the refcnt?
Yes. When a QP is created, it is also associated with this PD, which means the refcnt of the PD is increased as well.
When con[0] (create_con_cq_qp) succeeded, the refcnt of the PD became 2; then, when con[1] failed, since its QP was never created, the refcnt of the PD is still 2. con[1]'s cleanup will destroy the PD (ib_dealloc_pd) since dev_ref = 1, and after that the PD's refcnt is still 1.
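(To make the association explicit: the QP is created against the PD, e.g. via rdma_create_qp(), which takes the PD as a parameter. A simplified sketch with a hypothetical helper name, not the actual rtrs code:)

#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>

/* Simplified sketch (hypothetical helper, not rtrs code): creating the
 * QP against the PD is what bumps pd->usecnt on the success path, via
 * ib_create_qp()/ib_qp_usecnt_inc() inside rdma_create_qp(). */
static int sketch_create_qp(struct rdma_cm_id *cm_id, struct ib_pd *pd,
                            struct ib_cq *cq)
{
        struct ib_qp_init_attr attr = {
                .send_cq = cq,
                .recv_cq = cq,
                .qp_type = IB_QPT_RC,
                .cap = {
                        .max_send_wr  = 16,
                        .max_recv_wr  = 16,
                        .max_send_sge = 1,
                        .max_recv_sge = 1,
                },
        };

        return rdma_create_qp(cm_id, pd, &attr); /* pd->usecnt++ on success */
}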
I can see the path that increases the usecnt to 1:
rtrs_cq_qp_create -> create_qp
  -> rdma_create_qp
    -> ib_create_qp
      -> create_qp
        -> ib_qp_usecnt_inc, which increases pd->usecnt
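(Roughly, ib_qp_usecnt_inc() does the following; this is a paraphrase from my reading of drivers/infiniband/core/verbs.c, renamed to mark it as such, not the verbatim source:)

#include <rdma/ib_verbs.h>

/* Paraphrase of ib_qp_usecnt_inc(); simplified and renamed. Each object
 * the new QP is associated with gets its usecnt bumped, and this is
 * where the PD's usecnt is incremented on the create path above. */
static void paraphrased_qp_usecnt_inc(struct ib_qp *qp)
{
        if (qp->pd)
                atomic_inc(&qp->pd->usecnt);
        if (qp->send_cq)
                atomic_inc(&qp->send_cq->usecnt);
        if (qp->recv_cq)
                atomic_inc(&qp->recv_cq->usecnt);
        if (qp->srq)
                atomic_inc(&qp->srq->usecnt);
}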
Where is another place that increases the usecnt to 2?
It should be:
ib_create_qp ...
  -> rxe_create_qp
    -> rxe_qp_from_init
      -> rxe_get(pd) <<< pd's refcnt will be increased.
Doesn't rxe_get just increase elem->ref_cnt?
https://elixir.bootlin.com/linux/v6.3-rc6/source/drivers/infiniband/sw/rxe/rxe_pool.c#L240
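(My reading of v6.3-rc6 is the same: rxe_get(pd) expands to __rxe_get(&pd->elem) and only takes a reference on the rxe pool element, not on pd->usecnt. A renamed, simplified paraphrase, not the verbatim source:)

#include <linux/kref.h>

/* Simplified stand-in for struct rxe_pool_elem; the real definition is
 * in drivers/infiniband/sw/rxe/rxe_pool.h and has more members. */
struct sketch_pool_elem {
        struct kref ref_cnt;
};

/* Paraphrase of rxe_get()/__rxe_get() (rxe_pool.h / rxe_pool.c), renamed
 * and simplified: it takes a reference on the rxe pool element only and
 * never touches ib_pd.usecnt. */
#define paraphrased_rxe_get(obj) __paraphrased_rxe_get(&(obj)->elem)

static int __paraphrased_rxe_get(struct sketch_pool_elem *elem)
{
        return kref_get_unless_zero(&elem->ref_cnt);
}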
Then what is the appropriate time to call destroy_con_cq_qp in this scenario?
Otherwise there could be a memory leak.
We must ensure the QP in con[0] is closed before destroying the PD.
Currently, the destroy_con_cq_qp() subroutine closes the opened QP first.
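(A minimal sketch of that ordering constraint, with a hypothetical structure and names rather than the actual rtrs code: the QP holds a reference on the PD through pd->usecnt, so it has to be destroyed first.)

#include <rdma/ib_verbs.h>

/* Hypothetical, simplified connection layout; not the rtrs structures. */
struct sketch_con {
        struct ib_qp *qp;
        struct ib_pd *pd;
};

/* Destroying the QP drops its reference on the PD (pd->usecnt--), so it
 * must happen before ib_dealloc_pd(); otherwise usecnt is still non-zero
 * and the dealloc path hits the WARN_ON shown above. */
static void sketch_teardown(struct sketch_con *con)
{
        if (con->qp) {
                ib_destroy_qp(con->qp);        /* pd->usecnt-- */
                con->qp = NULL;
        }
        ib_dealloc_pd(con->pd);                /* usecnt can now be 0 */
        con->pd = NULL;
}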
Let me try another way: with the change below, rtrs_ib_dev_put can't be called from destroy_con_cq_qp, right?
Not really: con[0]->has_dev is true, so con[0]'s cleanup will still call rtrs_ib_dev_put().
Without this patch, when con[1] fails, con[1]'s cleanup is called first, and then con[0]'s cleanup.
After this change, con[1]'s cleanup will not call rtrs_ib_dev_put, but it will still be called by the later cleanup of con[0].
But rtrs_ib_dev_put relies on dev_ref; if con[1] returns early without decreasing dev_ref
(it is shared among connections), how can rtrs_ib_dev_put be called?
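To illustrate the concern with a simplified, hypothetical model (not the actual rtrs code; only the dev_ref idea and the put come from rtrs): the device and its PD are only released by the put that drops dev_ref to zero, so a connection that skips its put leaves them allocated forever.

#include <linux/refcount.h>
#include <linux/slab.h>
#include <rdma/ib_verbs.h>

/* Hypothetical, simplified model of a dev_ref-counted device; not the
 * actual rtrs structures or code. Whoever took a reference must drop it
 * exactly once, otherwise the final put below is never reached and the
 * PD and device are leaked. */
struct sketch_ib_dev {
        refcount_t dev_ref;
        struct ib_pd *ib_pd;
};

static void sketch_ib_dev_put(struct sketch_ib_dev *dev)
{
        if (refcount_dec_and_test(&dev->dev_ref)) {
                ib_dealloc_pd(dev->ib_pd);     /* safe only after all QPs/MRs
                                                * have dropped pd->usecnt */
                kfree(dev);
        }
}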
Thanks,
Guoqing