" [ 1041.051398] rdma_rxe: loaded [ 1041.054536] infiniband rxe0: set active [ 1041.054540] infiniband rxe0: added enp0s8 [ 1086.287975] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1086.311546] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1086.399826] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1090.232785] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1090.255985] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1090.345427] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1094.024322] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1094.047569] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1094.136103] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1098.989344] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1099.015065] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1099.112970] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1103.040076] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1103.062831] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1103.151157] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1116.121772] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1116.144951] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1116.234607] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1131.655486] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1131.678700] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1131.766776] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1175.517166] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1175.540258] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1175.628214] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1190.760122] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1190.783243] rdma_rxe: cqe(1) < current # elements in queue (6) [ 1190.871167] rdma_rxe: cqe(32768) > max_cqe(32767) [ 1249.651921] rdma_rxe: rxe-pd pool destroyed with unfree'd elem [ 1249.651927] rdma_rxe: rxe-qp pool destroyed with unfree'd elem [ 1249.651929] rdma_rxe: rxe-cq pool destroyed with unfree'd elem [ 1255.227916] rdma_rxe: unloaded " After I added a rxe device on the netdev, then run rdma-core test tools. Then I remove rxe device, in the end, I unloaded rdma_rxe kernel modules. I found the above logs. " [ 1249.651921] rdma_rxe: rxe-pd pool destroyed with unfree'd elem [ 1249.651927] rdma_rxe: rxe-qp pool destroyed with unfree'd elem [ 1249.651929] rdma_rxe: rxe-cq pool destroyed with unfree'd elem " It seems that some resources leak. I will make further investigations. Zhu Yanjun On Fri, Jun 4, 2021 at 2:58 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote: > > On Tue, May 25, 2021 at 04:37:42PM -0500, Bob Pearson wrote: > > This series of patches implement memory windows for the rdma_rxe > > driver. This is a shorter reimplementation of an earlier patch set. > > They apply to and depend on the current for-next linux rdma tree. > > > > Signed-off-by: Bob Pearson <rpearsonhpe@xxxxxxxxx> > > --- > > v8: > > Dropped wr.mw.flags in the rxe_send_wr struct in rdma_user_rxe.h. > > v7: > > Fixed a duplicate INIT_RDMA_OBJ_SIZE(ib_mw, ...) in rxe_verbs.c. > > v6: > > Added rxe_ prefix to subroutine names in lines that changed > > from Zhu's review of v5. > > v5: > > Fixed a typo in 10th patch. > > v4: > > Added a 10th patch to check when MRs have bound MWs > > and disallow dereg and invalidate operations. > > v3: > > cleaned up void return and lower case enums from > > Zhu's review. 
Zhu Yanjun

On Fri, Jun 4, 2021 at 2:58 AM Jason Gunthorpe <jgg@xxxxxxxxxx> wrote:
>
> On Tue, May 25, 2021 at 04:37:42PM -0500, Bob Pearson wrote:
> > This series of patches implement memory windows for the rdma_rxe
> > driver. This is a shorter reimplementation of an earlier patch set.
> > They apply to and depend on the current for-next linux rdma tree.
> >
> > Signed-off-by: Bob Pearson <rpearsonhpe@xxxxxxxxx>
> > ---
> > v8:
> >   Dropped wr.mw.flags in the rxe_send_wr struct in rdma_user_rxe.h.
> > v7:
> >   Fixed a duplicate INIT_RDMA_OBJ_SIZE(ib_mw, ...) in rxe_verbs.c.
> > v6:
> >   Added rxe_ prefix to subroutine names in lines that changed
> >   from Zhu's review of v5.
> > v5:
> >   Fixed a typo in 10th patch.
> > v4:
> >   Added a 10th patch to check when MRs have bound MWs
> >   and disallow dereg and invalidate operations.
> > v3:
> >   cleaned up void return and lower case enums from
> >   Zhu's review.
> > v2:
> >   cleaned up an issue in rdma_user_rxe.h
> >   cleaned up a collision in rxe_resp.c
> >
> > Bob Pearson (10):
> >   RDMA/rxe: Add bind MW fields to rxe_send_wr
> >   RDMA/rxe: Return errors for add index and key
> >   RDMA/rxe: Enable MW object pool
> >   RDMA/rxe: Add ib_alloc_mw and ib_dealloc_mw verbs
> >   RDMA/rxe: Replace WR_REG_MASK by WR_LOCAL_OP_MASK
> >   RDMA/rxe: Move local ops to subroutine
> >   RDMA/rxe: Add support for bind MW work requests
> >   RDMA/rxe: Implement invalidate MW operations
> >   RDMA/rxe: Implement memory access through MWs
> >   RDMA/rxe: Disallow MR dereg and invalidate when bound
>
> This is all ready now, right?
>
> Can you put the userspace part on the github please?
>
> Jason
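For reference, the userspace part that Jason asks about is exposed
through the standard libibverbs memory-window verbs. Below is a
minimal, illustrative sketch of allocating and binding a type 2 MW;
pd, qp, mr, buf and len are assumed to already exist, and mr must have
been registered with IBV_ACCESS_MW_BIND:

#include <stdint.h>
#include <stddef.h>
#include <infiniband/verbs.h>

/* Allocate a type 2 memory window, bind it to a registered MR by
 * posting an IBV_WR_BIND_MW work request, and hand back the new rkey.
 */
static struct ibv_mw *bind_type2_mw(struct ibv_pd *pd, struct ibv_qp *qp,
				    struct ibv_mr *mr, void *buf,
				    size_t len, uint32_t *rkey)
{
	struct ibv_mw *mw = ibv_alloc_mw(pd, IBV_MW_TYPE_2);
	struct ibv_send_wr wr = {}, *bad_wr;

	if (!mw)
		return NULL;

	wr.opcode = IBV_WR_BIND_MW;
	wr.send_flags = IBV_SEND_SIGNALED;
	wr.bind_mw.mw = mw;
	wr.bind_mw.rkey = ibv_inc_rkey(mw->rkey);	/* new rkey for this binding */
	wr.bind_mw.bind_info.mr = mr;
	wr.bind_mw.bind_info.addr = (uintptr_t)buf;
	wr.bind_mw.bind_info.length = len;
	wr.bind_mw.bind_info.mw_access_flags = IBV_ACCESS_REMOTE_WRITE;

	if (ibv_post_send(qp, &wr, &bad_wr)) {
		ibv_dealloc_mw(mw);
		return NULL;
	}

	*rkey = wr.bind_mw.rkey;
	return mw;	/* caller polls the CQ for the bind completion,
			 * shares *rkey with the peer, and calls
			 * ibv_dealloc_mw() when done */
}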