> -----Original Message-----
> From: Moni Shoua <monis@xxxxxxxxxxxx>
> Sent: Wednesday, February 6, 2019 1:21 AM
> To: Parav Pandit <parav@xxxxxxxxxxxx>
> Cc: Leon Romanovsky <leon@xxxxxxxxxx>; Doug Ledford <dledford@xxxxxxxxxx>;
> Jason Gunthorpe <jgg@xxxxxxxxxxxx>; Leon Romanovsky <leonro@xxxxxxxxxxxx>;
> RDMA mailing list <linux-rdma@xxxxxxxxxxxxxxx>
> Subject: Re: [PATCH rdma-next 1/2] IB/mlx5: Protect against prefetch of invalid MR
>
> > >  	struct pf_frame *head = NULL, *frame;
> > >  	struct mlx5_core_mkey *mmkey;
> > >  	struct mlx5_ib_mr *mr;
> > > @@ -772,6 +777,10 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev, u32 key,
> > >  	switch (mmkey->type) {
> > >  	case MLX5_MKEY_MR:
> > >  		mr = container_of(mmkey, struct mlx5_ib_mr, mmkey);
> > > +
> > > +		if (deferred)
> > > +			atomic_dec(&mr->num_pending_prefetch);
> > > +
> >
> > The work item handler is still running on the MR at this point, so
> > flush_workqueue() would skip flushing it.
> > atomic_dec() should be done after pagefault_mr() completes, where we
> > are done with the MR.
>
> Here we are in an SRCU critical section, and dereg_mr() waits for it with
> synchronize_srcu() after flushing the workqueue.

Okay.
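
For readers following the thread, here is a minimal sketch of the ordering Moni
describes. It is an illustrative model under assumed names, not the actual mlx5
code: struct sketch_mr, prefetch_handler(), dereg_mr_sketch() and prefetch_wq
are made up for the example; only the SRCU/workqueue pattern itself comes from
the discussion above.

#include <linux/atomic.h>
#include <linux/slab.h>
#include <linux/srcu.h>
#include <linux/workqueue.h>

/* Illustrative stand-in for the MR state relevant to this discussion. */
struct sketch_mr {
	atomic_t num_pending_prefetch;
	/* ... mkey, umem, etc. ... */
};

DEFINE_STATIC_SRCU(mr_srcu);			/* per-device in mlx5 */
static struct workqueue_struct *prefetch_wq;	/* created elsewhere */

/* Prefetch / page-fault side: runs entirely inside an SRCU read section. */
static void prefetch_handler(struct sketch_mr *mr, bool deferred)
{
	int idx = srcu_read_lock(&mr_srcu);

	/*
	 * Dropping the counter before the work on the MR is finished is
	 * safe: the read-side critical section is still open, so the
	 * dereg path below cannot get past synchronize_srcu() yet.
	 */
	if (deferred)
		atomic_dec(&mr->num_pending_prefetch);

	/* ... resolve the page fault / prefetch the MR pages ... */

	srcu_read_unlock(&mr_srcu, idx);
}

/* Dereg side, mirroring the ordering described for dereg_mr(). */
static void dereg_mr_sketch(struct sketch_mr *mr)
{
	/* Drain prefetch work that was already queued for this MR. */
	flush_workqueue(prefetch_wq);

	/*
	 * Wait for any handler still inside its SRCU read section,
	 * including one that has already done the atomic_dec() above.
	 */
	synchronize_srcu(&mr_srcu);

	/* Only now is it safe to tear the MR down. */
	kfree(mr);
}

The exchange hinges on exactly this ordering: because the atomic_dec() happens
under srcu_read_lock(), doing it before the page fault work completes does not
let the dereg path free the MR underneath the still-running work item.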