On Tue, Aug 28, 2018 at 02:29:05PM +0300, Leon Romanovsky wrote:
> From: Majd Dibbiny <majd@xxxxxxxxxxxx>
>
> In the current code, the TX affinity is per RoCE device, which can cause
> unfairness between different contexts (e.g. if we open two contexts, and each
> opens 10 QPs concurrently, all of the QPs of the first context might end up on
> the first port instead of distributed on the two ports as expected).
>
> To overcome this unfairness between processes, we maintain per device TX
> affinity, and per process TX affinity.
>
> The allocation algorithm is as follows:
>
> 1. Hold two tx_port_affinity atomic variables, one per RoCE device and one per
>    ucontext. Both initialized to 0.
>
> 2. In mlx5_ib_alloc_ucontext do:
>    2.1. ucontext.tx_port_affinity = device.tx_port_affinity
>    2.2. device.tx_port_affinity += 1
>
> 3. In modify QP INIT2RST:
>    3.1. qp.tx_port_affinity = ucontext.tx_port_affinity % MLX5_PORT_NUM
>    3.2. ucontext.tx_port_affinity += 1
>
> Signed-off-by: Majd Dibbiny <majd@xxxxxxxxxxxx>
> Reviewed-by: Moni Shoua <monis@xxxxxxxxxxxx>
> Signed-off-by: Leon Romanovsky <leonro@xxxxxxxxxxxx>
> ---
>  drivers/infiniband/hw/mlx5/main.c    |  6 ++++++
>  drivers/infiniband/hw/mlx5/mlx5_ib.h |  4 +++-
>  drivers/infiniband/hw/mlx5/qp.c      | 32 ++++++++++++++++++++++++++++----
>  3 files changed, 37 insertions(+), 5 deletions(-)

Applied to for-next

> diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
> index 0510dd7c13c5..547fd4f50bd4 100644
> --- a/drivers/infiniband/hw/mlx5/main.c
> +++ b/drivers/infiniband/hw/mlx5/main.c
> @@ -1855,6 +1855,12 @@ static struct ib_ucontext *mlx5_ib_alloc_ucontext(struct ib_device *ibdev,
>  	context->lib_caps = req.lib_caps;
>  	print_lib_caps(dev, context->lib_caps);
>
> +	if (mlx5_lag_is_active(dev->mdev)) {
> +		u8 port = mlx5_core_native_port_num(dev->mdev);
> +		atomic_set(&context->tx_port_affinity,
> +			   atomic_add_return(1, &dev->roce[port].tx_port_affinity));

checkpatch says there is a missing blank line here (after the declaration);
I fixed that, along with the various unnecessarily long lines.

Thanks,
Jason
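
[Editor's note: for readers following the commit message, below is a minimal
standalone sketch of the two-counter scheme in steps 1-3, written with C11
atomics in userspace. Everything named demo_* and DEMO_PORT_NUM is invented
for illustration; it is not the mlx5_ib code or API. It also follows the
steps exactly as written (read the counter, then bump it), which differs
trivially from the atomic_add_return() seeding seen in the diff.]

/*
 * Sketch of the per-device / per-ucontext TX affinity counters described
 * in the commit message above. All demo_* names are invented.
 */
#include <stdatomic.h>
#include <stdio.h>

#define DEMO_PORT_NUM 2			/* stand-in for MLX5_PORT_NUM */

struct demo_dev {
	atomic_uint tx_port_affinity;	/* per RoCE device, starts at 0 */
};

struct demo_ucontext {
	atomic_uint tx_port_affinity;	/* per process (ucontext), starts at 0 */
};

/* Step 2: seed the ucontext counter from the device counter, then bump
 * the device counter so the next context starts on the other port. */
static void demo_alloc_ucontext(struct demo_dev *dev, struct demo_ucontext *uctx)
{
	atomic_init(&uctx->tx_port_affinity,
		    atomic_fetch_add(&dev->tx_port_affinity, 1));
}

/* Step 3: pick the QP's port from the ucontext counter, then bump it so
 * the next QP of the same process goes to the other port. */
static unsigned int demo_assign_qp_affinity(struct demo_ucontext *uctx)
{
	return atomic_fetch_add(&uctx->tx_port_affinity, 1) % DEMO_PORT_NUM;
}

int main(void)
{
	struct demo_dev dev;
	struct demo_ucontext a, b;

	atomic_init(&dev.tx_port_affinity, 0);
	demo_alloc_ucontext(&dev, &a);	/* context A seeded with 0 */
	demo_alloc_ucontext(&dev, &b);	/* context B seeded with 1 */

	/* Two QPs per context: A gets ports 0,1 and B gets 1,0 instead of
	 * all four QPs piling onto one port. */
	for (int i = 0; i < 2; i++)
		printf("ctx A qp%d -> port %u, ctx B qp%d -> port %u\n",
		       i, demo_assign_qp_affinity(&a),
		       i, demo_assign_qp_affinity(&b));
	return 0;
}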