In do_slave_init() in drivers/infiniband/hw/mlx4/main.c, the function is
supposed to queue up a work struct for each port. However, it first checks
whether SR-IOV support is going down. When it is, the work struct is never
queued, and we leak it at the end of the function. Fix this by kfree()ing
any work struct that we do not queue.

The routine was also sub-optimal in its loop handling. Instead of taking
and releasing the spin lock on every iteration, take it once, quickly loop
over the work that must be done under the lock, and then release it.

Signed-off-by: Doug Ledford <dledford@xxxxxxxxxx>
---
 drivers/infiniband/hw/mlx4/main.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index 064454aee863..3f21a5565af2 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -2681,19 +2681,21 @@ static void do_slave_init(struct mlx4_ib_dev *ibdev, int slave, int do_init)
 				kfree(dm[i]);
 			goto out;
 		}
-	}
-	/* initialize or tear down tunnel QPs for the slave */
-	for (i = 0; i < ports; i++) {
 		INIT_WORK(&dm[i]->work, mlx4_ib_tunnels_update_work);
 		dm[i]->port = first_port + i + 1;
 		dm[i]->slave = slave;
 		dm[i]->do_init = do_init;
 		dm[i]->dev = ibdev;
-		spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags);
-		if (!ibdev->sriov.is_going_down)
-			queue_work(ibdev->sriov.demux[i].ud_wq, &dm[i]->work);
-		spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags);
 	}
+	/* initialize or tear down tunnel QPs for the slave */
+	spin_lock_irqsave(&ibdev->sriov.going_down_lock, flags);
+	if (!ibdev->sriov.is_going_down)
+		for (i = 0; i < ports; i++)
+			queue_work(ibdev->sriov.demux[i].ud_wq, &dm[i]->work);
+	else
+		for (i = 0; i < ports; i++)
+			kfree(dm[i]);
+	spin_unlock_irqrestore(&ibdev->sriov.going_down_lock, flags);
 out:
 	kfree(dm);
 	return;
--
2.4.3