Re: [PATCH for-rc 2/2] RDMA/hns: Bugfix for flush cqe in case softirq and multi-process

On Thu, Apr 25, 2019 at 11:07:34PM +0800, Lijun Ou wrote:
> When one process modifies a QP to the error state while another
> process is posting a send or receive, a flush-CQE race can occur. To
> solve this, a lock must be held between the post send / post recv
> verbs and the modify QP verb.
>
> Furthermore, this patch uses a workqueue to do the flush-CQE
> processing for hip08, since in some cases post send or post recv may
> be called from softirq context, which leads to the following
> calltrace with the current driver.
>
> [ 5343.812237] Call trace:
> [ 5343.815448] [<ffff00000808ab38>] dump_backtrace+0x0/0x280
> [ 5343.821115] [<ffff00000808addc>] show_stack+0x24/0x30
> [ 5343.826605] [<ffff000008d84cb4>] dump_stack+0x98/0xb8
> [ 5343.831966] [<ffff0000080fda44>] __schedule_bug+0x64/0x80
> [ 5343.837605] [<ffff000008d9b1ec>] __schedule+0x6bc/0x7fc
> [ 5343.843010] [<ffff000008d9b360>] schedule+0x34/0x8c
> [ 5343.848133] [<ffff000008d9ee80>] schedule_timeout+0x1d8/0x3cc
> [ 5343.854087] [<ffff000008d9d72c>] __down+0x84/0xdc
> [ 5343.859114] [<ffff000008124250>] down+0x54/0x6c
> [ 5343.866446] [<ffff000001025bd4>] hns_roce_cmd_mbox+0x68/0x2cc [hns_roce]
> [ 5343.874439] [<ffff000001063f70>] hns_roce_v2_modify_qp+0x4f4/0x1024
> [hns_roce_pci]
> [ 5343.882594] [<ffff00000106570c>] hns_roce_v2_post_recv+0x2a4/0x330
> [hns_roce_pci]
> [ 5343.890872] [<ffff0000010aa138>] nvme_rdma_post_recv+0x88/0xf8 [nvme_rdma]
> [ 5343.898156] [<ffff0000010ab3a8>] __nvme_rdma_recv_done.isra.40+0x110/0x1f0
> [nvme_rdma]
> [ 5343.906453] [<ffff0000010ab4b4>] nvme_rdma_recv_done+0x2c/0x38 [nvme_rdma]
> [ 5343.918428] [<ffff000000e34e04>] __ib_process_cq+0x7c/0xf0 [ib_core]
> [ 5343.927135] [<ffff000000e34fb8>] ib_poll_handler+0x30/0x90 [ib_core]
> [ 5343.933900] [<ffff00000859db94>] irq_poll_softirq+0xf8/0x150
> [ 5343.939825] [<ffff0000080818d0>] __do_softirq+0x140/0x2ec
> [ 5343.945573] [<ffff0000080d6f10>] run_ksoftirqd+0x48/0x5c
> [ 5343.951258] [<ffff0000080f9064>] smpboot_thread_fn+0x190/0x1d4
> [ 5343.957311] [<ffff0000080f441c>] kthread+0x10c/0x138
> [ 5343.962518] [<ffff0000080855dc>] ret_from_fork+0x10/0x18
>
> Fixes: 0425e3e6e0c7 ("RDMA/hns: Support flush cqe for hip08 in kernel space")
> Signed-off-by: Yixian Liu <liuyixian@xxxxxxxxxx>
> Signed-off-by: Lijun Ou <oulijun@xxxxxxxxxx>
> ---
>  drivers/infiniband/hw/hns/hns_roce_device.h |  14 +++
>  drivers/infiniband/hw/hns/hns_roce_hw_v2.c  | 144 ++++++++++++----------------
>  drivers/infiniband/hw/hns/hns_roce_main.c   |  12 +++
>  drivers/infiniband/hw/hns/hns_roce_qp.c     |  44 +++++++++
>  4 files changed, 130 insertions(+), 84 deletions(-)

<...>

> +static int hns_roce_v2_create_workq(struct hns_roce_dev *hr_dev)
> +{
> +	char workq_name[HNS_ROCE_WORKQ_NAME_LEN];
> +	struct device *dev = hr_dev->dev;
> +
> +	snprintf(workq_name, HNS_ROCE_WORKQ_NAME_LEN - 1, "%s_flush_wq",
> +		 hr_dev->ib_dev.name);
> +
> +	hr_dev->flush_workq = create_singlethread_workqueue(workq_name);

I'm impressed with your ability to solve every bug with an extra
workqueue. You are adding them faster than we can remove them.

Please find another way to solve your locking issues without an extra
workqueue.

Thanks


