Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask


 





On 7/23/2018 7:49 PM, Jason Gunthorpe wrote:
On Fri, Jul 20, 2018 at 04:25:32AM +0300, Max Gurtovoy wrote:

[ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18

queue 9 is not mapped (overlap).
please try the below:


This seems to work.  Here are three mapping cases:  each vector on its
own cpu, each vector on 1 cpu within the local numa node, and each
vector having all cpus in its numa node.  The 2nd mapping looks kinda
funny, but I think it achieved what you wanted?  And all the cases
resulted in successful connections.
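
To make the overlap concrete, here is a minimal userspace sketch (this is
not the patch that was attached to the original mail, and the per-vector
masks are invented purely for illustration) of how a naive affinity-based
mapping can leave queue 9 without any CPU when two vectors share a CPU:

#include <stdio.h>

#define NR_CPUS   16
#define NR_QUEUES 10

int main(void)
{
	/* hypothetical per-vector affinity masks: vectors 8 and 9 overlap on CPU 8 */
	unsigned int vec_mask[NR_QUEUES] = {
		1u << 0, 1u << 1, 1u << 2, 1u << 3, 1u << 4,
		1u << 5, 1u << 6, 1u << 7, 1u << 8, 1u << 8,
	};
	int cpu_to_queue[NR_CPUS];
	int queue_has_cpu[NR_QUEUES] = { 0 };
	int cpu, q;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		cpu_to_queue[cpu] = -1;

	/* naive pass: the first queue whose affinity mask contains the CPU wins */
	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		for (q = 0; q < NR_QUEUES; q++) {
			if (vec_mask[q] & (1u << cpu)) {
				cpu_to_queue[cpu] = q;
				queue_has_cpu[q] = 1;
				break;
			}
		}
	}

	for (q = 0; q < NR_QUEUES; q++)
		if (!queue_has_cpu[q])
			printf("queue %d is not mapped\n", q);	/* prints queue 9 here */

	return 0;
}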


Thanks for testing this.
I slightly improved the setting of the leftover CPUs and actually used Sagi's
initial proposal.
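
For illustration only (again, not the attached patch), the leftover-CPU idea
can be sketched as a second pass over the hypothetical cpu_to_queue[] /
queue_has_cpu[] arrays from the sketch above: first make sure every queue
owns at least one CPU, then spread whatever CPUs are still unmapped
round-robin over the queues.

static void map_leftover_cpus(int *cpu_to_queue, int nr_cpus,
			      int *queue_has_cpu, int nr_queues)
{
	int cpu, q, next = 0;

	/* pass 1: give each still-empty queue the first unmapped CPU */
	for (q = 0; q < nr_queues; q++) {
		if (queue_has_cpu[q])
			continue;
		for (cpu = 0; cpu < nr_cpus; cpu++) {
			if (cpu_to_queue[cpu] < 0) {
				cpu_to_queue[cpu] = q;
				queue_has_cpu[q] = 1;
				break;
			}
		}
	}

	/* pass 2: spread the remaining unmapped CPUs round-robin over all queues */
	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (cpu_to_queue[cpu] >= 0)
			continue;
		cpu_to_queue[cpu] = next;
		next = (next + 1) % nr_queues;
	}
}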

Sagi,
please review the attached patch and let me know if I should add your
signature to it.
I'll run some perf tests on it early next week (meanwhile I have run
login/logout with different num_queues and irq settings successfully).

Steve,
It would be great if you could apply the attached patch on your system and
send your findings.

Regards,
Max

So the conclusion to this thread is that Leon's mlx5 patch needs to wait
until this blk-mq patch is accepted?

Yes, since nvmf is the only user of this function.
Still waiting for comments on the suggested patch :)


Thanks,
Jason



