Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

On 7/19/2018 8:25 PM, Max Gurtovoy wrote:
>
>>>> [ 2032.194376] nvme nvme0: failed to connect queue: 9 ret=-18
>>>
>>> queue 9 is not mapped (overlap).
>>> please try the below:
>>>
>>
>> This seems to work.  Here are three mapping cases: each vector on its
>> own CPU, each vector on one CPU within the local NUMA node, and each
>> vector having all CPUs in its NUMA node.  The second mapping looks a
>> bit odd, but I think it achieves what you wanted?  And all three cases
>> resulted in successful connections.
>>
>
> Thanks for testing this.
> I slightly improved how the leftover CPUs are assigned and actually used
> Sagi's initial proposal.
>
> Sagi,
> please review the attached patch and let me know if I should add your
> signature on it.
> I'll run some perf tests on it early next week (meanwhile I have run
> login/logout with different num_queues and IRQ settings successfully).
>
> Steve,
> It would be great if you could apply the attached patch on your system
> and send your findings.

Sorry, I got sidetracked.  I'll try to test this today and report back.
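
In the meantime, here is a rough user-space sketch of how I read the
mapping scheme (the names, masks, and fallback order are my own
illustration, not the patch itself): pass 1 gives each queue the CPUs
from its vector's affinity mask, and pass 2 spreads the leftover CPUs
so that no CPU, and no queue, ends up unmapped when the masks overlap,
which is what triggered the "failed to connect queue ... ret=-18" case.

/* map_sketch.c -- rough, user-space illustration (hypothetical names)
 * of the queue<->CPU mapping idea discussed above.  Not the kernel
 * patch: pass 1 honours each vector's affinity mask, pass 2 spreads
 * the leftover CPUs so no CPU and no queue is left unmapped. */
#include <stdio.h>

#define NR_CPUS   8
#define NR_QUEUES 4

int main(void)
{
    /* Hypothetical per-vector affinity masks; vectors 0 and 1 overlap. */
    unsigned int vec_mask[NR_QUEUES] = { 0x03, 0x03, 0x0c, 0x30 };
    int map[NR_CPUS];
    int got_cpu[NR_QUEUES] = { 0 };
    int cpu, q;

    for (cpu = 0; cpu < NR_CPUS; cpu++)
        map[cpu] = -1;                      /* -1 == not mapped yet */

    /* Pass 1: map each CPU to the first vector whose mask contains it. */
    for (q = 0; q < NR_QUEUES; q++) {
        for (cpu = 0; cpu < NR_CPUS; cpu++) {
            if ((vec_mask[q] & (1u << cpu)) && map[cpu] == -1) {
                map[cpu] = q;
                got_cpu[q] = 1;
            }
        }
    }

    /* Pass 2a: hand leftover CPUs to queues that got nothing at all. */
    for (cpu = 0; cpu < NR_CPUS; cpu++) {
        if (map[cpu] != -1)
            continue;
        for (q = 0; q < NR_QUEUES; q++) {
            if (!got_cpu[q]) {
                map[cpu] = q;
                got_cpu[q] = 1;
                break;
            }
        }
    }

    /* Pass 2b: spread any CPUs that are still unmapped round-robin. */
    q = 0;
    for (cpu = 0; cpu < NR_CPUS; cpu++)
        if (map[cpu] == -1)
            map[cpu] = q++ % NR_QUEUES;

    for (cpu = 0; cpu < NR_CPUS; cpu++)
        printf("cpu %d -> queue %d\n", cpu, map[cpu]);

    return 0;
}

Built with "gcc map_sketch.c", it prints a cpu -> queue table in which
every CPU and every queue is covered despite the overlapping masks for
vectors 0 and 1, which is the behaviour I'll be checking for on real
hardware.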

Steve.