Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

Hi,
I've tested this patch and it seems problematic at the moment.
Maybe this is because of the bug that Steve mentioned on the NVMe mailing list. Sagi mentioned that we should fix it in the NVMe/RDMA initiator, and I'll try his suggestion as well.
BTW, when I run blk_mq_map_queues() it works for every IRQ affinity.
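For reference, "running blk_mq_map_queues()" above means something along the lines of the sketch below in the nvme-rdma ->map_queues callback (a minimal sketch assuming the current blk_mq_map_queues(set) signature, not the exact diff I tested):

static int nvme_rdma_map_queues(struct blk_mq_tag_set *set)
{
	/*
	 * Bypass the ib_get_vector_affinity()-based spread done by
	 * blk_mq_rdma_map_queues() and fall back to the generic
	 * mapping, which spreads hw queues across all online CPUs.
	 */
	return blk_mq_map_queues(set);
}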

On 7/16/2018 1:30 PM, Leon Romanovsky wrote:
> On Mon, Jul 16, 2018 at 01:23:24PM +0300, Sagi Grimberg wrote:
>> Leon, I'd like to see a tested-by tag for this (at least
>> until I get some time to test it).
>
> Of course.
>
> Thanks
>
>> The patch itself looks fine to me.

-Max.