Re: [PATCH mlx5-next] RDMA/mlx5: Don't use cached IRQ affinity mask

On Mon, Aug 06, 2018 at 02:20:37PM -0500, Steve Wise wrote:
>
>
> On 8/1/2018 9:27 AM, Max Gurtovoy wrote:
> >
> >
> > On 8/1/2018 8:12 AM, Sagi Grimberg wrote:
> >> Hi Max,
> >
> > Hi,
> >
> >>
> >>> Yes, since nvmf is the only user of this function.
> >>> Still waiting for comments on the suggested patch :)
> >>>
> >>
> >> Sorry for the late response (but I'm on vacation so I have
> >> an excuse ;))
> >
> > NP :) currently the code works..
> >
> >>
> >> I'm thinking that we should avoid trying to find an assignment
> >> when stuff like irqbalance daemon is running and changing
> >> the affinitization.
> >
> > but this is exactly what Steve complained about and what Leon tried
> > to fix (and it broke connection establishment).
> > If that is the case and we all agree, then we're good without Leon's
> > patch and without our suggestions.
> >
>
> I don't agree.  Currently setting certain affinity mappings breaks nvme
> connectivity.  I don't think that is desirable.  And mlx5 is broken in
> that it doesn't allow changing the affinity but silently ignores the
> change, which misleads the admin or irqbalance...
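(Editor's note, not from the thread: the "silently ignores" behavior Steve describes can be checked from userspace. The sketch below is illustrative, assuming the IRQ number 42 belongs to an mlx5 vector; on kernels with managed IRQs the write may fail outright, or appear to succeed while the effective affinity stays unchanged.)

```shell
#!/bin/sh
# Hypothetical IRQ number for an mlx5 completion vector (assumption).
IRQ=42

# CPU 2 corresponds to bitmask 0x4 (1 << 2).
MASK=$(printf '%x' $((1 << 2)))

# Request the new affinity, then read back what was stored and what
# the interrupt controller is actually using. A mismatch between the
# two reads is the "silent ignore" case described above.
echo "$MASK" > /proc/irq/$IRQ/smp_affinity
cat /proc/irq/$IRQ/smp_affinity
cat /proc/irq/$IRQ/effective_affinity
```

Comparing `smp_affinity` with `effective_affinity` is what irqbalance cannot easily do, which is why an ignored write misleads it.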

Exactly, I completely agree with Steve and don't understand the
rationale in the comments above. To summarize from my side:
NVMeOF is broken, but instead of fixing it we prohibit one specific
driver from changing affinity on the fly.

Nice.

Thanks


