Re: [PATCH RFC v2 02/18] irq/dev-msi: Add support for a new DEV_MSI irq domain

Hi Thomas,

On 8/6/2020 1:21 PM, Thomas Gleixner wrote:
> Megha,
>
> "Dey, Megha" <megha.dey@xxxxxxxxx> writes:
>> On 8/6/2020 10:10 AM, Thomas Gleixner wrote:
>>> If the DEV/MSI domain has its own per IR unit resource management, then
>>> you need one per IR unit.
>>>
>>> If the resource management is solely per device then having a domain per
>>> device is the right choice.
>> The dev-msi domain can be used by other devices if they too would want
>> to follow the vector->intel IR->dev-msi IRQ hierarchy. I do create
>> one dev-msi IRQ domain instance per IR unit. So I guess for this case,
>> it makes most sense to have a dev-msi IRQ domain per IR unit as
>> opposed to creating one per individual driver.
> I'm not really convinced. I looked at the idxd driver and that has its
> own interrupt related resource management for the IMS slots and provides
> the mask/unmask callbacks for the interrupt chip via this crude platform
> data indirection.
>
> So I don't see the value of the dev-msi domain per IR unit. The domain
> itself does not provide much functionality other than indirections and
> you clearly need per device interrupt resource management on the side
> and a customized irq chip. I rather see it as a plain layering
> violation.
>
> The point is that your IDXD driver manages the per device IMS slots
> which is an interrupt related resource. The story would be different if
> the IMS slots were managed by some central or per IR unit entity, but
> in that case you'd need IMS specific domain(s).
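For illustration, a minimal sketch of what such a per-device chip could look
like, with the mask/unmask callbacks writing the device-local IMS slot
directly instead of going through platform data. The idxd_ims_* names, the
per-device struct and the IMS register layout below are assumptions for the
sketch, not the actual driver code:

#include <linux/bits.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/msi.h>

/* Hypothetical per-device IMS state; the real driver has its own layout. */
struct idxd_ims_data {
	void __iomem	*ims_base;	/* base of the device's IMS slot array */
};

/* Assumed MSI-X like slot layout: addr_lo, addr_hi, data, ctrl (16 bytes). */
#define IMS_ENTRY_SIZE			16
#define IMS_ENTRY_CTRL			12
#define IMS_CTRL_VECTOR_MASKBIT		BIT(0)

static void idxd_ims_irq_mask(struct irq_data *data)
{
	struct idxd_ims_data *ims = irq_data_get_irq_chip_data(data);
	void __iomem *ctrl = ims->ims_base + data->hwirq * IMS_ENTRY_SIZE +
			     IMS_ENTRY_CTRL;

	/* hwirq is used as the IMS slot index; set the per-slot mask bit */
	iowrite32(ioread32(ctrl) | IMS_CTRL_VECTOR_MASKBIT, ctrl);
}

static void idxd_ims_irq_unmask(struct irq_data *data)
{
	struct idxd_ims_data *ims = irq_data_get_irq_chip_data(data);
	void __iomem *ctrl = ims->ims_base + data->hwirq * IMS_ENTRY_SIZE +
			     IMS_ENTRY_CTRL;

	iowrite32(ioread32(ctrl) & ~IMS_CTRL_VECTOR_MASKBIT, ctrl);
}

static void idxd_ims_write_msi_msg(struct irq_data *data, struct msi_msg *msg)
{
	struct idxd_ims_data *ims = irq_data_get_irq_chip_data(data);
	void __iomem *slot = ims->ims_base + data->hwirq * IMS_ENTRY_SIZE;

	/* program the (remapped) address/data pair into the device's slot */
	iowrite32(msg->address_lo, slot + 0x0);
	iowrite32(msg->address_hi, slot + 0x4);
	iowrite32(msg->data, slot + 0x8);
}

static struct irq_chip idxd_ims_irq_chip = {
	.name			= "idxd-ims",
	.irq_mask		= idxd_ims_irq_mask,
	.irq_unmask		= idxd_ims_irq_unmask,
	.irq_write_msi_msg	= idxd_ims_write_msi_msg,
};

Since the chip operates on the device's own IMS bookkeeping (passed in as
chip data), there is no detour through platform data at all.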

> So the obvious consequence of the hierarchical irq design is:
>
>     vector -> IR -> IDXD
>
> which makes the control flow of allocating an interrupt for a subdevice
> straightforward following the irq hierarchy rules.
>
> This still wants to inherit the existing msi domain functionality, but
> the amount of code required is small and removes all these pointless
> indirections and integrates the slot management naturally.
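And a rough sketch of what stacking the per-device domain directly below the
IR parent could look like, reusing the default msi domain ops. Again this is
only an illustration: idxd_create_ims_domain() is a hypothetical helper and
idxd_ims_irq_chip is the chip from the sketch above.

#include <linux/irqdomain.h>
#include <linux/msi.h>
#include <linux/property.h>

static struct msi_domain_ops idxd_ims_domain_ops = {
	/* default alloc/free callbacks are filled in via the flags below */
};

static struct msi_domain_info idxd_ims_domain_info = {
	.flags	= MSI_FLAG_USE_DEF_DOM_OPS | MSI_FLAG_USE_DEF_CHIP_OPS,
	.ops	= &idxd_ims_domain_ops,
	.chip	= &idxd_ims_irq_chip,	/* per-device chip from the sketch above */
};

/*
 * Create the per-device IMS domain as a child of the interrupt remapping
 * domain, so allocations walk IDXD -> IR -> vector along the hierarchy.
 */
static struct irq_domain *idxd_create_ims_domain(struct device *dev,
						 struct irq_domain *ir_domain)
{
	return msi_create_irq_domain(dev_fwnode(dev), &idxd_ims_domain_info,
				     ir_domain);
}

With something like this, allocating vectors for a subdevice boils down to
msi_domain_alloc_irqs() on the device's own domain, and the IMS slot
management never leaves the driver.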

> If you expect or know that there are other devices coming up with IMS
> integrated then most of that code can be made a common library. But for
> this to make sense, you really want to make sure that these other
> devices do not require yet another horrible layer of indirection.
Yes Thomas, for now this may look odd since there is only one device using
this IRQ domain, but there will be other devices following suit. Hence I have
moved all the IRQ chip/domain bits into a separate file in drivers/irqchip in
the next version of the patches. I'll submit them shortly, and it would be
great to get more feedback on them.
> A side note: I just read back on the specification and stumbled over
> the following gem:
>
>   "IMS may also optionally support per-message masking and pending bit
>    status, similar to the per-vector mask and pending bit array in the
>    PCI Express MSI-X capability."
>
> Optionally? Please tell the hardware folks to make this mandatory. We
> have enough pain with non-maskable MSI interrupts already, so introducing
> yet another non-maskable interrupt trainwreck is not an option.
>
> It's more than a decade now that I have been telling HW people not to
> repeat the non-maskable MSI failure, but obviously they still think that
> non-maskable interrupts are a brilliant idea. I know that HW folks
> believe that everything they omit can be fixed in software, but they
> have to finally understand that this particular issue _cannot_ be fixed
> at all.
Hmm, I asked the hardware folks and they have informed me that all IMS
devices will support per-vector masking and pending bits. This will be
reflected in the next SIOV spec, which will be published soon.

> Thanks,
>
>          tglx


