Re: Fw: Linux mask_msi_irq() question

--- On Thu, 8/26/10, Grant Grundler <grundler@xxxxxxxxxxxxxxxx> wrote:

> From: Grant Grundler <grundler@xxxxxxxxxxxxxxxx>
> Subject: Re: Fw: Linux mask_msi_irq() question
> To: "Kanoj Sarcar" <kanojsarcar@xxxxxxxxx>
> Cc: "Grant Grundler" <grundler@xxxxxxxxxxxxxxxx>, linux-pci@xxxxxxxxxxxxxxx
> Date: Thursday, August 26, 2010, 9:44 PM
> On Wed, Aug 25, 2010 at 12:21:48AM -0700, Kanoj Sarcar wrote:
> ...
> > Hi Grant,
> > 
> > I think there are two different things in play here:
> > 
> > 1. Per PCIE or MSIX, is the device supposed to make sure it does
> > not issue a msix memwrite after it has sent the read completion
> > for the host's mask read?
> 
> Yes, I believe that's the intent of the mask.
> It should be possible with PCI-e traces to determine if any
> device does that.
> 
> 
> > Alternatively, does each device, through a vendor-unique way,
> > provide a barrier point by which at least one host CPU knows
> > that no more interrupt messages will creep out of the device?
> 
> Yes - same answer.

I think for both of the above points, it's hard (if not impossible)
to be sure that all devices Linux currently supports provide the
expected behavior under all traffic patterns. The conservative
approach would be for platform or generic kernel code to protect
itself against such devices.
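
For reference, the conservative sequence I have in mind is roughly the
following (a minimal, hypothetical sketch only; the helper name and the
MSIX_ENTRY_* offsets are made up here to mirror the MSI-X table layout,
and this is not the in-tree implementation): mask the vector in the
device's MSI-X table, then read the vector-control register back so the
posted mask write is known to have reached the device.

#include <linux/io.h>
#include <linux/pci.h>

/* Illustrative layout of one MSI-X table entry (16 bytes per entry). */
#define MSIX_ENTRY_SIZE			16
#define MSIX_ENTRY_VECTOR_CTRL		12
#define MSIX_ENTRY_CTRL_MASKBIT		1

/*
 * Hypothetical helper: mask one MSI-X vector at the device and flush
 * the posted write by reading the vector-control register back.
 */
static void mask_and_flush_msix_entry(void __iomem *msix_base,
				      unsigned int entry)
{
	void __iomem *vec_ctrl = msix_base +
		entry * MSIX_ENTRY_SIZE + MSIX_ENTRY_VECTOR_CTRL;
	u32 ctrl = readl(vec_ctrl);

	/* Set the per-vector mask bit in Vector Control. */
	writel(ctrl | MSIX_ENTRY_CTRL_MASKBIT, vec_ctrl);

	/*
	 * Read back to push the mask write to the device.  Per point 1
	 * above, this alone does not prove the device has no interrupt
	 * message already in flight toward some other CPU.
	 */
	readl(vec_ctrl);
}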

> 
> > 
> > 2. On a given chipset with N cpus, how does a cpu initiating
> > the entry mask operation synchronize with the entry's current
> > destination cpu?
> 
> IPI is one mechanism. When the second processor completes the IPI
> sent from the first processor, we know the second processor has
> done whatever we've asked it to do.  TLB flushes are handled this
> way on some arches for example.

That would probably work on most architectures, except some complicated
ones. A TLB flush only involves processor-to-processor communication,
but interrupt masking involves processor-to-processor, device-to-CPU-A,
and device-to-CPU-B communication. Here CPU A initiates the interrupt
masking, and CPU B is the current interrupt destination. If communication
over the A<->B path completes faster than over the device->RC->B path,
there is a potential issue: B can still receive an interrupt message
after A believes the vector is masked.

I think some of these problems can be handled if A ships the masking
operation (and the mask readback/flush from the device) over to B through
an IPI, and B responds to A either through an IPI or through coherent memory.
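
To make that concrete (purely as an illustration): CPU A could use the
kernel's smp_call_function_single() IPI helper to have CPU B perform the
mask + readback itself, and block until B has done so.  struct mask_request
is made up for this example, and mask_and_flush_msix_entry() is the
hypothetical helper sketched above; smp_call_function_single() is the
existing kernel primitive.

#include <linux/smp.h>

struct mask_request {
	void __iomem *msix_base;
	unsigned int entry;
};

/* Runs on CPU B, in hard-IRQ (IPI) context. */
static void do_mask_on_dest_cpu(void *info)
{
	struct mask_request *req = info;

	mask_and_flush_msix_entry(req->msix_base, req->entry);
}

/* Called on CPU A; returns only after CPU B has run the mask + flush. */
static void mask_via_dest_cpu(int dest_cpu, void __iomem *msix_base,
			      unsigned int entry)
{
	struct mask_request req = {
		.msix_base	= msix_base,
		.entry		= entry,
	};

	smp_call_function_single(dest_cpu, do_mask_on_dest_cpu, &req, 1);
}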

> 
> I expect other atomic operations (e.g. spinlocks) would work too.
> It's just a bit more complicated, since one needs to determine both
> that there are no interrupts in flight and that processing finishes
> for any interrupts currently being handled.
> 
> > What are the various cases here? Interrupt
> > rebalance? Device deinit? Others?
> 
> I don't offhand know all the cases. Interrupt rebalance is
> certainly one.
> 
> I'm primarily thinking of the simplest case where one CPU tries
> to mask the device (and does) but another CPU already has the MSI
> pending (or might have already started to handle it.)

Yes, agreed, some of our discussions above are relevant to this.
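
For the "already started to handle it" half of the race, generic code can
lean on an existing primitive: once the device-side mask has been flushed,
the masking CPU can call synchronize_irq() to wait for any handler still
running on another CPU.  Rough sketch only (the masking helper is the
hypothetical one from earlier; synchronize_irq() is the real kernel API,
and this does not address an MSI write still in flight in the fabric):

#include <linux/interrupt.h>

static void quiesce_msix_vector(unsigned int irq, void __iomem *msix_base,
				unsigned int entry)
{
	/* Stop the device from raising new messages for this vector. */
	mask_and_flush_msix_entry(msix_base, entry);

	/*
	 * Wait until no CPU is still executing a handler for this irq.
	 * Covers a handler that has already started, but not an MSI
	 * write still propagating through the fabric.
	 */
	synchronize_irq(irq);
}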

Kanoj

> 
> hth,
> grant
> 


      