On Tue, 23 Feb 2021, Song Bao Hua (Barry Song) wrote:

> > Regarding m68k, your analysis overlooks the timing issue. E.g. patch
> > 11/32 could be a problem because removing the irqsave would allow PDMA
> > transfers to be interrupted. Aside from the timing issues, I agree
> > with your analysis above regarding m68k.
>
> You mentioned you need realtime so you want an interrupt to be able to
> preempt another one.

That's not what I said. But for the sake of discussion, yes, I do know
people who run Linux on ARM hardware (if Android vendor kernels can be
called "Linux") and who would benefit from realtime support on those
devices.

> Now you said you want an interrupt not to be preempted as it will make
> a timing issue.

mac_esp deliberately constrains segment sizes so that it can harmlessly
disable interrupts for the duration of the transfer.

Maybe the irqsave in this driver is over-cautious. Who knows? The PDMA
timing problem relates to SCSI bus signalling and the tolerance of
real-world SCSI devices to same. The other problem is that the PDMA
logic circuit is undocumented hardware. So there may be further timing
requirements lurking there. Therefore, patch 11/32 is too risky.

> If this PDMA transfer will have some problem when it is preempted, I
> believe we need some enhanced ways to handle this, otherwise, once we
> enable preempt_rt or threaded_irq, it will get the timing issue. So
> here it needs a clear comment and IRQF_NO_THREAD if this is the case.
>

People who require fast response times cannot expect random drivers or
platforms to meet such requirements. I fear you may be asking too much
from Mac Quadra machines.

> > With regard to other architectures and platforms, in specific cases,
> > e.g. where there's never more than one IRQ involved, then I could
> > agree that your assumptions probably hold and an irqsave would
> > probably be redundant.
> > When you find a redundant irqsave, to actually patch it would bring
> > a risk of regression with little or no reward. It's not my place to
> > veto this entire patch series on that basis but IMO this kind of
> > churn is misguided.
>
> Nope.
>
> I would say the real misguidance is that the code adds one lock while
> it doesn't need the lock. Easily we can add redundant locks or
> exaggerate the coverage range of locks, but the smarter way is that
> people add locks only when they really need the lock by considering
> concurrency and realtime performance.
>

You appear to be debating a strawman. No-one is advocating excessive
locking in new code.

> Thanks
> Barry
>