Hi Roger,

On Tue, Oct 27, 2015 at 11:37:03AM +0200, Roger Quadros wrote:
> On 26/10/15 23:23, Brian Norris wrote:
> > I'm not too familiar with OMAP platforms, and I might have missed out on
> > prior discussions/context, so please forgive me if I'm asking silly or
> > old questions here.
>
> No worries at all.
>
> >
> > On Fri, Sep 18, 2015 at 05:53:22PM +0300, Roger Quadros wrote:
> >> - Remove NAND IRQ handling from the omap-gpmc driver, share the GPMC IRQ
> >> with the omap2-nand driver, and handle NAND IRQ events in the NAND
> >> driver. This increases performance when using prefetch-irq mode:
> >> 30% faster reads and 17% faster writes in prefetch-irq mode.
> >
> > Have you pinpointed the exact causes for the performance increase, or
> > can you give an educated guess? AIUI, you're reducing the number of
> > interrupts needed for NAND prefetch mode, but you're also removing a bit
> > of abstraction and implementing hooks that look awfully like the
> > existing abstractions:
> >
> > + int (*nand_irq_enable)(enum gpmc_nand_irq irq);
> > + int (*nand_irq_disable)(enum gpmc_nand_irq irq);
> > + void (*nand_irq_clear)(enum gpmc_nand_irq irq);
> > + u32 (*nand_irq_status)(void);
> >
> > That's not really a problem if there's a good reason for them (brcmnand
> > implements similar hooks because of quirks in the implementation of
> > interrupts across various BRCM SoCs, and it's not worth writing irqchip
> > drivers for those cases). I'm mainly curious for an explanation.
>
> I have both implementations with me. My guess is that the 20% performance
> gain is due to the absence of the irqchip/irqdomain translation code.
> I haven't investigated it further, though.

I don't have much context for whether this makes sense or not. According
to your tests, you're getting ~800K interrupts over ~15 seconds, i.e.
roughly 53K interrupts per second. Should the overhead of the irqchip
abstraction really become noticeable at that rate?

But anyway, I'm not sure that completely answered my question.
My question was whether you were removing the irqchip code solely for
performance reasons, or whether there were other reasons as well.

> Another concern I have is that I'm not using any locking around
> gpmc_nand_irq_enable/disable(). Could this pose problems in multiple-NAND
> use cases? My understanding is that it should not, as controller access
> is serialized between multiple NAND chips.

Right, if you're touching just a NAND interrupt, and it's only used by a
single instance of this NAND controller, then the NAND controller
serialization code will handle this for you.

> However, I do need to add some locking, as the GPMC_IRQENABLE register is
> shared between the NAND and GPMC drivers.
>
> NOTE: We are not using prefetch-irq mode for any of the OMAP boards,
> because it performs worse than prefetch-polled mode. So if lower
> performance in an unused mode is a lesser concern than cleaner code,
> I can resend this with the irqdomain implementation.
>
> Below are performance logs of irqdomain vs. hooks.
>
> --
> cheers,
> -roger
>
> test logs.

[snip]

Brian