On Wed, Feb 17, 2016 at 09:54:27PM +0000, Jakub Kicinski wrote:
> [CC: linux-rt-users, can you guys help?]
>
> On Wed, 17 Feb 2016 08:22:13 +0100, Sean Nyekjær wrote:
> > On 2016-02-16 22:02, Jakub Kiciński wrote:
> > > On Tue, 16 Feb 2016 08:37:11 +0100, Sean Nyekjær wrote:
> > >> On 2016-02-15 23:08, Jakub Kicinski wrote:
> > >>> On Mon, 15 Feb 2016 08:58:53 +0100, Sean Nyekjaer wrote:
> > >>>> Working with RT patches reveals this driver has some spin_lock
> > >>>> issues.
> > >>>>
> > >>>> The spin_lock was moved from sc16is7xx_reg_proc to protect the
> > >>>> call to uart_handle_cts_change, as it's the only call that isn't
> > >>>> already protected in serial-core.
> > >>> Sorry, but this does not look fine. You are removing all async
> > >>> items from the driver. This works for you because in RT, code can
> > >>> sleep with spin locks taken, but for upstream it's not acceptable.
> > >>> Did you test this patch with an upstream kernel and lockdep
> > >>> enabled?
> > >> I have a 4.1.y with cherry-picks of the commits from tty-testing
> > >> regarding sc16is7xx (except the new gpio api).
> > >> On our custom board we have 3 sc16is750 on a single SPI channel.
> > >> I'm using an FTDI chip (connected to a test PC) with its TX wired
> > >> to the 3x RX pins on the sc16is750.
> > >> The problem is easily reproducible just by generating data from
> > >> the PC at 115200 baud, which causes a kernel oops almost
> > >> immediately.
> > >>
> > >> I have not seen any issues with a vanilla 4.1.y, only when I apply
> > >> the RT patch and load the system.
> > > Thanks for providing the details, I think there is a
> > > misunderstanding between us. Allow me to explain again what I
> > > meant ;)
> > >
> > > I do acknowledge that there may be a problem with the driver on RT
> > > kernels, and I do trust you that you tested your patch on an RT
> > > kernel with the latest version of the driver cherry-picked from
> > > mainline and that it works there.
> > >
> > > However, IIUC you are proposing your patch to be included in
> > > mainline.
> > >
> > > What I'm trying to say is that in mainline we need the async works
> > > which you remove in your patch, because we cannot perform blocking
> > > SPI/I2C I/O under a spinlock... If you compile mainline with your
> > > patch you will see a slew of "sleeping in atomic context" warnings.
> > >
> > > So basically the questions are: do you want this patch in mainline,
> > > and have you tested it on non-RT kernels? (Please don't be
> > > discouraged - there definitely must be a bug somewhere if you're
> > > seeing crashes, but I'm afraid the proposed solution is not good
> > > enough...)
> > >
> > > Thanks
> > Yeah, the first priority must be for the driver to work on the
> > mainline kernel :-)
> >
> > I just wanted to flag a problem with the driver with the RT patches
> > applied. I thought the RT patch (maybe) revealed that the driver has
> > an underlying problem.
> >
> > So maybe this patch should be included in the RT patch?
> > No, I have not tested this with a non-RT kernel, maybe I should
> > try :-)
>
> I've looked at the stack dump you posted in the previous email. It
> looks like we deadlock on the spin_lock_irqsave() in
> queue_kthread_work() because our interrupt is not threaded!? I thought
> all interrupts are threaded in RT by default.

I believe the actual problem is that the handler is being registered
with IRQF_ONESHOT. I'm not sure that even makes sense in this scenario,
but it is perhaps an unintended carry-over from the
request_threaded_irq -> request_irq transition in 9e6f4ca3
("sc16is7xx: use kthread_worker for tx_work and irq").

Sean - does this solve your problem?
Josh

diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
index e78fa99..1bb5c0e 100644
--- a/drivers/tty/serial/sc16is7xx.c
+++ b/drivers/tty/serial/sc16is7xx.c
@@ -1257,7 +1257,7 @@ static int sc16is7xx_probe(struct device *dev,
 
 	/* Setup interrupt */
 	ret = devm_request_irq(dev, irq, sc16is7xx_irq,
-			       IRQF_ONESHOT | flags, dev_name(dev), s);
+			       flags, dev_name(dev), s);
 	if (!ret)
 		return 0;
 