This is a note to let you know that I've just added the patch titled

    TTY: n_hdlc, fix lockdep false positive

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     tty-n_hdlc-fix-lockdep-false-positive.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <stable@xxxxxxxxxxxxxxx> know about it.


From e9b736d88af1a143530565929390cadf036dc799 Mon Sep 17 00:00:00 2001
From: Jiri Slaby <jslaby@xxxxxxx>
Date: Thu, 26 Nov 2015 19:28:26 +0100
Subject: TTY: n_hdlc, fix lockdep false positive

From: Jiri Slaby <jslaby@xxxxxxx>

commit e9b736d88af1a143530565929390cadf036dc799 upstream.

The class of all 4 n_hdlc buf locks is the same because a single
function, n_hdlc_buf_list_init, is used to init all of them. But since
flush_tx_queue takes n_hdlc->tx_buf_list.spinlock and then calls
n_hdlc_buf_put, which takes n_hdlc->tx_free_buf_list.spinlock, lockdep
emits a warning:

=============================================
[ INFO: possible recursive locking detected ]
4.3.0-25.g91e30a7-default #1 Not tainted
---------------------------------------------
a.out/1248 is trying to acquire lock:
 (&(&list->spinlock)->rlock){......}, at: [<ffffffffa01fd020>] n_hdlc_buf_put+0x20/0x60 [n_hdlc]

but task is already holding lock:
 (&(&list->spinlock)->rlock){......}, at: [<ffffffffa01fdc07>] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&list->spinlock)->rlock);
  lock(&(&list->spinlock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by a.out/1248:
 #0:  (&tty->ldisc_sem){++++++}, at: [<ffffffff814c9eb0>] tty_ldisc_ref_wait+0x20/0x50
 #1:  (&(&list->spinlock)->rlock){......}, at: [<ffffffffa01fdc07>] n_hdlc_tty_ioctl+0x127/0x1d0 [n_hdlc]
...
Call Trace:
...
 [<ffffffff81738fd0>] _raw_spin_lock_irqsave+0x50/0x70
 [<ffffffffa01fd020>] n_hdlc_buf_put+0x20/0x60 [n_hdlc]
 [<ffffffffa01fdc24>] n_hdlc_tty_ioctl+0x144/0x1d0 [n_hdlc]
 [<ffffffff814c25c1>] tty_ioctl+0x3f1/0xe40
...

Fix it by initializing the spinlocks separately. This also removes a
redundant memset of freshly kzallocated space.
Signed-off-by: Jiri Slaby <jslaby@xxxxxxx>
Reported-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 drivers/tty/n_hdlc.c |   19 ++++---------------
 1 file changed, 4 insertions(+), 15 deletions(-)

--- a/drivers/tty/n_hdlc.c
+++ b/drivers/tty/n_hdlc.c
@@ -159,7 +159,6 @@ struct n_hdlc {
 /*
  * HDLC buffer list manipulation functions
  */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list);
 static void n_hdlc_buf_put(struct n_hdlc_buf_list *list,
 			   struct n_hdlc_buf *buf);
 static struct n_hdlc_buf *n_hdlc_buf_get(struct n_hdlc_buf_list *list);
@@ -853,10 +852,10 @@ static struct n_hdlc *n_hdlc_alloc(void)
 	if (!n_hdlc)
 		return NULL;
 
-	n_hdlc_buf_list_init(&n_hdlc->rx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_free_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->rx_buf_list);
-	n_hdlc_buf_list_init(&n_hdlc->tx_buf_list);
+	spin_lock_init(&n_hdlc->rx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_free_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->rx_buf_list.spinlock);
+	spin_lock_init(&n_hdlc->tx_buf_list.spinlock);
 
 	/* allocate free rx buffer list */
 	for(i=0;i<DEFAULT_RX_BUF_COUNT;i++) {
@@ -885,16 +884,6 @@ static struct n_hdlc *n_hdlc_alloc(void)
 } /* end of n_hdlc_alloc() */
 
 /**
- * n_hdlc_buf_list_init - initialize specified HDLC buffer list
- * @list - pointer to buffer list
- */
-static void n_hdlc_buf_list_init(struct n_hdlc_buf_list *list)
-{
-	memset(list, 0, sizeof(*list));
-	spin_lock_init(&list->spinlock);
-}	/* end of n_hdlc_buf_list_init() */
-
-/**
  * n_hdlc_buf_put - add specified HDLC buffer to tail of specified list
  * @list - pointer to buffer list
  * @buf - pointer to buffer


Patches currently in stable-queue which might be from jslaby@xxxxxxx are

queue-4.4/tty-n_hdlc-fix-lockdep-false-positive.patch
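
For anyone wondering why merely inlining the four spin_lock_init() calls
changes lockdep's view: with CONFIG_DEBUG_SPINLOCK enabled,
spin_lock_init() expands through raw_spin_lock_init(), which is roughly
the macro below. This is a simplified sketch of the definition in
include/linux/spinlock.h, not code from this patch; check your tree for
the exact form.

	/*
	 * Each expansion of this macro declares its own static key,
	 * so the lockdep class is keyed to the *call site*, not to
	 * the lock object being initialized.
	 */
	#define raw_spin_lock_init(lock)			\
	do {							\
		static struct lock_class_key __key;		\
								\
		__raw_spin_lock_init((lock), #lock, &__key);	\
	} while (0)

With the old n_hdlc_buf_list_init() helper there was a single expansion
site, so all four buffer-list spinlocks landed in one class, and the
legitimate tx_buf_list -> tx_free_buf_list nesting looked like recursive
locking. With four separate spin_lock_init() calls there are four
distinct classes and the nesting is valid. The splat's hint about
"missing lock nesting notation" points at the alternative fix, a
spin_lock_nested() style annotation, but separate init calls are simpler
here and also let the redundant memset go away.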