Re: [PATCH -next v2 11/26] tty: Don't release tty locks for wait queue sanity check

On Wed, Nov 05, 2014 at 12:12:54PM -0500, Peter Hurley wrote:
> Releasing the tty locks while waiting for the tty wait queues to
> be empty is no longer necessary or desirable. Prior to
> "tty: Don't take tty_mutex for tty count changes", dropping the
> tty locks was necessary to reestablish the correct lock order between
> tty_mutex and the tty locks. Dropping the global tty_mutex was necessary;
> otherwise new ttys could not have been opened while waiting.
> 
> However, without needing the global tty_mutex held, the tty locks for
> the releasing tty can now be held through the sleep. The sanity check
> is for abnormal conditions caused by kernel bugs, not for recoverable
> errors caused by misbehaving userspace; dropping the tty locks only
> allows the tty state to get more sideways.
> 
> Reviewed-by: Alan Cox <alan@xxxxxxxxxxxxxxx>
> Signed-off-by: Peter Hurley <peter@xxxxxxxxxxxxxxxxxx>
> ---
>  drivers/tty/tty_io.c | 8 ++------
>  1 file changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
> index e59de81c39a9..b008e2b38d54 100644
> --- a/drivers/tty/tty_io.c
> +++ b/drivers/tty/tty_io.c
> @@ -1798,13 +1798,10 @@ int tty_release(struct inode *inode, struct file *filp)
>  	 * first, its count will be one, since the master side holds an open.
>  	 * Thus this test wouldn't be triggered at the time the slave closes,
>  	 * so we do it now.
> -	 *
> -	 * Note that it's possible for the tty to be opened again while we're
> -	 * flushing out waiters.  By recalculating the closing flags before
> -	 * each iteration we avoid any problems.
>  	 */
> +	tty_lock_pair(tty, o_tty);
> +
>  	while (1) {
> -		tty_lock_pair(tty, o_tty);
>  		tty_closing = tty->count <= 1;
>  		o_tty_closing = o_tty &&
>  			(o_tty->count <= (pty_master ? 1 : 0));
> @@ -1835,7 +1832,6 @@ int tty_release(struct inode *inode, struct file *filp)
>  
>  		printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
>  				__func__, tty_name(tty, buf));
> -		tty_unlock_pair(tty, o_tty);
>  		schedule();
>  	}
>  

This patch had the same type of fuzz as the previous one; the version I
used was:


diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
index e59de81c39a9..b008e2b38d54 100644
--- a/drivers/tty/tty_io.c
+++ b/drivers/tty/tty_io.c
@@ -1798,13 +1798,10 @@ int tty_release(struct inode *inode, struct file *filp)
 	 * first, its count will be one, since the master side holds an open.
 	 * Thus this test wouldn't be triggered at the time the slave closes,
 	 * so we do it now.
-	 *
-	 * Note that it's possible for the tty to be opened again while we're
-	 * flushing out waiters.  By recalculating the closing flags before
-	 * each iteration we avoid any problems.
 	 */
+	tty_lock_pair(tty, o_tty);
+
 	while (1) {
-		tty_lock_pair(tty, o_tty);
 		tty_closing = tty->count <= 1;
 		o_tty_closing = o_tty &&
 			(o_tty->count <= (pty_master ? 1 : 0));
@@ -1835,7 +1832,6 @@ int tty_release(struct inode *inode, struct file *filp)
 
 		printk(KERN_WARNING "%s: %s: read/write wait queue active!\n",
 				__func__, tty_name(tty, buf));
-		tty_unlock_pair(tty, o_tty);
 		schedule();
 	}
 



