On Tue, 2013-02-05 at 15:20 -0500, Peter Hurley wrote:
> The tty core relies on the ldisc layer for synchronizing destruction
> of the tty. Instead, the final tty release must wait for any pending tty
> work to complete prior to tty destruction.
>
> Signed-off-by: Peter Hurley <peter@xxxxxxxxxxxxxxxxxx>
> ---
>  drivers/tty/tty_io.c    | 17 +++++++++++++++++
>  drivers/tty/tty_ldisc.c | 24 ++++--------------------
>  2 files changed, 21 insertions(+), 20 deletions(-)

...

> diff --git a/drivers/tty/tty_ldisc.c b/drivers/tty/tty_ldisc.c
> index e0fdfec..c2837b2 100644
> --- a/drivers/tty/tty_ldisc.c
> +++ b/drivers/tty/tty_ldisc.c
> @@ -499,18 +499,6 @@ static void tty_ldisc_restore(struct tty_struct *tty, struct tty_ldisc *old)
>  }
>
>  /**
> - * tty_ldisc_flush_works - flush all works of a tty
> - * @tty: tty device to flush works for
> - *
> - * Sync flush all works belonging to @tty.
> - */
> -static void tty_ldisc_flush_works(struct tty_struct *tty)
> -{
> -	flush_work(&tty->SAK_work);
> -	flush_work(&tty->hangup_work);
> -}
> -
> -/**
>   * tty_ldisc_wait_idle - wait for the ldisc to become idle
>   * @tty: tty to wait for
>   * @timeout: for how long to wait at most
> @@ -726,13 +714,13 @@ int tty_set_ldisc(struct tty_struct *tty, int ldisc)
>  	retval = tty_ldisc_halt(tty, o_tty, &work, &o_work, 5 * HZ);
>
>  	/*
> -	 * Wait for ->hangup_work to terminate.
> +	 * Wait for hangup to complete, if pending.
>  	 * We must drop the mutex here in case a hangup is also in process.
>  	 */
>
>  	mutex_unlock(&tty->ldisc_mutex);
>
> -	tty_ldisc_flush_works(tty);
> +	flush_work(&tty->hangup_work);

Careful review will note that I dropped waiting for SAK.

That's because it makes no sense to wait for SAK_work here -- i.e., while
setting a new ldisc. The SAK work can just as easily run at the completion
of tty_set_ldisc() at tty_unlock().

I believe this is an artifact of the formerly shared code.

But maybe I should note that in the commit message?
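To be explicit about the flush_work() behaviour I'm relying on above, here is
a minimal, hypothetical sketch -- a toy module, not part of this patch, with
made-up names. flush_work() only waits for an already-queued instance of the
work to finish; it does nothing to prevent the work from being queued again
immediately afterwards, which is exactly what can happen to SAK_work the
moment tty_set_ldisc() drops the tty lock.

#include <linux/module.h>
#include <linux/workqueue.h>

/* Stand-in for a tty work item such as SAK_work or hangup_work. */
static void demo_fn(struct work_struct *work)
{
	pr_info("demo work ran\n");
}

static DECLARE_WORK(demo_work, demo_fn);

static int __init demo_init(void)
{
	schedule_work(&demo_work);

	/* Returns only once the queued demo_fn() has finished executing... */
	flush_work(&demo_work);

	/*
	 * ...but nothing stops the work from being queued again right after,
	 * just as SAK_work can be queued again as soon as tty_set_ldisc()
	 * unlocks the tty.
	 */
	schedule_work(&demo_work);
	return 0;
}

static void __exit demo_exit(void)
{
	flush_work(&demo_work);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");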