On Mon, 14 Nov 2011 17:15:24 +0100 Jan Kara <jack@xxxxxxx> wrote:

> Currently write(2) to a file is not interruptible by a signal. Sometimes this
> is desirable (e.g. when you want to quickly kill a process hogging your disk or
> when some process gets blocked in balance_dirty_pages() indefinitely due to a
> filesystem being in an error condition).
>
> Reported-by: Kazuya Mio <k-mio@xxxxxxxxxxxxx>
> Tested-by: Kazuya Mio <k-mio@xxxxxxxxxxxxx>
> Signed-off-by: Jan Kara <jack@xxxxxxx>
> ---
>  mm/filemap.c |   11 +++++++++--
>  1 files changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/mm/filemap.c b/mm/filemap.c
> index c0018f2..166b30e 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -2407,7 +2407,6 @@ static ssize_t generic_perform_write(struct file *file,
>  						iov_iter_count(i));
>
>  again:
> -
>  		/*
>  		 * Bring in the user page that we will copy from _first_.
>  		 * Otherwise there's a nasty deadlock on copying from the
> @@ -2463,7 +2462,15 @@ again:
>  		written += copied;
>
>  		balance_dirty_pages_ratelimited(mapping);
> -
> +		/*
> +		 * We check the signal independently of balance_dirty_pages()
> +		 * because we need not wait and check for signal there although
> +		 * this loop could have taken significant amount of time...
> +		 */
> +		if (fatal_signal_pending(current)) {
> +			status = -EINTR;
> +			break;
> +		}
>  	} while (iov_iter_count(i));
>
>  	return written ? written : status;

Will this permit the interruption of things like fsync() or sync()?  If
so, considerable pondering is needed.

Also I worry about stuff like the use of buffered writes to finish off
loose ends in direct-IO writing.  Sometimes these writes MUST complete,
to prevent exposing unwritten disk blocks to a subsequent read.  Will a
well-timed ^C break this?  If "no", then does this change introduce a risk
that we'll later accidentally introduce such a security hole?