On Sat, Sep 24, 2016 at 5:59 AM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> * splice_to_pipe() stops at pipe overflow and does *not* take pipe_lock
> * ->splice_read() instances do the same
> * vmsplice_to_pipe() and do_splice() (ultimate callers of splice_to_pipe())
>   arrange for waiting, looping, etc. themselves.
>
> That should make pipe_lock the outermost one.
>
> Unfortunately, existing rules for the amount passed by vmsplice_to_pipe()
> and do_splice() are quite ugly _and_ userland code can be easily broken
> by changing those.  It's not even "no more than the maximal capacity of
> this pipe" - it's "once we'd fed pipe->nr_buffers pages into the pipe,
> leave instead of waiting".
>
> Considering how poorly these rules are documented, let's try "wait for some
> space to appear, unless given SPLICE_F_NONBLOCK, then push into pipe
> and if we run into overflow, we are done".
>
> Signed-off-by: Al Viro <viro@xxxxxxxxxxxxxxxxxx>
> ---
>  fs/fuse/dev.c |   2 -
>  fs/splice.c   | 138 +++++++++++++++++++++++++++------------------------
>  2 files changed, 63 insertions(+), 77 deletions(-)
>
[...]
> @@ -1546,14 +1528,20 @@ static long vmsplice_to_pipe(struct file *file, const struct iovec __user *uiov,
>  			return -ENOMEM;
>  	}
>
> -	spd.nr_pages = get_iovec_page_array(&from, spd.pages,
> -					    spd.partial,
> -					    spd.nr_pages_max);
> -	if (spd.nr_pages <= 0)
> -		ret = spd.nr_pages;
> -	else
> -		ret = splice_to_pipe(pipe, &spd);
> -
> +	pipe_lock(pipe);
> +	ret = wait_for_space(pipe, flags);
> +	if (!ret) {
> +		spd.nr_pages = get_iovec_page_array(&from, spd.pages,
> +						    spd.partial,
> +						    spd.nr_pages_max);
> +		if (spd.nr_pages <= 0)
> +			ret = spd.nr_pages;
> +		else
> +			ret = splice_to_pipe(pipe, &spd);
> +		pipe_unlock(pipe);
> +		if (ret > 0)
> +			wakeup_pipe_readers(pipe);
> +	}

Unbalanced pipe_lock()?  pipe_unlock() sits inside the "if (!ret)" branch,
so when wait_for_space() fails we return with the pipe still locked.

Also, while it doesn't hurt, the constification of the "from" argument of
get_iovec_page_array() looks like mere noise in this patch.
Thanks,
Miklos