4.15-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Michael S. Tsirkin <mst@xxxxxxxxxx>

commit c61611f70958d86f659bca25c02ae69413747a8d upstream.

get_user_pages_fast is supposed to be a faster drop-in equivalent of
get_user_pages.  As such, callers expect it to return a negative return
code when passed an invalid address, and never expect it to return 0 when
passed a positive number of pages, since its documentation says:

 * Returns number of pages pinned.  This may be fewer than the number
 * requested.  If nr_pages is 0 or negative, returns 0.  If no pages
 * were pinned, returns -errno.

When get_user_pages_fast falls back on get_user_pages, this is exactly
what happens.  Unfortunately the implementation is inconsistent: it
returns 0 if passed a kernel address, confusing callers.  For example,
the following is pretty common but does not do the right thing with a
kernel address:

        ret = get_user_pages_fast(addr, 1, writeable, &page);
        if (ret < 0)
                return ret;

Change get_user_pages_fast to return -EFAULT when supplied a kernel
address to make it match expectations.

All callers have been audited for consistency with the documented
semantics.

Link: http://lkml.kernel.org/r/1522962072-182137-4-git-send-email-mst@xxxxxxxxxx
Fixes: 5b65c4677a57 ("mm, x86/mm: Fix performance regression in get_user_pages_fast()")
Signed-off-by: Michael S. Tsirkin <mst@xxxxxxxxxx>
Reported-by: syzbot+6304bf97ef436580fede@xxxxxxxxxxxxxxxxxxxxxxxxx
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
Cc: Huang Ying <ying.huang@xxxxxxxxx>
Cc: Jonathan Corbet <corbet@xxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Thorsten Leemhuis <regressions@xxxxxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 mm/gup.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1816,9 +1816,12 @@ int get_user_pages_fast(unsigned long st
 	len = (unsigned long) nr_pages << PAGE_SHIFT;
 	end = start + len;
 
+	if (nr_pages <= 0)
+		return 0;
+
 	if (unlikely(!access_ok(write ? VERIFY_WRITE : VERIFY_READ,
 					(void __user *)start, len)))
-		return 0;
+		return -EFAULT;
 
 	if (gup_fast_permitted(start, nr_pages, write)) {
 		local_irq_disable();
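
For reference, below is a minimal caller-side sketch of the semantics this
fix restores.  The helper name pin_one_page() is hypothetical and not part
of the patch; it only illustrates how the common "if (ret < 0) return ret;"
pattern behaves once a kernel address yields -EFAULT instead of 0.

	/* Hypothetical caller: pin exactly one user page at addr. */
	static int pin_one_page(unsigned long addr, int writeable,
				struct page **page)
	{
		int ret;

		ret = get_user_pages_fast(addr, 1, writeable, page);
		if (ret < 0)
			return ret;	/* e.g. -EFAULT for a kernel address */
		if (ret == 0)
			return -EFAULT;	/* asked for one page, none pinned */

		return 0;		/* caller releases with put_page(*page) */
	}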