On Fri, Sep 06, 2019 at 05:56:18AM +1000, Aleksa Sarai wrote:
> On 2019-09-05, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> > On Thu, Sep 05, 2019 at 08:23:03PM +0200, Christian Brauner wrote:
> >
> > > Because every caller of that function right now has that limit set
> > > anyway iirc. So we can either remove it from here and place it back for
> > > the individual callers or leave it in the helper.
> > > Also, I'm really asking, why not? Is it unreasonable to have an upper
> > > bound on the size (for a long time probably) or are you disagreeing with
> > > PAGE_SIZE being used? PAGE_SIZE limit is currently used by sched, perf,
> > > bpf, and clone3 and in a few other places.
> >
> > For a primitive that can be safely used with any size (OK, any within
> > the usual 2Gb limit)? Why push the random policy into the place where
> > it doesn't belong?
> >
> > Seriously, what's the point? If they want to have a large chunk of
> > userland memory zeroed or checked for non-zeroes - why would that
> > be a problem?
>
> Thinking about it some more, there isn't really any r/w amplification --
> so there isn't much to gain by passing giant structs. Though, if we are
> going to permit 2GB buffers, isn't that also an argument to use
> memchr_inv()? :P

I'm not sure I understand the last bit.  If you look at what
copy_from_user() does on misaligned source/destination, especially on
architectures that really, really do not like unaligned access...
Case in point: alpha (and it's not unusual in that respect).
What it boils down to is

	copy bytes until the destination is aligned
	if source and destination are both aligned
		copy word by word
	else
		read word by word, storing the mix of two adjacent words
	copy the rest byte by byte

The unpleasant case (to and from having different remainders modulo 8)
is basically

	if (count >= 8) {
		u64 *aligned = (u64 *)(from & ~7);
		u64 *dest = (u64 *)to;
		int bitshift = (from & 7) * 8;
		u64 prev, next;

		prev = aligned[0];
		do {
			next = aligned[1];
			prev <<= bitshift;
			prev |= next >> (64 - bitshift);
			*dest++ = prev;
			aligned++;
			prev = next;
			from += 8;
			to += 8;
			count -= 8;
		} while (count >= 8);
	}

Now, mix that with "... and do memchr_inv() on the copy to find if we'd
copied any non-zeroes, nevermind where" and it starts looking really
ridiculous.  We should just read the fscking source, aligned down to
word boundary, and check each word being read.  The first and the last
ones - masked.  All there is to it.  On almost all architectures that'll
work well enough; s390 might want something more elaborate (there even
word-by-word copies are costly, but I'd suggest talking to them for
details).

Something like

	bool all_zeroes_user(const void __user *p, size_t count)

would probably be a sane API...