On Sun, Jun 4, 2017 at 3:06 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
>
> Now that I'm hopefully sufficiently awake... Folks, could you try this:

Ok, that looks believable, but complex. So it does make me wonder if it's
worth it, particularly considering that we don't really have a maintainer,
and it took people this long to even notice that huge glaring 2GB limit.

In fact, once we raise it past the 2GB limit, most of the s_maxbytes
reasons go away - we will already be passing around values that have the
high bit set in "int", and one of the main reasons for s_maxbytes was to
limit overflow damage in filesystem code that passed "int" around where it
shouldn't.

So assuming we trust that UFS doesn't do that (and considering that it
uses the default VFS helpers for reading etc, it's presumably all good),
we might as well just use the MAX_LFS_FILESIZE define. It's not as if we
need to get s_maxbytes exactly right.

All we *really* care about is to get the LFS case ok for code that is
limited to 31 bits, and to not overflow the page index when we use the
page cache (which MAX_LFS_FILESIZE does already).

Past that, any extra precision can avoid a few unnecessary calls down to
the filesystem (ie not bother to do extra readpage calls for cases we know
aren't relevant), but it shouldn't be a big deal.

Linus