On 8/23/17 3:37 PM, Linus Torvalds wrote:
> On Wed, Aug 23, 2017 at 12:15 PM, Doug Nazar <nazard@xxxxxxxx> wrote:
>> The following commits cause short reads of block devices; writes, however,
>> are still allowed.
>>
>> c2a9737f45e2 ("vfs,mm: fix a dead loop in truncate_inode_pages_range()")
>> d05c5f7ba164 ("vfs,mm: fix return value of read() at s_maxbytes")
>>
>> When e2fsck sees this, it thinks it's a bad sector and tries to write a
>> block of nulls, which overwrites the valid data.
>
> Hmm. Block devices shouldn't have issues with s_maxbytes, and I'm
> surprised that nobody has seen that before.
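
For reference, the check that trips here is the s_maxbytes clamp which
d05c5f7ba164 added near the top of do_generic_file_read() in mm/filemap.c.
Quoting it from memory, so treat this as a paraphrase rather than the exact
hunk:

	/* Any read starting at or beyond s_maxbytes returns 0, and longer
	 * reads are truncated to s_maxbytes -- so a too-small s_maxbytes on
	 * the blockdev superblock shows up as short reads. */
	if (unlikely(*ppos >= inode->i_sb->s_maxbytes))
		return 0;

	iov_iter_truncate(iter, inode->i_sb->s_maxbytes);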
>
>> Device is LVM over 2 x RAID-5 on an old 32bit desktop.
>>
>> RO    RA   SSZ   BSZ   StartSec            Size   Device
>> rw  4096   512  4096          0   9748044840960   /dev/Storage/Main
>
> .. and the problem may be as simple as just a missing initialization
> of s_maxbytes for blockdev_superblock.
>
> Does the attached trivial one-liner fix things for you?
>
> Al, if it really is this simple, how come nobody even noticed?
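
The attachment itself isn't reproduced here. As a rough sketch of what such a
one-liner could look like (my guess from the description above, not the actual
patch), it would set the limit wherever the bdev pseudo-filesystem superblock
is set up in fs/block_dev.c:

	/* Sketch only -- not the attached patch. Give the blockdev
	 * superblock an explicit limit instead of the pseudo-fs default. */
	sb->s_maxbytes = MAX_LFS_FILESIZE;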
>
> Also, I do wonder if that check in do_generic_file_read() should just
> unconditionally use MAX_LFS_FILESIZE, since the whole point there is
> really about the index wrap-around, not about any underlying
> filesystem limits per se.
>
> And that's exactly what MAX_LFS_FILESIZE is - the maximum size that
> fits in the page index.
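
For context, the definition in include/linux/fs.h around that time was, as far
as I remember (quoted from memory, so double-check against the tree):

#if BITS_PER_LONG == 32
/* With 4 KiB pages this is (4096 << 31) - 1 = 0x7ffffffffff, i.e. just
 * under 8 TiB, since only 31 bits of the page index are used here. */
#define MAX_LFS_FILESIZE	(((loff_t)PAGE_SIZE << (BITS_PER_LONG-1))-1)
#elif BITS_PER_LONG == 64
#define MAX_LFS_FILESIZE	((loff_t)0x7fffffffffffffffLL)
#endif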
It's compiling now, but I think it's already set to MAX_LFS_FILESIZE.
[ 169.095127] ppos=80180006000, s_maxbytes=7ffffffffff, magic=0x62646576, type=bdev
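
Those values line up with the 32-bit limit: s_maxbytes is 0x7ffffffffff (just
under 8 TiB), the device is ~9.7 TB, and the read position 0x80180006000 is
already past the limit, so the clamp fires. A quick userspace check, purely
illustrative, with the numbers copied from above:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int64_t max_lfs = ((int64_t)4096 << 31) - 1; /* 32-bit, 4 KiB pages */
	int64_t ppos    = 0x80180006000LL;           /* from the debug line */
	int64_t devsize = 9748044840960LL;           /* ~9.7 TB LVM volume */

	printf("MAX_LFS_FILESIZE = %#llx\n", (unsigned long long)max_lfs);
	printf("device size      = %lld (%s limit)\n",
	       (long long)devsize, devsize > max_lfs ? "over" : "within");
	printf("ppos             = %#llx (%s limit)\n",
	       (unsigned long long)ppos, ppos > max_lfs ? "over" : "within");
	return 0;
}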
Doug