On 08/07/2018 09:54 PM, Jan Kara wrote:
On Mon 06-08-18 15:59:27, Andrew Morton wrote:
On Mon, 6 Aug 2018 12:22:03 +0200 Jan Kara <jack@xxxxxxx> wrote:
On Fri 20-07-18 16:14:29, Andrew Morton wrote:
On Thu, 19 Jul 2018 10:58:12 +0200 Jan Kara <jack@xxxxxxx> wrote:
On Thu 19-07-18 16:17:26, Chengguang Xu wrote:
When we truncate the read count in generic_file_buffered_read(), we
should pass (sb->s_maxbytes - offset) as the maximum count, not
sb->s_maxbytes itself.
Signed-off-by: Chengguang Xu <cgxu519@xxxxxxx>
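
For illustration, here is a minimal userspace sketch of the arithmetic the
patch description refers to. The helper, the types and the 2 GiB limit are
made up for the example; this is not the actual mm/filemap.c code, which
presumably does the truncation on the iov_iter instead.

#include <stdio.h>
#include <inttypes.h>

/* Clamp a read so that [pos, pos + count) stays below maxbytes. */
static int64_t clamp_read_count(int64_t pos, int64_t count, int64_t maxbytes)
{
        if (pos >= maxbytes)
                return 0;
        if (count > maxbytes - pos)     /* the fix: subtract the offset */
                count = maxbytes - pos;
        return count;
}

int main(void)
{
        int64_t maxbytes = 1LL << 31;   /* made-up 2 GiB filesystem limit */
        int64_t pos = maxbytes - 4096;  /* read starting just below the limit */
        int64_t count = 65536;

        /* Old behaviour: clamp count to maxbytes, so the end can pass the limit. */
        int64_t old_end = pos + (count > maxbytes ? maxbytes : count);
        /* Described fix: clamp count to maxbytes - pos. */
        int64_t new_end = pos + clamp_read_count(pos, count, maxbytes);

        printf("limit %" PRId64 ", old end %" PRId64 ", fixed end %" PRId64 "\n",
               maxbytes, old_end, new_end);
        return 0;
}

With the old clamp the read window can end past the filesystem limit;
clamping to (maxbytes - pos) keeps it at the limit.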
Looks good to me. You can add:
Reviewed-by: Jan Kara <jack@xxxxxxx>
Yup.
What are the runtime effects of this bug?
Good question. I think ->readpage() could be called for an index beyond the
maximum file size supported by the filesystem, leading to weird filesystem
behavior due to overflows in internal calculations.
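
A rough sketch of the page-index arithmetic this refers to, assuming 4 KiB
pages and a made-up s_maxbytes; it only models what the unclamped count would
permit in terms of page indices (ignoring the i_size check discussed below),
not the real page cache code.

#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT 12   /* assume 4 KiB pages */

int main(void)
{
        int64_t maxbytes = 1LL << 31;   /* made-up s_maxbytes */
        int64_t pos = maxbytes - 4096;  /* read starting near the limit */
        int64_t count = maxbytes;       /* old clamp: count capped at s_maxbytes */

        int64_t max_index  = (maxbytes - 1) >> PAGE_SHIFT;
        int64_t last_index = (pos + count - 1) >> PAGE_SHIFT;

        printf("highest legal page index %" PRId64 ", read asks for up to %" PRId64 "\n",
               max_index, last_index);
        return 0;
}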
Sure. But is it possible for userspace to trigger this behaviour?
Possibly all callers have already sanitized the arguments by this stage,
in which case the statement is arguably redundant.
So I don't think there's any sanitization going on before
generic_file_buffered_read(). E.g. I don't see any s_maxbytes check on the
ksys_read() -> vfs_read() -> __vfs_read() -> new_sync_read() ->
call_read_iter() -> generic_file_read_iter() ->
generic_file_buffered_read() path... However, thinking about this again:
we are guaranteed that i_size is within s_maxbytes (the places modifying
i_size check for this), and generic_file_buffered_read() stops when it
would read beyond i_size. So in the end I don't think any breakage is
possible, and the patch is not necessary?
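
A simplified model of the guarantee described here (not the real buffered
read loop): as long as i_size <= s_maxbytes, stopping at i_size also keeps
every requested page index below the filesystem limit. The helper and the
numbers are invented for illustration.

#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT 12   /* assume 4 KiB pages */

/* Index of the last page a read that stops at EOF would touch, or -1. */
static int64_t last_index_touched(int64_t pos, int64_t count, int64_t i_size)
{
        int64_t end = pos + count;

        if (end > i_size)       /* the read loop stops at i_size */
                end = i_size;
        if (end <= pos)
                return -1;      /* nothing to read */
        return (end - 1) >> PAGE_SHIFT;
}

int main(void)
{
        int64_t s_maxbytes = 1LL << 31;         /* made-up limit */
        int64_t i_size = s_maxbytes;            /* i_size never exceeds s_maxbytes */
        int64_t pos = s_maxbytes - 4096;

        printf("last index touched %" PRId64 ", highest legal index %" PRId64 "\n",
               last_index_touched(pos, 1 << 20, i_size),
               (s_maxbytes - 1) >> PAGE_SHIFT);
        return 0;
}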
I think i_size is within s_maxbytes most of the time on a local filesystem,
but consider a network filesystem: write a big file from a 64-bit client
and then read it from a 32-bit client. In that case, maybe
generic_file_buffered_read() can read beyond s_maxbytes, right?
Thanks,
Chengguang
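
Plugging that scenario into the same kind of simplified model: if the
client-side s_maxbytes is smaller than the i_size reported by the server,
the EOF check alone no longer bounds the read, so the (s_maxbytes - offset)
clamp matters. The limits below are made-up numbers, not what any particular
filesystem actually reports.

#include <stdio.h>
#include <inttypes.h>

int main(void)
{
        int64_t s_maxbytes = 1LL << 43;         /* made-up 32-bit client limit (8 TiB) */
        int64_t i_size = 1LL << 44;             /* file size reported by the server (16 TiB) */
        int64_t pos = s_maxbytes - 4096;        /* read starting just below the client limit */
        int64_t count = 1 << 20;

        int64_t end = pos + count;
        if (end > i_size)                       /* EOF check does not trigger here */
                end = i_size;

        printf("read ends at %" PRId64 ", client-side limit is %" PRId64 "\n",
               end, s_maxbytes);
        return 0;
}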