The code for shmem_undo_range() is very similar to truncate_inode_pages_range(), so I assume that's why it's using an inclusive range.
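For reference, the shared convention is that lend names the last byte to remove, not one past it; a rough sketch (paraphrased from my reading of mm/truncate.c, argument names mine):

/* Illustration only, not verbatim kernel code: both
 * truncate_inode_pages_range() and shmem_undo_range() treat lend as
 * the *inclusive* offset of the last byte to remove.
 */
truncate_inode_pages_range(mapping, off, off + len - 1); /* drop [off, off+len) */
truncate_inode_pages_range(mapping, off, (loff_t)-1);    /* drop off..EOF */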
It appears the bug was introduced in 1635f6a74152f1dcd1b888231609d64875f0a81a
On Mon, May 16, 2016 at 4:59 AM, Vlastimil Babka <vbabka@xxxxxxx> wrote:
On 05/08/2016 03:16 PM, Anthony Romano wrote:
When fallocate is interrupted, it will undo a range that extends one byte
past its range of allocated pages. This can corrupt an in-use page by
zeroing out its first byte. Instead, undo using the inclusive byte range.
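To make the off-by-one concrete, consider a hypothetical run with 4 KiB pages (PAGE_SHIFT = 12), sketched here as standalone userspace arithmetic:

#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4 KiB pages for the example */

int main(void)
{
	/* Hypothetical values: pages 0 and 1 were allocated before the
	 * interruption, so start = 0 and index = 2 (one past the last
	 * allocated page).
	 */
	long long start = 0, index = 2;
	long long lstart = start << PAGE_SHIFT;

	long long old_lend = index << PAGE_SHIFT;	/* 8192 */
	long long new_lend = (index << PAGE_SHIFT) - 1;	/* 8191 */

	/* Since lend is inclusive, 8192 names byte 0 of page 2, a page
	 * fallocate never touched, so the undo zeroes that byte; 8191
	 * is the last byte of page 1, the last page actually added.
	 */
	printf("buggy: undo [%lld, %lld], end is byte 0 of page %lld\n",
	       lstart, old_lend, old_lend >> PAGE_SHIFT);
	printf("fixed: undo [%lld, %lld], end is last byte of page %lld\n",
	       lstart, new_lend, new_lend >> PAGE_SHIFT);
	return 0;
}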
Huh, good catch. So why is shmem_undo_range() adding +1 to the value in the first place? The only other caller is shmem_truncate_range() and all *its* callers do subtract 1 to avoid the same issue. So a nicer fix would be to remove all this +1/-1 madness. Or is there some subtle corner case I'm missing?
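For reference, the +1 I'm referring to is the page-index conversion at the top of shmem_undo_range(); roughly this (paraphrased from mm/shmem.c, details elided, not a verbatim copy):

/* Simplified sketch of shmem_undo_range(); the real code also handles
 * partial first/last pages, swap entries, etc.
 */
static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
			     bool unfalloc)
{
	pgoff_t start = (lstart + PAGE_SIZE - 1) >> PAGE_SHIFT;
	pgoff_t end = (lend + 1) >> PAGE_SHIFT;	/* +1: lend is inclusive */

	/* ... remove whole pages in [start, end), then zero the covered
	 * part of a partial final page -- which, when a caller passes an
	 * exclusive end by mistake, is byte 0 of the page *after* the
	 * intended range ...
	 */
}

Callers compensate with -1, e.g. the PUNCH_HOLE path in shmem_fallocate() does shmem_truncate_range(inode, offset, offset + len - 1), if I'm reading it right.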
Signed-off-by: Anthony Romano <anthony.romano@xxxxxxxxxx>
Looks like a stable candidate patch. Can you point out the commit that introduced the bug, for the Fixes: tag?
Thanks,
Vlastimil
---
mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 719bd6b..f0f9405 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
 			/* Remove the !PageUptodate pages we added */
 			shmem_undo_range(inode,
 					(loff_t)start << PAGE_SHIFT,
-					(loff_t)index << PAGE_SHIFT, true);
+					((loff_t)index << PAGE_SHIFT) - 1, true);
 			goto undone;
 		}