On 05/08/2016 03:16 PM, Anthony Romano wrote:
> When fallocate is interrupted it will undo a range that extends one byte
> past its range of allocated pages. This can corrupt an in-use page by
> zeroing out its first byte. Instead, undo using the inclusive byte range.
Huh, good catch. So why is shmem_undo_range() adding +1 to the value in the first place? The only other caller is shmem_truncate_range() and all *its* callers do subtract 1 to avoid the same issue. So a nicer fix would be to remove all this +1/-1 madness. Or is there some subtle corner case I'm missing?
> Signed-off-by: Anthony Romano <anthony.romano@xxxxxxxxxx>
Looks like a stable candidate patch. Can you point out the commit that introduced the bug, for the Fixes: tag?
Thanks,
Vlastimil
> ---
>  mm/shmem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 719bd6b..f0f9405 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2238,7 +2238,7 @@ static long shmem_fallocate(struct file *file, int mode, loff_t offset,
>  			/* Remove the !PageUptodate pages we added */
>  			shmem_undo_range(inode, (loff_t)start << PAGE_SHIFT,
> -					(loff_t)index << PAGE_SHIFT, true);
> +					((loff_t)index << PAGE_SHIFT) - 1, true);
>  			goto undone;
>  	}