Re: [PATCH] iomap: set page dirty after partial delalloc on mkwrite

On Fri, Sep 28, 2018 at 01:39:56PM -0400, Brian Foster wrote:
> The iomap page fault mechanism currently dirties the associated page
> only after the full block range of the page has been allocated. On
> filesystems with a block size smaller than the page size, this leaves
> the page susceptible to picking up delayed allocations without ever
> being set dirty.
> 
> For example, consider a page fault on a page with one preexisting
> real (non-delalloc) block allocated in the middle of the page. The
> first iomap_apply() iteration performs delayed allocation on the
> range up to the preexisting block, the next iteration finds the
> preexisting block, and the last iteration attempts to perform
> delayed allocation on the range after the preexisting block to the
> end of the page. If the first allocation succeeds and the final
> allocation fails with -ENOSPC, iomap_apply() returns the error and
> iomap_page_mkwrite() fails to dirty the page having already
> performed partial delayed allocation. This eventually results in the
> page being invalidated without ever converting the delayed
> allocation to real blocks.
> 
> This problem is reliably reproduced by generic/083 on XFS on ppc64
> systems (64k page size, 4k block size). It results in leaked
> delalloc blocks on inode reclaim, which triggers an assert failure
> in xfs_fs_destroy_inode() and filesystem accounting inconsistency.
> 
> Move the set_page_dirty() call from iomap_page_mkwrite() to the
> actor callback, similar to how the buffer head implementation works.
> The actor callback is called iff ->iomap_begin() returns success, so
> this ensures the page is dirtied as soon as possible after an
> allocation.
> 
> Signed-off-by: Brian Foster <bfoster@xxxxxxxxxx>

Looks sensible to me.
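
If I'm reading it right, the shape of the change is roughly this
(paraphrased sketch, not the actual hunk):

    static loff_t
    iomap_page_mkwrite_actor(struct inode *inode, loff_t pos,
            loff_t length, void *data, struct iomap *iomap)
    {
        struct page *page = data;

        /* ... existing per-range block setup ... */

        /*
         * Dirty the page here, immediately after ->iomap_begin()
         * has succeeded for this sub-page range, instead of once
         * after the whole apply loop in iomap_page_mkwrite().
         */
        set_page_dirty(page);
        return length;
    }

so a partial delalloc followed by -ENOSPC on a later range still
leaves the page dirty and the delalloc blocks visible to writeback.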

Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>

> ---
> 
> This patch addresses the problem with generic/083, but I'm still in the
> process of running broader testing. I wanted to get it on the list for
> review in the meantime...

It's been fine in my weekend test runs on my candidate-fixes-for-4.19
branch. No regressions from it in the v4 512 byte and v5 1k block
size test runs - I got 14 complete fstests runs of sub-page block
size configs across multiple test machines over the weekend - so
I've pushed it with that branch.

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


