Re: [RFC PATCH] mm: move xa forward when run across zombie page

On Wed, Oct 26, 2022 at 05:01:57PM +0100, Matthew Wilcox wrote:
> On Thu, Oct 20, 2022 at 10:52:14PM +0100, Matthew Wilcox wrote:
> > But I think the tests you've done refute that theory.  I'm all out of
> > ideas at the moment.
> 
> I have a new idea.  In page_cache_delete_batch(), we don't set the
> order of the entry before calling xas_store().  That means we can end
> up in a situation where we have an order-2 folio in the page cache,
> delete it and end up with a NULL pointer at (say) index 20 and sibling
> entries at indices 21-23.  We can come along (potentially much later)
> and put an order-0 folio back at index 20.  Now all of indices 20-23
> point to the index-20, order-0 folio.  Worse, the xarray node can be
> freed with the sibling entries still intact and then be reallocated by
> an entirely different xarray.
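> 
> Roughly, the sequence looks like this (an illustrative sketch only, not
> a reproducer; 'mapping', 'folio' and 'new_folio' are stand-ins, and the
> usual xas_nomem() retry loop is omitted):
> 
> 	static void sibling_hazard(struct address_space *mapping,
> 			struct folio *folio, struct folio *new_folio)
> 	{
> 		/* Store an order-2 folio: slot 20 holds the folio and
> 		 * slots 21-23 hold sibling entries pointing at slot 20. */
> 		XA_STATE_ORDER(xas, &mapping->i_pages, 20, 2);
> 
> 		xas_lock_irq(&xas);
> 		xas_store(&xas, folio);
> 		xas_unlock_irq(&xas);
> 
> 		/* Delete it as if it were order-0, which is effectively
> 		 * what page_cache_delete_batch() does today: slot 20
> 		 * becomes NULL, but the sibling entries at 21-23 are
> 		 * left behind. */
> 		xas_set_order(&xas, 20, 0);
> 		xas_lock_irq(&xas);
> 		xas_store(&xas, NULL);
> 		xas_unlock_irq(&xas);
> 
> 		/* Store an order-0 folio at index 20: lookups of indices
> 		 * 21-23 now follow the stale siblings to this folio. */
> 		xas_lock_irq(&xas);
> 		xas_store(&xas, new_folio);
> 		xas_unlock_irq(&xas);
> 	}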
> 
> I don't know if this is going to fix the problem you're seeing.  I can't
> quite draw a line from this situation to your symptoms.  I came across
> it while auditing all the places which set folio->mapping to NULL.
> I did notice a mis-ordering; all the other places first remove the folio
> from the xarray before setting folio->mapping to NULL, but I have a hard
> time connecting that to your symptoms either.
> 
> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
> index 44dd6d6e01bc..cc1fd1f849a7 100644
> --- a/include/linux/xarray.h
> +++ b/include/linux/xarray.h
> @@ -1617,6 +1617,12 @@ static inline void xas_advance(struct xa_state *xas, unsigned long index)
>  	xas->xa_offset = (index >> shift) & XA_CHUNK_MASK;
>  }
>  
> +static inline void xas_adjust_order(struct xa_state *xas, unsigned int order)
> +{
> +	xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
> +	xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
> +}
> +
>  /**
>   * xas_set_order() - Set up XArray operation state for a multislot entry.
>   * @xas: XArray operation state.
> @@ -1628,8 +1634,7 @@ static inline void xas_set_order(struct xa_state *xas, unsigned long index,
>  {
>  #ifdef CONFIG_XARRAY_MULTI
>  	xas->xa_index = order < BITS_PER_LONG ? (index >> order) << order : 0;
> -	xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
> -	xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
> +	xas_adjust_order(xas, order);
>  	xas->xa_node = XAS_RESTART;
>  #else
>  	BUG_ON(order > 0);
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 08341616ae7a..6e3f486131e4 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -305,11 +305,13 @@ static void page_cache_delete_batch(struct address_space *mapping,
>  
>  		WARN_ON_ONCE(!folio_test_locked(folio));
>  
> +		if (!folio_test_hugetlb(folio))
> +			xas_adjust_order(&xas, folio_order(folio));
> +		xas_store(&xas, NULL);
>  		folio->mapping = NULL;
>  		/* Leave folio->index set: truncation lookup relies on it */
>  
>  		i++;
> -		xas_store(&xas, NULL);
>  		total_pages += folio_nr_pages(folio);
>  	}
>  	mapping->nrpages -= total_pages;
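> 
> For reference, with the default XA_CHUNK_SHIFT of 6, xas_adjust_order()
> above works out to (my arithmetic, worth double-checking):
> 
> 	order 0: xa_shift = 0, xa_sibs = 0	(a single slot)
> 	order 2: xa_shift = 0, xa_sibs = 3	(one slot plus 3 siblings)
> 	order 6: xa_shift = 6, xa_sibs = 0	(one slot, one level up)
> 	order 9: xa_shift = 6, xa_sibs = 7	(PMD-size folio on 4K pages)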

Nope, that ain't it. I think I've got the data corruption fix sorted
now (at least, generic/270 isn't failing asserts on stray delalloc
extents anymore), so if that holds, I can spend some time actively
trying to track this down....

-Dave.
-- 
Dave Chinner
david@xxxxxxxxxxxxx


