Re: [PATCH] mm/migrate: fix deadlock in migrate_pages_batch() on large folios

Hi,

On 2024/7/29 05:46, Matthew Wilcox wrote:
> On Sun, Jul 28, 2024 at 11:49:13PM +0800, Gao Xiang wrote:
>> It was found by compaction stress test when I explicitly enable EROFS
>> compressed files to use large folios, which case I cannot reproduce with
>> the same workload if large folio support is off (current mainline).
>> Typically, filesystem reads (with locked file-backed folios) could use
>> another bdev/meta inode to load some other I/Os (e.g. inode extent
>> metadata or caching compressed data), so the locking order will be:
>
> Umm.  That is a new constraint to me.  We have two other places which
> take the folio lock in a particular order.  Writeback takes locks on
> folios belonging to the same inode in ascending ->index order.  It
> submits all the folios for write before moving on to lock other inodes,
> so it does not conflict with this new constraint you're proposing.

BTW, I don't believe this ordering is new or unique to EROFS; if you
consider ext4 or ext2, for example, they also use sb_bread() (buffer
heads on the bdev inode) to trigger some metadata I/Os.

Take ext2 for simplicity:
  ext2_readahead
    mpage_readahead
      ext2_get_block
        ext2_get_blocks
          ext2_get_branch
            sb_bread     <-- reads the metadata needed for this data I/O


> The other place is remap_file_range().  Both inodes in that case must be
> regular files,
>         if (!S_ISREG(inode_in->i_mode) || !S_ISREG(inode_out->i_mode))
>                 return -EINVAL;
> so this new rule is fine.
>
> Does anybody know of any _other_ ordering constraints on folio locks?  I'm
> willing to write them down ...

Personally, I can't think of any particular ordering between two folio
locks across different inodes, so I think batched folio locking always
needs to be handled with care.


>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 20cb9f5f7446..a912e4b83228 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -1483,7 +1483,8 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
>>  {
>>  	int rc;
>> -	folio_lock(folio);
>> +	if (!folio_trylock(folio))
>> +		return -EAGAIN;
>>  	rc = split_folio_to_list(folio, split_folios);
>>  	folio_unlock(folio);
>>  	if (!rc)

> This feels like the best quick fix to me since migration is going to
> walk the folios in a different order from writeback.  I'm surprised
> this hasn't already bitten us, to be honest.

My stress workload explicitly triggers compaction together with other
EROFS read loads.  I'm not sure whether others test like this too, but

https://lore.kernel.org/r/20240418001356.95857-1-mcgrof@xxxxxxxxxx

seems like a similar load.

Thanks,
Gao Xiang


> (ie I don't think this is even necessarily connected to the new
> ordering constraint; I think migration and writeback can already
> deadlock)



