On 7/25/23 14:51, Matthew Wilcox wrote:
> On Tue, Jul 25, 2023 at 01:24:03PM +0300, Kirill A . Shutemov wrote:
>> On Tue, Jul 18, 2023 at 04:44:53PM -0700, Sean Christopherson wrote:
>> > diff --git a/mm/compaction.c b/mm/compaction.c
>> > index dbc9f86b1934..a3d2b132df52 100644
>> > --- a/mm/compaction.c
>> > +++ b/mm/compaction.c
>> > @@ -1047,6 +1047,10 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>> >  		if (!mapping && (folio_ref_count(folio) - 1) > folio_mapcount(folio))
>> >  			goto isolate_fail_put;
>> >  
>> > +		/* The mapping truly isn't movable. */
>> > +		if (mapping && mapping_unmovable(mapping))
>> > +			goto isolate_fail_put;
>> > +
>> 
>> I doubt that it is safe to dereference mapping here. I believe the folio
>> can be truncated from under us and the mapping freed with the inode.
>> 
>> The folio has to be locked to dereference mapping safely (given that the
>> mapping is still tied to the folio).
> 
> There's even a comment to that effect later on in the function:
> 
> 		/*
> 		 * Only pages without mappings or that have a
> 		 * ->migrate_folio callback are possible to migrate
> 		 * without blocking. However, we can be racing with
> 		 * truncation so it's necessary to lock the page
> 		 * to stabilise the mapping as truncation holds
> 		 * the page lock until after the page is removed
> 		 * from the page cache.
> 		 */
> 
> (that could be reworded to make it clear how dangerous dereferencing
> ->mapping is without the lock ... and it does need to be changed to say
> "folio lock" instead of "page lock", so ...)
> 
> How does this look?
> 
> 		/*
> 		 * Only folios without mappings or that have
> 		 * a ->migrate_folio callback are possible to
> 		 * migrate without blocking. However, we can
> 		 * be racing with truncation, which can free
> 		 * the mapping. Truncation holds the folio lock
> 		 * until after the folio is removed from the page
> 		 * cache so holding it ourselves is sufficient.
> 		 */

Incorporated into my attempt at a fix (posted separately per the requested
process):

https://lore.kernel.org/all/20230901082025.20548-2-vbabka@xxxxxxx/
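
For context, a minimal sketch of the locking pattern the thread describes:
dereference ->mapping only while holding the folio lock, so that truncation
(which holds the folio lock until the folio is removed from the page cache)
cannot free the inode and its address_space underneath us. This is not the
actual patch from the lore link above; the helper name folio_is_unmovable()
is hypothetical, while mapping_unmovable() is the predicate introduced by
the diff under discussion.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical helper (illustrative only): decide whether a folio's
 * mapping forbids migration, dereferencing ->mapping only under the
 * folio lock.
 */
static bool folio_is_unmovable(struct folio *folio)
{
	struct address_space *mapping;
	bool unmovable = false;

	/*
	 * Compaction's isolation path must not sleep, so only try to
	 * take the lock; a contended folio is treated as unmovable
	 * (i.e. skipped) rather than waited on.
	 */
	if (!folio_trylock(folio))
		return true;

	/*
	 * ->mapping is now stable: truncation holds the folio lock
	 * until the folio has been removed from the page cache, so
	 * the mapping cannot be freed while we hold the lock.
	 */
	mapping = folio_mapping(folio);
	if (mapping && mapping_unmovable(mapping))
		unmovable = true;

	folio_unlock(folio);
	return unmovable;
}

folio_trylock() rather than folio_lock() mirrors how compaction's
isolation path bails out on contention instead of blocking.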