Re: 6.6.8 stable: crash in folio_mark_dirty

On Mon, Jan 01, 2024 at 09:55:04AM +0800, Hillf Danton wrote:
> On Sun, 31 Dec 2023 13:07:03 +0000 Matthew Wilcox <willy@xxxxxxxxxxxxx>
> > On Sun, Dec 31, 2023 at 09:28:46AM +0800, Hillf Danton wrote:
> > > On Sat, Dec 30, 2023 at 10:23:26AM -0500 Genes Lists <lists@xxxxxxxxxxxx>
> > > > Apologies in advance, but I cannot git bisect this since machine was
> > > > running for 10 days on 6.6.8 before this happened.
> > > >
> > > > Dec 30 07:00:36 s6 kernel: ------------[ cut here ]------------
> > > > Dec 30 07:00:36 s6 kernel: WARNING: CPU: 0 PID: 521524 at mm/page-writeback.c:2668 __folio_mark_dirty (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: CPU: 0 PID: 521524 Comm: rsync Not tainted 6.6.8-stable-1 #13 d238f5ab6a206cdb0cc5cd72f8688230f23d58df
> > > > Dec 30 07:00:36 s6 kernel: block_dirty_folio (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: unmap_page_range (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: unmap_vmas (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: exit_mmap (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: __mmput (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: do_exit (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: do_group_exit (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: __x64_sys_exit_group (??:?) 
> > > > Dec 30 07:00:36 s6 kernel: do_syscall_64 (??:?) 
> > > 
> > > See what comes out if race is handled.
> > > Only for thoughts.
> > 
> > I don't think this can happen.  Look at the call trace;
> > block_dirty_folio() is called from unmap_page_range().  That means the
> > page is in the page tables.  We unmap the pages in a folio from the
> > page tables before we set folio->mapping to NULL.  Look at
> > invalidate_inode_pages2_range() for example:
> > 
> >                                 unmap_mapping_pages(mapping, indices[i],
> >                                                 (1 + end - indices[i]), false);
> >                         folio_lock(folio);
> >                         folio_wait_writeback(folio);
> >                         if (folio_mapped(folio))
> >                                 unmap_mapping_folio(folio);
> >                         BUG_ON(folio_mapped(folio));
> >                         if (!invalidate_complete_folio2(mapping, folio))
> > 
> What I had missed is that the same check [1] already exists in
> invalidate_inode_pages2_range(), so I was not adding anything new.
> 
> 			folio_lock(folio);
> 			if (unlikely(folio->mapping != mapping)) {
> 				folio_unlock(folio);
> 				continue;
> 			}
> 
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/truncate.c#n658

That's entirely different.  That's checking in the truncate path whether
somebody else already truncated this page.  What I was showing was why
a page found through a page table walk cannot have been truncated (which
is actually quite interesting, because it's the page table lock that
prevents the race).



