On Thu, Mar 10, 2022 at 8:59 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> On Wed, Mar 2, 2022 at 12:30 AM Muchun Song <songmuchun@xxxxxxxxxxxxx> wrote:
> >
> > Currently dax_mapping_entry_mkclean() fails to clean and write protect
> > the pte entry within a DAX PMD entry during an *sync operation. This
> > can result in data loss in the following sequence:
> >
> > 1) process A mmap write to DAX PMD, dirtying PMD radix tree entry and
> >    making the pmd entry dirty and writeable.
> > 2) process B mmap with the @offset (e.g. 4K) and @length (e.g. 4K)
> >    write to the same file, dirtying PMD radix tree entry (already
> >    done in 1)) and making the pte entry dirty and writeable.
> > 3) fsync, flushing out PMD data and cleaning the radix tree entry. We
> >    currently fail to mark the pte entry as clean and write protected
> >    since the vma of process B is not covered in dax_entry_mkclean().
> > 4) process B writes to the pte. These don't cause any page faults since
> >    the pte entry is dirty and writeable. The radix tree entry remains
> >    clean.
> > 5) fsync, which fails to flush the dirty PMD data because the radix tree
> >    entry was clean.
> > 6) crash - dirty data that should have been fsync'd as part of 5) could
> >    still have been in the processor cache, and is lost.
>
> Excellent description.
>
> >
> > Just to use pfn_mkclean_range() to clean the pfns to fix this issue.
>
> So the original motivation for CONFIG_FS_DAX_LIMITED was for archs
> that do not have spare PTE bits to indicate pmd_devmap(). So this fix
> can only work in the CONFIG_FS_DAX_LIMITED=n case and in that case it
> seems you can use the current page_mkclean_one(), right?

I don't know the history of CONFIG_FS_DAX_LIMITED. page_mkclean_one()
needs a struct page associated with the pfn; do the struct pages exist
when CONFIG_FS_DAX_LIMITED=y and FS_DAX_PMD=n? If yes, I think you are
right, but I don't see that guarantee. I am not familiar with the DAX
code, so what am I missing here?

Thanks.
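
For what it's worth, here is a rough sketch of the distinction I am asking
about. It is not from the patch: the dax_clean_pfn() helper and its control
flow are hypothetical, page_mkclean() is used as the public entry point that
ends up in page_mkclean_one(), and the pfn_mkclean_range() signature is
assumed to be the one proposed in this series:

	#include <linux/mm.h>
	#include <linux/pfn_t.h>
	#include <linux/rmap.h>

	/*
	 * Hypothetical helper, only to illustrate the question above: the
	 * rmap-based path needs a struct page behind the pfn, which a
	 * CONFIG_FS_DAX_LIMITED (PFN_SPECIAL, no pmd_devmap()) mapping may
	 * not provide.
	 */
	static void dax_clean_pfn(pfn_t pfn, unsigned long nr_pages,
				  pgoff_t pgoff, struct vm_area_struct *vma)
	{
		if (pfn_t_has_page(pfn)) {
			/*
			 * A struct page exists, so the existing rmap walk
			 * (page_mkclean(), which calls page_mkclean_one()
			 * for each mapping) could clean and write protect
			 * every user of this page.
			 */
			page_mkclean(pfn_t_to_page(pfn));
		} else {
			/*
			 * No struct page: the CONFIG_FS_DAX_LIMITED case I
			 * am asking about. Only a pfn-based walk such as the
			 * pfn_mkclean_range() added by this series (assumed
			 * signature) can write protect and clean the ptes of
			 * this vma.
			 */
			pfn_mkclean_range(pfn_t_to_pfn(pfn), nr_pages,
					  pgoff, vma);
		}
	}

If struct pages are guaranteed to exist whenever CONFIG_FS_DAX_LIMITED is
set, then the first branch alone would be enough, which is how I read your
suggestion.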