On Thu, Sep 08, 2022 at 11:48:09PM +0300, Sergei Antonov wrote:
> Running this test program on ARMv4 a few times (sometimes just once)
> reproduces the bug.
>
> int main()
> {
>         unsigned i;
>         char paragon[SIZE];
>         void* ptr;
>
>         memset(paragon, 0xAA, SIZE);
>         ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
>                    MAP_ANON | MAP_SHARED, -1, 0);
>         if (ptr == MAP_FAILED)
>                 return 1;
>         printf("ptr = %p\n", ptr);
>         for (i = 0; i < 10000; i++) {
>                 memset(ptr, 0xAA, SIZE);
>                 if (memcmp(ptr, paragon, SIZE)) {
>                         printf("Unexpected bytes on iteration %u!!!\n", i);
>                         break;
>                 }
>         }
>         munmap(ptr, SIZE);
> }
>
> In the "ptr" buffer there appear runs of zero bytes which are aligned
> to 16 and whose lengths are a multiple of 16.
>
> Linux v5.11 does not have the bug; "git bisect" finds the first bad commit:
> f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
>
> Before that commit, update_mmu_cache() was called during a call to
> filemap_map_pages() as well as in finish_fault(). After the commit,
> finish_fault() lacks it.
>
> Bring back update_mmu_cache() to finish_fault() to fix the bug.
> Also call update_mmu_tlb() only when returning VM_FAULT_NOPAGE, to more
> closely reproduce the code of the alloc_set_pte() function that existed
> before the commit.
>
> On many platforms update_mmu_cache() is a no-op:
>   x86, see arch/x86/include/asm/pgtable
>   ARMv6+, see arch/arm/include/asm/tlbflush.h
> So, it seems, few users ran into this bug.
>
> Fixes: f9ce0be71d1f ("mm: Cleanup faultaround and finish_fault() codepaths")
> Signed-off-by: Sergei Antonov <saproj@xxxxxxxxx>
> Cc: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> ---
>  mm/memory.c | 14 ++++++++++----
>  1 file changed, 10 insertions(+), 4 deletions(-)
>

<formletter>

This is not the correct way to submit patches for inclusion in the
stable kernel tree. Please read:
    https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html
for how to do this properly.

</formletter>
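[Editorial note: for readers without the diff at hand, the fix described in the
commit message has roughly the following shape inside finish_fault() in
mm/memory.c. This is a sketch reconstructed from the description above, not the
exact applied hunk; the vmf_pte_changed()/do_set_pte() structure follows
kernels of that era.]

```
	/* In finish_fault(), with the PTE lock held: */
	if (likely(!vmf_pte_changed(vmf))) {
		do_set_pte(vmf, page, vmf->address);

		/* Re-added: tell the arch about the new, valid PTE */
		update_mmu_cache(vma, vmf->address, vmf->pte);

		ret = 0;
	} else {
		/* Only when the PTE changed under us: flush the stale entry */
		update_mmu_tlb(vma, vmf->address, vmf->pte);
		ret = VM_FAULT_NOPAGE;
	}
```

On x86 and ARMv6+ update_mmu_cache() expands to a no-op, which is why, as the
commit message notes, the missing call went unnoticed on most hardware.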