On 13/12/2023 07:21, Dan Carpenter wrote:
> On Thu, Dec 07, 2023 at 04:12:05PM +0000, Ryan Roberts wrote:
>> @@ -4176,10 +4260,15 @@ static vm_fault_t do_anonymous_page(struct vm_fault *vmf)
>>  	/* Allocate our own private page. */
>>  	if (unlikely(anon_vma_prepare(vma)))
>>  		goto oom;
>> -	folio = vma_alloc_zeroed_movable_folio(vma, vmf->address);
>> +	folio = alloc_anon_folio(vmf);
>> +	if (IS_ERR(folio))
>> +		return 0;
>>  	if (!folio)
>>  		goto oom;
> 
> Returning zero is weird. I think it should be a vm_fault_t code.

It's the same pattern that the existing code a little further down this
function already implements:

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, addr, &vmf->ptl);
	if (!vmf->pte)
		goto release;

If we fail to map/lock the pte (due to a race), then we return 0 to allow
user space to rerun the faulting instruction and cause the fault to happen
again. The above code ends up calling "return ret;" and ret is 0.

> 
> This mixing of error pointers and NULL is going to cause problems.
> Normally when we have a mix of error pointers and NULL then the NULL is
> not an error but instead means that the feature has been deliberately
> turned off. I'm unable to figure out what the meaning is here.

There are 3 conditions that the function can return:

 - folio successfully allocated
 - folio failed to be allocated due to OOM
 - fault needs to be tried again due to losing race

Previously only the first 2 conditions were possible and they were
indicated by NULL/not-NULL. The new 3rd condition is only possible when
THP is compile-time enabled. So it keeps the logic simpler to keep the
NULL/not-NULL distinction for the first 2, and use the error code for the
final one.

There are IS_ERR() and IS_ERR_OR_NULL() variants so I assume a pattern
where you can have pointer, error or NULL is somewhat common already?

Thanks,
Ryan

> 
> It should return one or the other, or if it's a mix then add a giant
> comment explaining what they mean.
> 
> regards,
> dan carpenter
> 
>> 
>> +	nr_pages = folio_nr_pages(folio);
>> +	addr = ALIGN_DOWN(vmf->address, nr_pages * PAGE_SIZE);
>> +
>>  	if (mem_cgroup_charge(folio, vma->vm_mm, GFP_KERNEL))
>>  		goto oom_free_page;
>>  	folio_throttle_swaprate(folio, GFP_KERNEL);
> 
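
For illustration, a minimal sketch of the pointer / error-pointer / NULL
convention described above. This is hypothetical code, not the actual
alloc_anon_folio() from the patch; lost_pte_race() is a made-up placeholder
for the racy case, and only ERR_PTR()/IS_ERR() from <linux/err.h> and
vma_alloc_zeroed_movable_folio() are real kernel helpers:

	#include <linux/err.h>
	#include <linux/mm.h>

	/*
	 * Hypothetical allocator sketch: a valid pointer means success,
	 * NULL means the allocation failed, and an error pointer means
	 * "retry the fault".
	 */
	static struct folio *example_alloc(struct vm_fault *vmf)
	{
		/* lost_pte_race() is a made-up placeholder */
		if (lost_pte_race(vmf))
			return ERR_PTR(-EAGAIN);	/* caller returns 0; fault retried */

		/* may return NULL on allocation failure; caller takes the oom path */
		return vma_alloc_zeroed_movable_folio(vmf->vma, vmf->address);
	}

and the caller side distinguishes the three outcomes like this:

	folio = example_alloc(vmf);
	if (IS_ERR(folio))
		return 0;	/* let userspace re-execute the faulting instruction */
	if (!folio)
		goto oom;	/* genuine allocation failure */
	/* otherwise: folio allocated successfully */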