On Sun, Jul 16, 2023 at 6:58 PM Yin Fengwei <fengwei.yin@xxxxxxxxx> wrote:
>
> On 7/17/23 08:35, Yu Zhao wrote:
> > On Sun, Jul 16, 2023 at 6:00 PM Yin, Fengwei <fengwei.yin@xxxxxxxxx> wrote:
> >>
> >> On 7/15/2023 2:06 PM, Yu Zhao wrote:
> >>> There is a problem here that I didn't have the time to elaborate: we
> >>> can't mlock() a folio that is within the range but not fully mapped
> >>> because this folio can be on the deferred split queue. When the split
> >>> happens, those unmapped folios (not mapped by this vma but are mapped
> >>> into other vmas) will be stranded on the unevictable lru.
> >>
> >> This should be fine unless I missed something. During large folio split,
> >> the unmap_folio() will be migrate(anon)/unmap(file) folio. Folio will be
> >> munlocked in unmap_folio(). So the head/tail pages will be evictable always.
> >
> > It's close but not entirely accurate: munlock can fail on isolated folios.
> Yes. The munlock just clear PG_mlocked bit but with PG_unevictable left.
>
> Could this also happen against normal 4K page? I mean when user try to munlock
> a normal 4K page and this 4K page is isolated. So it become unevictable page?

Looks like it can be possible. If cpu 1 is in __munlock_folio() and cpu 2
is isolating the folio for any purpose:

cpu1                                    cpu2
                                        isolate folio
folio_test_clear_lru() // 0
                                        putback folio // add to unevictable list
folio_test_clear_mlocked()

The page would be stranded on the unevictable list in this case, no?

Maybe we should only try to isolate the page (clear PG_lru) after we
possibly clear PG_mlocked? In this case, if we fail to isolate, we know
for sure that whoever has the page isolated will observe that PG_mlocked
is clear and correctly make the page evictable.

This probably would be complicated with the current implementation, as
we first need to decrement mlock_count to determine if we want to clear
PG_mlocked, and to do so we need to isolate the page as mlock_count
overlays page->lru. With the proposal in [1] to rework mlock_count, it
might be much simpler as far as I can tell. I intend to refresh this
proposal soon-ish.

[1] https://lore.kernel.org/lkml/20230618065719.1363271-1-yosryahmed@xxxxxxxxxx/

>
> Regards
> Yin, Fengwei
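
To make the reordering suggested above concrete, here is a minimal,
hypothetical munlock path that clears PG_mlocked before attempting
isolation. This is not the current __munlock_folio() in mm/mlock.c; it
assumes the mlock_count rework in [1] (mlock_count no longer overlaying
folio->lru), and munlock_drops_last_mlock() is a made-up helper standing
in for that accounting. Locking, NR_MLOCK/vm-event accounting, and the
actual lruvec list move are elided:

/* Hypothetical sketch only -- not the current mm/mlock.c code. */
static void munlock_folio_sketch(struct folio *folio)
{
	/*
	 * munlock_drops_last_mlock() is a made-up helper for the reworked
	 * mlock_count in [1]: it returns true when this munlock drops the
	 * last mlock on the folio, without needing to isolate it first.
	 */
	if (!munlock_drops_last_mlock(folio))
		return;

	/*
	 * Clear PG_mlocked *before* trying to isolate.  A racing task that
	 * already isolated the folio will then see !PG_mlocked when it puts
	 * the folio back and will place it on an evictable LRU list.
	 * (NR_MLOCK and vm event accounting elided.)
	 */
	folio_test_clear_mlocked(folio);

	/* Try to isolate; if this fails, whoever holds the folio isolated
	 * is expected to do the right thing on putback, so just bail out. */
	if (!folio_test_clear_lru(folio))
		return;

	/*
	 * We isolated the folio ourselves: if it sits on the unevictable
	 * list but is now evictable, rescue it.  The lruvec lock and the
	 * lruvec_del_folio()/lruvec_add_folio() move are elided here.
	 */
	if (folio_test_unevictable(folio) && folio_evictable(folio))
		folio_clear_unevictable(folio);

	folio_set_lru(folio);
}

With this ordering, a failed isolation is safe per the reasoning above:
the task that already holds the folio isolated will observe PG_mlocked
cleared when it puts the folio back, so the folio does not end up
stranded on the unevictable list.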