Quoting Kuo-Hsin Yang (2018-10-31 08:19:45)
> The i915 driver uses shmemfs to allocate backing storage for gem
> objects. These shmemfs pages can be pinned (increased ref count) by
> shmem_read_mapping_page_gfp(). When a lot of pages are pinned, vmscan
> wastes a lot of time scanning these pinned pages. In some extreme
> cases, all pages in the inactive anon lru are pinned, and because only
> the inactive anon lru is scanned due to inactive_ratio, the system
> cannot swap and invokes the oom-killer. Mark these pinned pages as
> unevictable to speed up vmscan.
>
> Add check_move_lru_page() to move a page to the appropriate lru list.
>
> This patch was inspired by Chris Wilson's change [1].
>
> [1]: https://patchwork.kernel.org/patch/9768741/
>
> Cc: Chris Wilson <chris@xxxxxxxxxxxxxxxxxx>
> Cc: Michal Hocko <mhocko@xxxxxxxx>
> Cc: Joonas Lahtinen <joonas.lahtinen@xxxxxxxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Signed-off-by: Kuo-Hsin Yang <vovoy@xxxxxxxxxxxx>
> ---
> The previous mapping_set_unevictable patch is worse on gem_syslatency
> because it defers to vmscan to move these pages to the unevictable
> list, and the test measures the latency to allocate 2MiB pages. This
> performance impact can be solved by explicitly moving pages to the
> unevictable list in the i915 code.
>
> Chris, can you help run the "igt/benchmarks/gem_syslatency -t 120 -b -m"
> test with this patch on your testing machine? I tried to run the test
> on a Celeron N4000, 4GB RAM machine. The mean value with this patch is
> similar to that with the mlock patch.

Will do. As you are confident, I'll try a few different machines. :)
-Chris
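
For readers following along, here is a minimal sketch of the approach
being discussed, not the actual patch: mark the shmemfs mapping
unevictable so that newly faulted pages land directly on the
unevictable lru, then explicitly move the already-populated pages there
rather than deferring to vmscan. The function name below is
hypothetical; mapping_set_unevictable() and
shmem_read_mapping_page_gfp() are existing kernel APIs, while
check_move_lru_page() is the helper the patch adds, assumed here to
take a single struct page.

/* Illustrative sketch only; the name and the helper's signature are assumptions. */
static void i915_gem_pin_pages_unevictable(struct drm_i915_gem_object *obj)
{
	struct address_space *mapping = obj->base.filp->f_mapping;
	pgoff_t n, count = obj->base.size >> PAGE_SHIFT;

	/* Future faults on this mapping go straight to the unevictable lru. */
	mapping_set_unevictable(mapping);

	for (n = 0; n < count; n++) {
		struct page *page;

		page = shmem_read_mapping_page_gfp(mapping, n, GFP_KERNEL);
		if (IS_ERR(page))
			continue;

		/*
		 * Pages already on an (in)active lru must be moved
		 * explicitly; otherwise vmscan keeps rescanning them
		 * until it notices the mapping flag, which is exactly
		 * the latency the cover letter describes.
		 */
		check_move_lru_page(page);	/* helper added by the patch */
		put_page(page);
	}
}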