On Thu 18-10-18 07:56:45, Chris Wilson wrote:
> Quoting Chris Wilson (2018-10-16 19:31:06)
> > Fwiw, the shmem_unlock_mapping() call feels quite expensive, almost
> > nullifying the advantage gained from not walking the lists in reclaim.
> > I'll have better numbers in a couple of days.
> 
> Using a test ("igt/benchmarks/gem_syslatency -t 120 -b -m" on kbl)
> consisting of cycletest with a background load of trying to allocate +
> populate 2MiB (to hit thp) while catting all files to /dev/null, the
> result of using mapping_set_unevictable is mixed.

I haven't read through your report completely yet, but I wanted to point
out that the above test scenario is unlikely to show the real effect of
the LRU scanning overhead, because shmem pages live on the anonymous LRU
list. With plenty of file page cache available we do not even scan the
anonymous LRU lists. You would have to generate a swapout workload to
test this properly.

On the other hand, if mapping_set_unevictable really has a measurably bad
performance impact, then it is probably not worth much, because most
workloads are swap modest.

-- 
Michal Hocko
SUSE Labs
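
[Editor's note: as a reference for readers, below is a minimal sketch of
the mechanism under discussion. It is illustrative only, not the actual
i915 patch; the example_* wrapper names are made up. Marking a shmem
mapping unevictable keeps its pages off the LRU scan, and the
shmem_unlock_mapping() walk needed to undo that is the expensive part
Chris measured.]

	#include <linux/pagemap.h>
	#include <linux/shmem_fs.h>

	/*
	 * Flag the mapping so its pages are kept off the evictable LRU
	 * lists and therefore skipped by reclaim's LRU scanning.
	 */
	static void example_pin_unevictable(struct address_space *mapping)
	{
		mapping_set_unevictable(mapping);
	}

	/*
	 * Undo it: clear the flag, then walk the mapping to move its
	 * pages back onto the evictable LRU lists. This walk is the
	 * cost attributed to shmem_unlock_mapping() above.
	 */
	static void example_unpin_unevictable(struct address_space *mapping)
	{
		mapping_clear_unevictable(mapping);
		shmem_unlock_mapping(mapping);
	}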