On 16.01.23 03:25, Raghavendra K T wrote:
During NUMA scanning, make sure only relevant VMAs of the tasks are scanned.

Logic:
1) For the first two times, allow unconditional scanning of VMAs
2) Store the 4 most recent unique tasks (last 8 bits of their PIDs) that
   accessed the VMA. False negatives in case of collision should be fine here.
3) If more than 4 PIDs exist, assume the task indeed accessed the VMA, to
   avoid false negatives

Co-developed-by: Bharata B Rao <bharata@xxxxxxx>
(initial patch to store pid information)
Suggested-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Bharata B Rao <bharata@xxxxxxx>
Signed-off-by: Raghavendra K T <raghavendra.kt@xxxxxxx>
---
 include/linux/mm_types.h |  2 ++
 kernel/sched/fair.c      | 32 ++++++++++++++++++++++++++++++++
 mm/memory.c              | 21 +++++++++++++++++++++
 3 files changed, 55 insertions(+)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 500e536796ca..07feae37b8e6 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -506,6 +506,8 @@ struct vm_area_struct {
 	struct mempolicy *vm_policy;	/* NUMA policy for the VMA */
 #endif
 	struct vm_userfaultfd_ctx vm_userfaultfd_ctx;
+	unsigned int accessing_pids;
+	int next_pid_slot;
 } __randomize_layout;
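The tracking scheme the commit message describes can be modelled in standalone C. This is a sketch, not the kernel code: `struct vma_pids`, `record_pid_access`, and `pid_recently_accessed` are hypothetical names, and only the two fields added by this hunk are modelled. It packs the low 8 bits of up to 4 recent PIDs into one `unsigned int`, one byte per slot, overwritten round-robin:

```c
#include <assert.h>

/* Hypothetical standalone model of the patch's per-VMA PID tracking:
 * 4 slots of 8 bits each, packed into one unsigned int. */
struct vma_pids {
	unsigned int accessing_pids;	/* 4 slots x 8 bits */
	int next_pid_slot;		/* next slot to overwrite */
};

static void record_pid_access(struct vma_pids *v, int pid)
{
	unsigned int hash = pid & 0xff;		/* last 8 bits of the PID */
	int shift = v->next_pid_slot * 8;

	v->accessing_pids &= ~(0xffu << shift);	/* clear the slot */
	v->accessing_pids |= hash << shift;	/* store the 8-bit hash */
	v->next_pid_slot = (v->next_pid_slot + 1) % 4;
}

static int pid_recently_accessed(const struct vma_pids *v, int pid)
{
	unsigned int hash = pid & 0xff;
	int slot;

	/* Two PIDs sharing a low byte collide, so a match is only a hint;
	 * a zeroed (never-filled) slot likewise matches PIDs ending in 0x00. */
	for (slot = 0; slot < 4; slot++)
		if (((v->accessing_pids >> (slot * 8)) & 0xff) == hash)
			return 1;
	return 0;
}
```

Because only 8 bits per task are kept, distinct PIDs with the same low byte are indistinguishable, which is the collision trade-off the commit message accepts.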
What immediately jumps out at me is the unconditional growth of every VMA by 8 bytes. A process with 64k mappings consumes 512 KiB more memory, possibly completely unnecessarily.
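Compiling the fields out when NUMA balancing is not configured avoids that unconditional cost. A minimal sketch, assuming a stand-in `struct vma_model` (hypothetical name, with a single pointer standing in for the real struct's preceding fields) rather than the real `vm_area_struct`:

```c
#include <assert.h>
#include <stddef.h>

/* Uncomment to model a kernel built with NUMA balancing enabled: */
/* #define CONFIG_NUMA_BALANCING */

struct vma_model {
	void *vm_userfaultfd_ctx;	/* stand-in for the existing fields */
#ifdef CONFIG_NUMA_BALANCING
	/* Only present when the scanning code that uses them exists. */
	unsigned int accessing_pids;
	int next_pid_slot;
#endif
};
```

With the fields fenced out, a kernel built without CONFIG_NUMA_BALANCING pays nothing, and the 512 KiB overhead for 64k mappings applies only to configurations that can actually use the data.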
This at least needs to be fenced by CONFIG_NUMA_BALANCING.

-- 
Thanks,

David / dhildenb