On 10/11/2012 08:28 AM, Mel Gorman wrote:
>> +	/* link for knuma_scand's list of mm structures to scan */
>> +	struct list_head mm_node;
>> +	/* Pointer to associated mm structure */
>> +	struct mm_struct *mm;
>> +
>> +	/*
>> +	 * Zeroed from here during allocation, check
>> +	 * mm_autonuma_reset() if you alter the below.
>> +	 */
>> +
>> +	/*
>> +	 * Pass counter for this mm. This exists only to be able to
>> +	 * tell when it's time to apply the exponential backoff on the
>> +	 * task_autonuma statistics.
>> +	 */
>> +	unsigned long mm_numa_fault_pass;
>> +	/* Total number of pages that will trigger NUMA faults for this mm */
>> +	unsigned long mm_numa_fault_tot;
>> +	/* Number of pages that will trigger NUMA faults for each [nid] */
>> +	unsigned long mm_numa_fault[0];
>> +	/* do not add more variables here, the above array size is dynamic */
>> +};
> How cache hot is this structure? Nodes are sharing counters in the same
> cache lines so if updates are frequent this will bounce like a mad yoke.
> Profiles will tell for sure but it's possible that some sort of per-cpu
> hilarity will be necessary here in the future.
These statistics are updated at page fault time, I
believe while holding the page table lock.
In other words, they are in code paths where updating
the stats should not cause issues.
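
As a rough illustration (not taken from the patch; only the fields come
from the structure quoted above, the function name is made up), the
accounting in the NUMA hinting fault path would look something like this:

/*
 * Illustrative sketch only, not from the patch.  The caller is
 * assumed to hold the page table lock, as described above, so these
 * are plain increments.  Note that the per-nid slots are adjacent
 * unsigned longs, so faults accounted to different nodes still dirty
 * the same cache line, which is the concern raised above.
 */
static void mm_numa_fault_account(struct mm_autonuma *mm_autonuma, int nid)
{
	mm_autonuma->mm_numa_fault[nid]++;
	mm_autonuma->mm_numa_fault_tot++;
}
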
>> +/*
>> + * Per-task (thread) structure that contains the NUMA memory placement
>> + * statistics generated by the knuma scan daemon. This structure is
>> + * dynamically allocated only if AutoNUMA is possible on this
>> + * system. These structures are linked together in a list headed within
>> + * the knumad_scan structure.
>> + */
>> +struct task_autonuma {
>> +	unsigned long task_numa_fault[0];
>> +	/* do not add more variables here, the above array size is dynamic */
>> +};
>> +
> Same question about cache hotness.
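
If profiles do show these counters bouncing, the per-cpu split hinted
at above could look roughly like the sketch below. This is purely
illustrative and not part of the patch: all names are made up, and the
fixed-size array stands in for the dynamically sized arrays in the
quoted structures. The idea would be for the fault path to touch only
the local CPU's copy, with knuma_scand folding the deltas into the
mm-wide array when it scans the mm.

#include <linux/numa.h>
#include <linux/percpu.h>

/* Hypothetical per-cpu deltas, sized statically here for simplicity. */
struct mm_autonuma_pcpu {
	unsigned long numa_fault[MAX_NUMNODES];
};

/* Fault path: no cache line shared with other CPUs is written. */
static inline void mm_numa_fault_account_pcpu(struct mm_autonuma_pcpu __percpu *pcpu,
					      int nid)
{
	this_cpu_inc(pcpu->numa_fault[nid]);
}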