On 06/28/2012 08:55 AM, Andrea Arcangeli wrote:
> On 64bit archs, 20 bytes are used for async memory migration (specific
> to the knuma_migrated per-node threads), and 4 bytes are used for the
> thread NUMA false sharing detection logic. This is a bad implementation
> due to lack of time to do a proper one.
It is not ideal, no. If you document what everything does, maybe somebody else will understand the code well enough to help fix it.
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -136,6 +136,32 @@ struct page {
>  					struct page *first_page;	/* Compound tail pages */
>  	};
>
> +#ifdef CONFIG_AUTONUMA
> +	/*
> +	 * FIXME: move to pgdat section along with the memcg and allocate
> +	 * at runtime only in presence of a numa system.
> +	 */
Once you fix it, could you fold the fix into this patch?
> +	/*
> +	 * To modify autonuma_last_nid lockless the architecture
> +	 * needs SMP atomic granularity < sizeof(long), not all archs
> +	 * have that, notably some ancient alpha (but none of those
> +	 * should run in NUMA systems). Archs without that require
> +	 * autonuma_last_nid to be a long.
> +	 */
> +#if BITS_PER_LONG > 32
> +	int autonuma_migrate_nid;
> +	int autonuma_last_nid;
> +#else
> +#if MAX_NUMNODES >= 32768
> +#error "too many nodes"
> +#endif
> +	/* FIXME: remember to check the updates are atomic */
> +	short autonuma_migrate_nid;
> +	short autonuma_last_nid;
> +#endif
> +	struct list_head autonuma_migrate_node;
> +#endif
Please document what these fields mean.

-- 
All rights reversed