On Wed, Jul 09, 2014 at 09:13:04AM +0100, Mel Gorman wrote:
> The arrangement of struct zone has changed over time and now it has reached the
> point where there is some inappropriate sharing going on. On x86-64, for example:
>
> o The zone->node field shares a cache line with the zone lock, and zone->node is
>   accessed frequently from the page allocator due to the fair zone allocation
>   policy.
> o span_seqlock is almost never used but shares a cache line with free_area.
> o Some zone statistics share a cache line with the LRU lock, so reclaim-intensive
>   and allocator-intensive workloads can bounce the cache line on a stat update.
>
> This patch rearranges struct zone to put read-only and read-mostly fields
> together and then splits the page-allocator-intensive fields, the zone
> statistics and the page-reclaim-intensive fields into their own cache
> lines. Note that the type of lowmem_reserve changes because the watermark
> calculations are signed, which avoids a signed/unsigned conversion there.
>
> On the test configuration I used, the overall size of struct zone shrank
> by one cache line. On smaller machines this is not likely to be noticeable.
> However, on a 4-node NUMA machine running tiobench, the system CPU overhead
> is reduced by this patch.
>
>                     3.16.0-rc3      3.16.0-rc3
>                        vanilla  rearrange-v5r9
> User                    746.94          759.78
> System                65336.22        58350.98
> Elapsed               27553.52        27282.02
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxx>

Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>