Currently pageblock_flags is only 4 bits, so a pageblock has to share a
char with its neighbour when the flags are get/set with cmpxchg, and the
resulting false sharing causes a perf drop. If we increase the bits to 8,
the false sharing in cmpxchg is gone. The only cost is half a char per
pageblock, which is half a char per 128MB on x86, 4 chars in 1 GB.

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxxxxxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: linux-kernel@xxxxxxxxxxxxxxx
Cc: linux-mm@xxxxxxxxx
---
 include/linux/pageblock-flags.h | 2 +-
 mm/page_alloc.c                 | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/pageblock-flags.h b/include/linux/pageblock-flags.h
index d189441568eb..f785c9d6d68c 100644
--- a/include/linux/pageblock-flags.h
+++ b/include/linux/pageblock-flags.h
@@ -25,7 +25,7 @@ enum pageblock_bits {
 	 * Assume the bits will always align on a word. If this assumption
 	 * changes then get/set pageblock needs updating.
 	 */
-	NR_PAGEBLOCK_BITS
+	NR_PAGEBLOCK_BITS = BITS_PER_BYTE
 };
 
 #ifdef CONFIG_HUGETLB_PAGE
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 81e96d4d9c42..60342e764090 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -517,7 +517,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 	unsigned long bitidx, byte_bitidx;
 	unsigned char old_byte, byte;
 
-	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != 4);
+	BUILD_BUG_ON(NR_PAGEBLOCK_BITS != BITS_PER_BYTE);
 	BUILD_BUG_ON(MIGRATE_TYPES > (1 << PB_migratetype_bits));
 
 	bitmap = get_pageblock_bitmap(page, pfn);
--
1.8.3.1
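
For illustration, a standalone userspace sketch of the flag layout and the
byte-wide cmpxchg-style update. It is not part of the patch; show_layout()
and set_pb_flags() are made-up names, and C11 atomics stand in for the
kernel's cmpxchg():

/*
 * Sketch only: shows why 4-bit flags make two neighbouring pageblocks
 * share one byte, so their read-modify-write loops contend, while 8-bit
 * flags give each pageblock its own byte.
 */
#include <stdatomic.h>
#include <stdio.h>

#define BITS_PER_BYTE	8

/* Which byte and bit offset hold the flags of a given pageblock? */
static void show_layout(unsigned int nr_pageblock_bits)
{
	unsigned long pb;

	printf("NR_PAGEBLOCK_BITS = %u\n", nr_pageblock_bits);
	for (pb = 0; pb < 4; pb++) {
		unsigned long bitidx = pb * nr_pageblock_bits;

		printf("  pageblock %lu -> byte %lu, bit offset %lu\n",
		       pb, bitidx / BITS_PER_BYTE, bitidx % BITS_PER_BYTE);
	}
}

/* Byte-wide read-modify-write loop in the style of set_pfnblock_flags_mask(). */
static void set_pb_flags(_Atomic unsigned char *byte_p, unsigned long offset,
			 unsigned char flags, unsigned char mask)
{
	unsigned char old_byte, byte;

	old_byte = atomic_load(byte_p);
	do {
		byte = (old_byte & ~(mask << offset)) | (flags << offset);
		/*
		 * With 4-bit flags, the neighbouring pageblock's flags live
		 * in this same byte, so a concurrent update there can fail
		 * this compare-and-swap; with 8-bit flags it never can.
		 */
	} while (!atomic_compare_exchange_weak(byte_p, &old_byte, byte));
}

int main(void)
{
	_Atomic unsigned char bitmap[2] = { 0, 0 };

	show_layout(4);			/* pageblocks 0 and 1 share byte 0 */
	show_layout(BITS_PER_BYTE);	/* every pageblock owns a byte */

	/* Set 4-bit flags of pageblocks 0 and 1: both target bitmap[0]. */
	set_pb_flags(&bitmap[0], 0, 0x3, 0xf);
	set_pb_flags(&bitmap[0], 4, 0x2, 0xf);
	printf("byte 0 after two 4-bit updates: 0x%02x\n",
	       (unsigned)atomic_load(&bitmap[0]));
	return 0;
}

With 4 bits, pageblocks 0 and 1 both land in byte 0, so their update loops
can retry against each other; with 8 bits every pageblock gets its own byte
and the byte-wide compare-and-swap only ever races with updates to the same
pageblock.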