On 07.09.19 19:25, Alexander Duyck wrote:
> From: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
>
> Change the logic used to generate randomness in the shuffle path so that we
> can avoid cache line bouncing. The previous logic was sharing the offset
> and entropy word between all CPUs. As such this can result in cache line
> bouncing and will ultimately hurt performance when enabled.

So, usually we perform such changes if there is real evidence. Do you
have any such performance numbers to back your claims?

>
> To resolve this I have moved to a per-cpu logic for maintaining an unsigned
> long containing some amount of bits, and an offset value for which bit we
> can use for entropy with each call.
>
> Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>
> Signed-off-by: Alexander Duyck <alexander.h.duyck@xxxxxxxxxxxxxxx>
> ---
>  mm/shuffle.c |   33 +++++++++++++++++++++++----------
>  1 file changed, 23 insertions(+), 10 deletions(-)
>
> diff --git a/mm/shuffle.c b/mm/shuffle.c
> index 3ce12481b1dc..9ba542ecf335 100644
> --- a/mm/shuffle.c
> +++ b/mm/shuffle.c
> @@ -183,25 +183,38 @@ void __meminit __shuffle_free_memory(pg_data_t *pgdat)
>  		shuffle_zone(z);
>  }
>
> +struct batched_bit_entropy {
> +	unsigned long entropy_bool;
> +	int position;
> +};
> +
> +static DEFINE_PER_CPU(struct batched_bit_entropy, batched_entropy_bool);
> +
>  void add_to_free_area_random(struct page *page, struct free_area *area,
>  		int migratetype)
>  {
> -	static u64 rand;
> -	static u8 rand_bits;
> +	struct batched_bit_entropy *batch;
> +	unsigned long entropy;
> +	int position;
>
>  	/*
> -	 * The lack of locking is deliberate. If 2 threads race to
> -	 * update the rand state it just adds to the entropy.
> +	 * We shouldn't need to disable IRQs as the only caller is
> +	 * __free_one_page and it should only be called with the zone lock
> +	 * held and either from IRQ context or with local IRQs disabled.
>  	 */
> -	if (rand_bits == 0) {
> -		rand_bits = 64;
> -		rand = get_random_u64();
> +	batch = raw_cpu_ptr(&batched_entropy_bool);
> +	position = batch->position;
> +
> +	if (--position < 0) {
> +		batch->entropy_bool = get_random_long();
> +		position = BITS_PER_LONG - 1;
>  	}
>
> -	if (rand & 1)
> +	batch->position = position;
> +	entropy = batch->entropy_bool;
> +
> +	if (1ul & (entropy >> position))
>  		add_to_free_area(page, area, migratetype);
>  	else
>  		add_to_free_area_tail(page, area, migratetype);
> -	rand_bits--;
> -	rand >>= 1;
>  }
>

--
Thanks,

David / dhildenb