On Sun, Nov 22, 2020 at 03:35:46PM +0000, Pavel Begunkov wrote:
> map->swap_lock protects map->cleared from concurrent modification,
> however sbitmap_deferred_clear() already atomically drains it, so
> it's guaranteed not to lose bits on concurrent
> sbitmap_deferred_clear().
>
> A one-threaded, tag-heavy test on top of nullblk showed ~1.5% t-put
> increase, and 3% -> 1% cycle reduction of sbitmap_get() according to perf.
>
> Signed-off-by: Pavel Begunkov <asml.silence@xxxxxxxxx>
> ---
>  include/linux/sbitmap.h |  5 -----
>  lib/sbitmap.c           | 14 +++-----------
>  2 files changed, 3 insertions(+), 16 deletions(-)
>
> diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
> index e40d019c3d9d..74cc6384715e 100644
> --- a/include/linux/sbitmap.h
> +++ b/include/linux/sbitmap.h
> @@ -32,11 +32,6 @@ struct sbitmap_word {
>  	 * @cleared: word holding cleared bits
>  	 */
>  	unsigned long cleared ____cacheline_aligned_in_smp;
> -
> -	/**
> -	 * @swap_lock: Held while swapping word <-> cleared
> -	 */
> -	spinlock_t swap_lock;
>  } ____cacheline_aligned_in_smp;
>
>  /**
> diff --git a/lib/sbitmap.c b/lib/sbitmap.c
> index c1c8a4e69325..4fd877048ba8 100644
> --- a/lib/sbitmap.c
> +++ b/lib/sbitmap.c
> @@ -15,13 +15,9 @@
>  static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
>  {
>  	unsigned long mask, val;
> -	bool ret = false;
> -	unsigned long flags;
>
> -	spin_lock_irqsave(&map->swap_lock, flags);
> -
> -	if (!map->cleared)
> -		goto out_unlock;
> +	if (!READ_ONCE(map->cleared))
> +		return false;

This way might break sbitmap_find_bit_in_index()/sbitmap_get_shallow().

Currently, if sbitmap_deferred_clear() returns false, it means nothing
can be allocated from this word. With this patch, even though 'false' is
returned, free bits may still be available, because another
sbitmap_deferred_clear() can be running concurrently.

Thanks,
Ming