Re: [PATCH v2] sbitmap: fix io hung due to race on sbitmap_word::cleared

On 2024/6/6 11:12, Ming Lei wrote:
On Tue, Jun 04, 2024 at 02:12:22PM +0800, Yu Kuai wrote:
Hi,

On 2024/06/04 11:25, Ming Lei wrote:
On Tue, Jun 4, 2024 at 11:12 AM Yang Yang <yang.yang@xxxxxxxx> wrote:

Configuration for sbq:
    depth=64, wake_batch=6, shift=6, map_nr=1

1. There are 64 requests in progress:
    map->word = 0xFFFFFFFFFFFFFFFF
2. After all the 64 requests complete, and no more requests come:
    map->word = 0xFFFFFFFFFFFFFFFF, map->cleared = 0xFFFFFFFFFFFFFFFF
3. Now two tasks try to allocate requests:
    T1:                                       T2:
    __blk_mq_get_tag                          .
    __sbitmap_queue_get                       .
    sbitmap_get                               .
    sbitmap_find_bit                          .
    sbitmap_find_bit_in_word                  .
    __sbitmap_get_word  -> nr=-1              __blk_mq_get_tag
    sbitmap_deferred_clear                    __sbitmap_queue_get
    /* map->cleared=0xFFFFFFFFFFFFFFFF */     sbitmap_find_bit
      if (!READ_ONCE(map->cleared))           sbitmap_find_bit_in_word
        return false;                         __sbitmap_get_word -> nr=-1
      mask = xchg(&map->cleared, 0)           sbitmap_deferred_clear
      atomic_long_andnot()                    /* map->cleared=0 */
                                                if (!(map->cleared))
                                                  return false;
                                       /*
                                        * map->cleared is cleared by T1
                                        * T2 fail to acquire the tag
                                        */

4. T2 is the sole tag waiter. When T1 puts its tag back, T2 cannot be woken
up because wake_batch is set to 6. If no more requests come in, T2
will wait here indefinitely.
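For reference, the lockless sbitmap_deferred_clear() that both tasks run in the
trace above looks roughly like this (a sketch based on lib/sbitmap.c before the
fix; details may differ slightly across kernel versions):

static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
{
	unsigned long mask;

	/* T2 sees ->cleared == 0 here after T1's xchg() below and gives up */
	if (!READ_ONCE(map->cleared))
		return false;

	/* First get a stable cleared mask, setting the old mask to 0 */
	mask = xchg(&map->cleared, 0);

	/* Now clear the masked bits in our free word */
	atomic_long_andnot(mask, (atomic_long_t *)&map->word);
	return true;
}

Nothing orders T1's atomic_long_andnot() against T2's earlier read of ->word in
__sbitmap_get_word(), so T2 can observe a full ->word and a zero ->cleared at
the same time and give up even though all tags are about to become free.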

To fix this issue, simply revert commit 661d4f55a794 ("sbitmap:
remove swap_lock"), which introduced it.
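
For context, the revert restores roughly the following shape, where the
->cleared check and the ->cleared/->word updates all sit under ->swap_lock
(a sketch of the pre-661d4f55a794 code from memory; exact details may differ):

static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
{
	unsigned long mask;
	bool ret = false;
	unsigned long flags;

	/* Serialize the ->cleared check against the ->cleared/->word updates */
	spin_lock_irqsave(&map->swap_lock, flags);
	if (!map->cleared)
		goto out_unlock;

	/* First get a stable cleared mask, setting the old mask to 0 */
	mask = xchg(&map->cleared, 0);

	/* Now clear the masked bits in our free word */
	atomic_long_andnot(mask, (atomic_long_t *)&map->word);
	ret = true;
out_unlock:
	spin_unlock_irqrestore(&map->swap_lock, flags);
	return ret;
}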

I'd suggest adding the following wording to the commit log:

The check on ->cleared and the updates to both ->cleared and ->word need to be
done atomically, and using a spinlock could be the simplest solution.

Otherwise, the patch looks fine to me.

Maybe I'm a noob, but I'm confused about how this can fix the problem; it looks
like the race condition doesn't change.

In sbitmap_find_bit_in_word():

1) __sbitmap_get_word() reads ->word;
2) sbitmap_deferred_clear() clears ->cleared;
3) sbitmap_deferred_clear() updates ->word;

2) and 3) are done atomically, while 1) can still run concurrently with 3).
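
For reference, the loop in question looks roughly like this (a sketch of
sbitmap_find_bit_in_word() as in current lib/sbitmap.c, reproduced here for
illustration; details may vary):

static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
				    unsigned int depth,
				    unsigned int alloc_hint,
				    bool wrap)
{
	int nr;

	do {
		/* 1) read ->word and try to grab a free bit */
		nr = __sbitmap_get_word(&map->word, depth, alloc_hint, wrap);
		if (nr != -1)
			break;
		/* 2) + 3) fold ->cleared into ->word; retry if anything was cleared */
		if (!sbitmap_deferred_clear(map))
			break;
	} while (1);

	return nr;
}

So whether the allocation is retried after a failed 1) depends entirely on what
sbitmap_deferred_clear() returns.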

After 1) fails, sbitmap_deferred_clear() is called with the spinlock held,
so it is pretty easy to solve the race, such as with the following patch
on top of the revert.


diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index dee02a0266a6..c015ecd8e10e 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -63,13 +63,15 @@ static inline void update_alloc_hint_after_get(struct sbitmap *sb,
  static inline bool sbitmap_deferred_clear(struct sbitmap_word *map)
  {
  	unsigned long mask;
-	bool ret = false;
  	unsigned long flags;
+	bool ret;
  	spin_lock_irqsave(&map->swap_lock, flags);
-	if (!map->cleared)
+	if (!map->cleared) {
+		ret = !!map->word;

After atomic_long_andnot(mask, (atomic_long_t *)&map->word), map->word
may be 0 if all requests have completed, or non-zero if some requests are
still in flight. Therefore, using !!map->word to determine the
availability of free tags is inaccurate.

Thanks

  		goto out_unlock;
+	}
  	/*
  	 * First get a stable cleared mask, setting the old mask to 0.


Thanks,
Ming





