On 2022/04/08 22:31, Bart Van Assche wrote:
On 4/8/22 00:39, Yu Kuai wrote:
Always waking up 'wake_batch' threads will intensify competition, and
split IOs won't be issued continuously. Now that the number of tags
required by a huge IO is recorded, it is safe to wake up based on the
required tags.
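[Illustrative example, numbers not from the patch: with wake_batch = 8 and
the first two waiters on the queue requiring 5 and 3 tags, the new
get_wake_nr() below returns 2, so only two threads are woken instead of
eight.]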
Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
lib/sbitmap.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 8d01e02ea4b1..eac9fa5c2b4d 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -614,6 +614,26 @@ static inline void sbq_update_preemption(struct sbitmap_queue *sbq,
WRITE_ONCE(sbq->force_tag_preemption, force);
}
+static unsigned int get_wake_nr(struct sbq_wait_state *ws, unsigned int nr_tags)
Consider renaming "get_wake_nr()" to "nr_to_wake_up()".
+{
+ struct sbq_wait *wait;
+ struct wait_queue_entry *entry;
+ unsigned int nr = 1;
+
+ spin_lock_irq(&ws->wait.lock);
+ list_for_each_entry(entry, &ws->wait.head, entry) {
+ wait = container_of(entry, struct sbq_wait, wait);
+ if (nr_tags <= wait->nr_tags)
+ break;
+
+ nr++;
+ nr_tags -= wait->nr_tags;
+ }
+ spin_unlock_irq(&ws->wait.lock);
+
+ return nr;
+}
+
static bool __sbq_wake_up(struct sbitmap_queue *sbq)
{
struct sbq_wait_state *ws;
@@ -648,7 +668,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
smp_mb__before_atomic();
atomic_set(&ws->wait_cnt, wake_batch);
sbq_update_preemption(sbq, wake_batch);
- wake_up_nr(&ws->wait, wake_batch);
+ wake_up_nr(&ws->wait, get_wake_nr(ws, wake_batch));
return true;
}
ws->wait.lock is unlocked after the number of threads to wake up has
been computed and is locked again by wake_up_nr(). The ws->wait.head
list may be modified after get_wake_nr() returns and before wake_up_nr()
is called. Isn't that a race condition?
Hi,

That is a race condition. I was hoping that the problem fixed by patch 5
could cover this case as well; a lock-held variant is sketched below.
Thanks,
Kuai
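For illustration only, here is a minimal sketch (not part of the posted
series) of one way to close that window: count the waiters and issue the
wakeups while still holding ws->wait.lock, using __wake_up_locked(). It
assumes the per-waiter sbq_wait->nr_tags field introduced earlier in this
series; the helper name sbq_wake_required() is made up for the example.

#include <linux/sbitmap.h>
#include <linux/sched.h>
#include <linux/wait.h>

/*
 * Hypothetical helper (name invented for this sketch): walk the wait list,
 * work out how many waiters the freed batch of tags can satisfy, and wake
 * exactly that many while ws->wait.lock is still held, so the list cannot
 * change between counting and waking.
 */
static void sbq_wake_required(struct sbq_wait_state *ws, unsigned int nr_tags)
{
	struct sbq_wait *wait;
	struct wait_queue_entry *entry;
	unsigned int nr = 1;

	spin_lock_irq(&ws->wait.lock);
	list_for_each_entry(entry, &ws->wait.head, entry) {
		/* wait->nr_tags: per-waiter tag count added earlier in the series */
		wait = container_of(entry, struct sbq_wait, wait);
		if (nr_tags <= wait->nr_tags)
			break;

		nr++;
		nr_tags -= wait->nr_tags;
	}
	/* __wake_up_locked() requires the caller to hold the wait queue lock */
	__wake_up_locked(&ws->wait, TASK_NORMAL, nr);
	spin_unlock_irq(&ws->wait.lock);
}

In __sbq_wake_up() this would replace the separate get_wake_nr() +
wake_up_nr() calls with a single call, at the cost of holding ws->wait.lock
across the wakeups.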
Thanks,
Bart.