Re: [PATCH] lightnvm: pblk: sync RB and RL states during GC

On 05/24/2018 04:08 PM, Igor Konopko wrote:
During sequential workloads we can hit the case when almost all
the lines are fully written with data. In that case the rate
limiter significantly reduces the maximum number of requests
available for user IO.

Unfortunately, when the write buffer has been flushed to the drive
but its entries have not yet been freed (which is fine, since there
are still enough free entries in the write buffer for user IO),
user IO hangs because there are not enough free entries in the
rate limiter. The reason is that the rate limiter user entries are
only released when the write buffer entries are freed, which does
not happen as long as there is still plenty of space in the write
buffer.

This patch forces the write buffer entries to be freed by calling
pblk_rb_sync_l2p, and thus releases entries in the rate limiter,
whenever there are not enough of them for user IO.

Signed-off-by: Igor Konopko <igor.j.konopko@xxxxxxxxx>
Signed-off-by: Marcin Dziegielewski <marcin.dziegielewski@xxxxxxxxx>
---
  drivers/lightnvm/pblk-init.c | 2 ++
  drivers/lightnvm/pblk-rb.c   | 7 +++----
  2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 0f277744266b..e6aa7726f8ba 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -1149,7 +1149,9 @@ static void pblk_tear_down(struct pblk *pblk, bool graceful)
  		__pblk_pipeline_flush(pblk);
  	__pblk_pipeline_stop(pblk);
  	pblk_writer_stop(pblk);
+	spin_lock(&pblk->rwb.w_lock);
  	pblk_rb_sync_l2p(&pblk->rwb);
+	spin_unlock(&pblk->rwb.w_lock);
  	pblk_rl_free(&pblk->rl);

  	pr_debug("pblk: consistent tear down (graceful:%d)\n", graceful);
diff --git a/drivers/lightnvm/pblk-rb.c b/drivers/lightnvm/pblk-rb.c
index 1b74ec51a4ad..91824cd3e8d8 100644
--- a/drivers/lightnvm/pblk-rb.c
+++ b/drivers/lightnvm/pblk-rb.c
@@ -266,21 +266,18 @@ static int pblk_rb_update_l2p(struct pblk_rb *rb, unsigned int nr_entries,
   * Update the l2p entry for all sectors stored on the write buffer. This means
   * that all future lookups to the l2p table will point to a device address, not
   * to the cacheline in the write buffer.
+ * Caller must ensure that rb->w_lock is taken.
   */
  void pblk_rb_sync_l2p(struct pblk_rb *rb)
  {
  	unsigned int sync;
  	unsigned int to_update;

-	spin_lock(&rb->w_lock);
-
  	/* Protect from reads and writes */
  	sync = smp_load_acquire(&rb->sync);

  	to_update = pblk_rb_ring_count(sync, rb->l2p_update, rb->nr_entries);
  	__pblk_rb_update_l2p(rb, to_update);
-
-	spin_unlock(&rb->w_lock);
  }

  /*
@@ -462,6 +459,8 @@ int pblk_rb_may_write_user(struct pblk_rb *rb, struct bio *bio,
  	spin_lock(&rb->w_lock);
  	io_ret = pblk_rl_user_may_insert(&pblk->rl, nr_entries);
  	if (io_ret) {
+		/* Sync RB & L2P in order to update rate limiter values */
+		pblk_rb_sync_l2p(rb);
  		spin_unlock(&rb->w_lock);
  		return io_ret;
  	}


Thanks Igor. Note that I rearranged the description a bit to fit within 70 chars per line, and also changed the last paragraph to be more direct. Applied for 4.18.
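
For readers following along, here is a rough, self-contained user-space model of the interaction the commit message describes. It is not pblk code: the toy_* names, structures and fixed credit counts below are invented purely for illustration. It shows why user IO stalls when rate limiter credits are only returned on an L2P update, and how forcing the sync on a failed insert, as the patch now does in pblk_rb_may_write_user(), lets the next attempt go through.

/*
 * Rough user-space model of the write buffer (rb) / rate limiter (rl)
 * interaction described in the commit message. This is NOT pblk code:
 * the toy_* names, structures and credit accounting are simplified
 * stand-ins used only to illustrate the idea.
 */
#include <stdio.h>

struct toy_rb {
	int mem;		/* entries written by user IO */
	int sync;		/* entries already persisted on the device */
	int l2p_update;		/* entries whose L2P still points to cache */
};

struct toy_rl {
	int user_credits;	/* free rate limiter entries for user IO */
};

/* Credits only come back once L2P entries stop pointing to the cache. */
static void toy_rb_sync_l2p(struct toy_rb *rb, struct toy_rl *rl)
{
	rl->user_credits += rb->sync - rb->l2p_update;
	rb->l2p_update = rb->sync;
}

static int toy_rl_user_may_insert(struct toy_rl *rl, int nr_entries)
{
	return rl->user_credits >= nr_entries;
}

/* Mirrors the flow of pblk_rb_may_write_user() after the patch. */
static int toy_rb_may_write_user(struct toy_rb *rb, struct toy_rl *rl,
				 int nr_entries)
{
	if (!toy_rl_user_may_insert(rl, nr_entries)) {
		/* Force the L2P sync so already-flushed entries return
		 * their credits, then ask the caller to requeue. */
		toy_rb_sync_l2p(rb, rl);
		return 0;
	}
	rl->user_credits -= nr_entries;
	rb->mem += nr_entries;
	return 1;
}

int main(void)
{
	/* Everything is flushed (sync == mem) but no credits were ever
	 * returned, because nothing forced an L2P update yet. */
	struct toy_rb rb = { .mem = 8, .sync = 8, .l2p_update = 0 };
	struct toy_rl rl = { .user_credits = 0 };
	int i;

	for (i = 0; i < 2; i++)
		printf("attempt %d: %s (free credits: %d)\n", i,
		       toy_rb_may_write_user(&rb, &rl, 4) ?
		       "accepted" : "requeued", rl.user_credits);
	return 0;
}

Without the forced sync in the first call, the second call would find the rate limiter in exactly the same state, which is the hang described in the commit message.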


