On Wed, Nov 16, 2016 at 10:36:32PM +0800, Coly Li wrote:
> On 2016/11/16 at 10:19 PM, Coly Li wrote:
> [snip]
> > ---
> >  drivers/md/raid1.c | 9 +++++----
> >  1 file changed, 5 insertions(+), 4 deletions(-)
> >
> > Index: linux-raid1/drivers/md/raid1.c
> > ===================================================================
> > --- linux-raid1.orig/drivers/md/raid1.c
> > +++ linux-raid1/drivers/md/raid1.c
> > @@ -2387,17 +2387,17 @@ static void raid1d(struct md_thread *thr
> [snip]
> >  		while (!list_empty(&tmp)) {
> >  			r1_bio = list_first_entry(&tmp, struct r1bio,
> >  						  retry_list);
> >  			list_del(&r1_bio->retry_list);
> > +			spin_lock_irqsave(&conf->device_lock, flags);
> > +			conf->nr_queued--;
> > +			spin_unlock_irqrestore(&conf->device_lock, flags);
> [snip]
>
> I am now working on two more patches, for a simpler I/O barrier and a
> lockless I/O submit on raid1, where conf->nr_queued will become an
> atomic_t, so the spin lock expense will no longer exist. Just FYI.

I'd like to hold this patch until you post the simpler I/O barrier
series, as the patch by itself doesn't make this path any faster
(the lock/unlock is much heavier than the loop).

Thanks,
Shaohua
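
For reference, a minimal sketch of what the atomic_t conversion Coly
describes could look like in the raid1d() drain loop. This is an
illustration only, assuming a single atomic_t nr_queued field in
struct r1conf; the actual posted series may structure the counter
differently:

	/* struct r1conf: "int nr_queued" becomes "atomic_t nr_queued" */
	while (!list_empty(&tmp)) {
		r1_bio = list_first_entry(&tmp, struct r1bio,
					  retry_list);
		list_del(&r1_bio->retry_list);
		/*
		 * Lockless decrement: atomic_dec() replaces the
		 * spin_lock_irqsave()/spin_unlock_irqrestore() pair
		 * taken around conf->nr_queued-- in the patch above.
		 */
		atomic_dec(&conf->nr_queued);
		/* ... handle r1_bio as before ... */
	}

With that change, raid1d() no longer bounces conf->device_lock once
per r1_bio just to keep the queued-request count accurate; readers of
the count would use atomic_read(&conf->nr_queued) instead of sampling
it under the lock.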