On 2017/2/17 3:56 PM, Coly Li wrote:
> On 2017/2/16 3:04 PM, NeilBrown wrote:
>> On Thu, Feb 16 2017, colyli@xxxxxxx wrote:
>>
>>> @@ -2393,6 +2455,11 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
>>>  		idx = sector_to_idx(r1_bio->sector);
>>>  		conf->nr_queued[idx]++;
>>>  		spin_unlock_irq(&conf->device_lock);
>>> +		/*
>>> +		 * In case freeze_array() is waiting for condition
>>> +		 * get_unqueued_pending() == extra to be true.
>>> +		 */
>>> +		wake_up(&conf->wait_barrier);
>>>  		md_wakeup_thread(conf->mddev->thread);
>>>  	} else {
>>>  		if (test_bit(R1BIO_WriteError, &r1_bio->state))
>>> @@ -2529,9 +2596,7 @@ static void raid1d(struct md_thread *thread)
>>>  					    retry_list);
>>>  		list_del(&r1_bio->retry_list);
>>>  		idx = sector_to_idx(r1_bio->sector);
>>> -		spin_lock_irqsave(&conf->device_lock, flags);
>>>  		conf->nr_queued[idx]--;
>>> -		spin_unlock_irqrestore(&conf->device_lock, flags);
>>
>> Why do you think it is safe to decrement nr_queued without holding
>> the lock? Surely this could race with handle_write_finished, and an
>> update could be lost.
>
> conf->nr_queued[idx] is an integer aligned to a 4-byte address, so
> conf->nr_queued[idx]++ is the same as atomic_inc(&conf->nr_queued[idx]);
> it is an atomic operation. And there is no ordering requirement, so I
> don't need a memory barrier here. This is why I removed the spin lock
> and changed it from atomic_t back to int.
>
>
> IMHO, the problematic location is not here, but in freeze_array(). The
> code now assumes the array is frozen when "get_unqueued_pending(conf) ==
> extra" becomes true. I think that is incorrect.

Hmm, I am wrong here. conf->nr_queued[idx]++ is not atomic. Yes, it
should be atomic_t; I will fix it in the next version.

Coly Li
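
For reference, a rough, not compile-tested sketch of the atomic_t
direction agreed on above. It only shows the counter updates; the
retry_list handling and any remaining device_lock usage around it are
omitted, and the actual v2 patch may differ. The names (nr_queued,
sector_to_idx(), wait_barrier) come from the quoted hunks;
atomic_inc()/atomic_dec()/atomic_read() are the standard kernel atomic
helpers.

	/* in struct r1conf: one counter per barrier bucket */
	atomic_t	*nr_queued;

	/* handle_write_finished(): account the r1bio handed to raid1d */
	idx = sector_to_idx(r1_bio->sector);
	atomic_inc(&conf->nr_queued[idx]);
	/*
	 * In case freeze_array() is waiting for condition
	 * get_unqueued_pending() == extra to be true.
	 */
	wake_up(&conf->wait_barrier);
	md_wakeup_thread(conf->mddev->thread);

	/* raid1d(): r1_bio has just been removed from retry_list;
	 * the counter update itself needs no conf->device_lock
	 */
	idx = sector_to_idx(r1_bio->sector);
	atomic_dec(&conf->nr_queued[idx]);

With atomic_t buckets, get_unqueued_pending() would sum the counters
via atomic_read(), so freeze_array()'s wait condition no longer relies
on plain int increments and decrements being atomic.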