Hello,
I revised the patch based on comments from Jon.
The attached patch should work with the version of mark_region() that may block.
Thanks,
Jun'ichi Nomura wrote:
Jonathan E Brassow wrote:
> could this be solved by doing your patch in rh_dec and just moving the
> atomic_inc in rh_inc? The reason I ask is that the mark_region log
> call can block.
No.
Unless they are serialized, it's possible that rh_inc() sees the
state RH_DIRTY while rh_dec() changes it to RH_CLEAN.
As a result, the region which has I/O in-flight may be freed.
Is it reasonable to call mark_region() unconditionally?
Then we could call it outside of the lock.
>> CPU0                                      CPU1
>> ------------------------------------------------------------------------
>> rh_dec()
>>   if (atomic_dec_and_test(pending))
>>     <the region is still marked dirty>
>>     if (atomic_read(pending) == 0)
>>                                            rh_inc()
>>                                              atomic_inc(pending)
>>                                              if the region is clean,
>>                                                mark the region dirty
>>                                                and remove from clean list;
>>                                              else do nothing
>>       mark the region clean
>>       and move to clean list
diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
--- a/drivers/md/dm-raid1.c
+++ b/drivers/md/dm-raid1.c
@@ -375,6 +375,9 @@ static void rh_inc(struct region_hash *r
 	read_lock(&rh->hash_lock);
 	reg = __rh_find(rh, region);
+	spin_lock_irq(&rh->region_lock);
+	atomic_inc(&reg->pending);
+	spin_unlock_irq(&rh->region_lock);
 	if (reg->state == RH_CLEAN) {
 		rh->log->type->mark_region(rh->log, reg->key);
@@ -384,7 +387,6 @@ static void rh_inc(struct region_hash *r
 		spin_unlock_irq(&rh->region_lock);
 	}
-	atomic_inc(&reg->pending);
 	read_unlock(&rh->hash_lock);
 }
@@ -408,6 +410,10 @@ static void rh_dec(struct region_hash *r
 	if (atomic_dec_and_test(&reg->pending)) {
 		spin_lock_irqsave(&rh->region_lock, flags);
+		if (atomic_read(&reg->pending)) {	/* check race */
+			spin_unlock_irqrestore(&rh->region_lock, flags);
+			return;
+		}
 		if (reg->state == RH_RECOVERING) {
 			list_add_tail(&reg->list, &rh->quiesced_regions);
 		} else {