Hi,
On 2023/08/07 12:51, Corey Hickey wrote:
On 2023-08-06 19:46, Yu Kuai wrote:
can you test the following patch?
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 51a68fbc241c..a85ea19fcf14 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -1266,9 +1266,8 @@ static void r5l_log_flush_endio(struct bio *bio)
list_for_each_entry(io, &log->flushing_ios, log_sibling)
r5l_io_run_stripes(io);
list_splice_tail_init(&log->flushing_ios, &log->finished_ios);
- spin_unlock_irqrestore(&log->io_list_lock, flags);
-
bio_uninit(bio);
+ spin_unlock_irqrestore(&log->io_list_lock, flags);
}
/*
My patch utility didn't like it for some reason, but I applied the
changes manually to get what I think is the same thing. I'll paste the
diff here just in case.
--- drivers/md/raid5-cache.c.orig 2023-08-06 20:26:10.386665042 -0700
+++ drivers/md/raid5-cache.c 2023-08-06 20:31:33.290688590 -0700
@@ -1265,9 +1265,8 @@
list_for_each_entry(io, &log->flushing_ios, log_sibling)
r5l_io_run_stripes(io);
list_splice_tail_init(&log->flushing_ios, &log->finished_ios);
- spin_unlock_irqrestore(&log->io_list_lock, flags);
-
bio_uninit(bio);
+ spin_unlock_irqrestore(&log->io_list_lock, flags);
}
 /*

Yes, this is what I expected.
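
(For anyone following along: below is a rough sketch of how the tail of
r5l_log_flush_endio() reads once this change is applied. It is
reconstructed only from the hunk quoted above; the earlier lines of the
function and the matching spin_lock_irqsave() are assumptions, not part
of the quoted patch.)

static void r5l_log_flush_endio(struct bio *bio)
{
	/* ... earlier part of the function omitted; not shown in the quoted hunk ... */

	/* assumed matching lock for the unlock shown in the hunk */
	spin_lock_irqsave(&log->io_list_lock, flags);
	list_for_each_entry(io, &log->flushing_ios, log_sibling)
		r5l_io_run_stripes(io);
	list_splice_tail_init(&log->flushing_ios, &log->finished_ios);
	/* with the patch, bio_uninit() now runs before the lock is released */
	bio_uninit(bio);
	spin_unlock_irqrestore(&log->io_list_lock, flags);
}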
With a new kernel including this change, I can no longer reproduce the
problem; 12 successful runs seem pretty definitive given the failure
rate I was seeing before.
This was on a newly-recreated RAID-5, and I double-checked that I did
indeed re-enable write-back.
Thanks for the test, I'll send a patch with your tested-by tag soon.
Thank you for this! I wasn't expecting such a fast response, especially
on the weekend.
It's Monday for us, actually 😄
Thanks,
Kuai
-Corey