Re: [REGRESSION] 6.7.1: md: raid5 hang and unresponsive system; successfully bisected

Hi Dan and Junxiao,

On Fri, Mar 1, 2024 at 3:12 PM Dan Moulding <dan@xxxxxxxx> wrote:
>
> > 5. Looks like the block layer or the underlying (scsi/virtio-scsi)
> > layer may have some issue that leaves the io request from the md
> > layer in a partially completed state. I can't see how this can be
> > related to the commit bed9e27baf52 ("Revert "md/raid5: Wait for
> > MD_SB_CHANGE_PENDING in raid5d"")
>
> There is no question that the above-mentioned commit makes this
> problem appear. While it may be that ultimately the root cause lies
> outside the md/raid5 code (I'm not able to make such an assessment), I
> can tell you that this change is what turned it into a runtime
> regression. Prior to that change, I cannot reproduce the problem. One
> of my RAID-5 arrays has been running on every kernel version since
> 4.8 without issue. Then on kernel 6.7.1 the problem appeared within
> hours of running the new code, and it affected not just one but two
> different machines with RAID-5 arrays. With that change reverted, the
> problem is not reproducible. When I recently upgraded to 6.8-rc5, I
> immediately hit the problem again (because the commit hadn't been
> reverted in mainline yet). I'm now running 6.8.0-rc5 on one of my
> affected machines without issue, after reverting that commit on top
> of it.
>
> It would seem a very unlikely coincidence that a careful bisection of
> all changes between 6.7.0 and 6.7.1 pointed at that commit as the
> culprit, that the change is to raid5.c, and that the hang happens in
> the raid5 kernel task, if there were no connection. :)
>
> > Are you able to reproduce using some regular scsi disk, would like to
> > rule out whether this is related with virtio-scsi?
>
> The first time I hit this problem was on two bare-metal machines, one
> server and one desktop, with different hardware. I then set up this
> virtual machine just to reproduce the problem in a different
> environment (and to see if I could reproduce it with a distribution
> kernel, since the other machines are running custom kernel
> configurations). So I'm able to reproduce it on:
>
> - A virtual machine
> - Bare-metal machines
> - A custom kernel configuration with code straight from stable and mainline
> - The Fedora 39 distribution kernel
>
> > And I see from the vmcore that the kernel version is 6.8.0-rc5; is
> > this the official mainline v6.8-rc5 without any other patches?
>
> Yes, this particular vmcore was from the Fedora 39 VM I used to
> reproduce the problem, but running the straight 6.8.0-rc5 mainline
> code (so that you wouldn't have to worry about any possible
> interference from distribution patches).

Thanks to both of you for looking into the issue and running various
tests.

I also tried again to reproduce the issue, but haven't had any luck.
While I continue trying to reproduce it, I will also send the revert
for the 6.8 kernel. We have been fighting multiple issues recently, so
we didn't have much time for this one. Fortunately, we now have proper
fixes for most of the other issues, so we should have more time to
look into this.
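For reference, the logic that the reverted commit had added to
raid5d() looked roughly like the sketch below. This is a paraphrase
for illustration only, not the exact upstream hunk (the flag, wait
queue, lock, and helpers are the real md ones; see the commit that
bed9e27baf52 reverts for the actual diff):

	/*
	 * Sketch: before handling more stripes, raid5d dropped
	 * device_lock and blocked until the pending superblock
	 * update completed.
	 */
	if (test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) {
		spin_unlock_irq(&conf->device_lock);
		md_check_recovery(mddev);
		wait_event(mddev->sb_wait,
			   !test_bit(MD_SB_CHANGE_PENDING,
				     &mddev->sb_flags));
		spin_lock_irq(&conf->device_lock);
	}

If the superblock write can itself depend on work that only raid5d
does, a wait like this inside the raid5d loop can deadlock, which
would be consistent with the hang in the raid5 kernel task described
above.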

Thanks again,
Song
