On 8/10/23 14:54, Joel Fernandes wrote:
On Thu, Aug 10, 2023 at 10:55:16AM -0700, Paul E. McKenney wrote:
On Wed, Aug 09, 2023 at 02:45:44PM -0700, Guenter Roeck wrote:
On 8/9/23 13:39, Joel Fernandes wrote:
On Wed, Aug 9, 2023 at 4:38 PM Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
On 8/9/23 13:14, Joel Fernandes wrote:
On Wed, Aug 09, 2023 at 12:25:48PM -0700, Guenter Roeck wrote:
On Wed, Aug 09, 2023 at 02:35:59PM -0400, Joel Fernandes wrote:
On Wed, Aug 9, 2023 at 12:18 PM Guenter Roeck <linux@xxxxxxxxxxxx> wrote:
On 8/9/23 06:53, Joel Fernandes wrote:
On Wed, Aug 09, 2023 at 12:40:36PM +0200, Greg Kroah-Hartman wrote:
This is the start of the stable review cycle for the 5.15.126 release.
There are 92 patches in this series, all of which will be posted as a response
to this one. If anyone has any issues with these being applied, please
let me know.
Responses should be made by Fri, 11 Aug 2023 10:36:10 +0000.
Anything received after that time might be too late.
The whole patch series can be found in one patch at:
https://www.kernel.org/pub/linux/kernel/v5.x/stable-review/patch-5.15.126-rc1.gz
or in the git tree and branch at:
git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable-rc.git linux-5.15.y
and the diffstat can be found below.
Not necessarily new with 5.15 stable, but 3 of the 19 rcutorture scenarios
hang with this -rc: TREE04, TREE07, and TASKS03.
5.15 has a known stop_machine issue where it hangs after 1.5 hours of
CPU-hotplug rcutorture testing. tglx and I are continuing to debug this.
The issue shows up only on 5.15 stable kernels, not on mainline.
Do you by any chance have a crash pattern that we could use to find the crash
in ChromeOS crash logs? No idea if that would help, but it could provide some
additional data points.
The pattern shows up as a hard hang: the system is unresponsive and all CPUs
are stuck in stop_machine. Sometimes it recovers from the hang on its own,
and then RCU immediately gives stall warnings. It takes 1.5 hours to
reproduce, and sometimes it does not happen for several hours.
It appears related to CPU hotplug, since gdb showed me that most of the CPUs
are spinning in multi_cpu_stop() / stop_machine after the hang.
Hmm, we do see lots of soft lockups with multi_cpu_stop() in the backtrace,
though with v5.4.y rather than v5.15.y. The actual hang is in stop_machine_yield().
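To make the failure mode concrete, here is a minimal userspace sketch of the
multi_cpu_stop()-style rendezvous. It is a hypothetical simplification, not
the code from kernel/stop_machine.c: threads stand in for CPUs, sched_yield()
stands in for stop_machine_yield(), and the state names and thread count are
made up. It illustrates why, once one participant stalls and stops acking
state transitions, every other CPU appears stuck spinning in the yield loop:

#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 4

enum stop_state {
        STOP_NONE, STOP_PREPARE, STOP_DISABLE_IRQ, STOP_RUN, STOP_EXIT
};

static atomic_int state = STOP_NONE;
static atomic_int acks;

static void ack_state(void)
{
        /* The last thread to ack advances the global state machine. */
        if (atomic_fetch_add(&acks, 1) + 1 == NTHREADS) {
                atomic_store(&acks, 0);
                atomic_fetch_add(&state, 1);
        }
}

static void *stopper(void *arg)
{
        int curstate = STOP_NONE;

        (void)arg;
        do {
                sched_yield();  /* stands in for stop_machine_yield() */
                int newstate = atomic_load(&state);
                /*
                 * If one thread never reaches ack_state(), the state
                 * machine never advances and everyone else spins here,
                 * which is exactly what the gdb stacks show.
                 */
                if (newstate != curstate) {
                        curstate = newstate;
                        ack_state();
                }
        } while (curstate != STOP_EXIT);
        return NULL;
}

int main(void)
{
        pthread_t t[NTHREADS];

        atomic_store(&state, STOP_PREPARE);
        for (int i = 0; i < NTHREADS; i++)
                pthread_create(&t[i], NULL, stopper, NULL);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(t[i], NULL);
        puts("all stoppers exited");
        return 0;
}

Build with "cc -pthread"; in the healthy case it walks through all the states
and exits, and making one thread skip ack_state() reproduces the
everyone-spins picture.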
Interesting. It looks similar as far as the gdb stack dump goes; here are
the stacks I dumped from the hang I referred to:
https://paste.debian.net/1288308/
That link gives me "Entry not found".
Yeah, that was weird. Here it is again: https://pastebin.com/raw/L3nv1kH2
I found a couple of crash reports from chromeos-5.10, one of them complaining
about RCU issues. I sent you links via IM. Nothing from 5.15 or later, though.
Is the crash showing the eternally refiring timer that is fixed by this commit?
53e87e3cdc15 ("timers/nohz: Last resort update jiffies on nohz_full IRQ entry")
Ah, I was just replying. Since yesterday I have been seeing really good
results after applying the following 3 commits:
53e87e3cdc15 ("timers/nohz: Last resort update jiffies on nohz_full IRQ entry")
5417ddc1cf1f ("timers/nohz: Switch to ONESHOT_STOPPED in the low-res handler when the tick is stopped")
a1ff03cd6fb9 ("tick: Detect and fix jiffies update stall")
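For reference, the core of the first one is a last-resort jiffies update on
IRQ entry whenever the tick is stopped, precisely because the timekeeping CPU
can be busy-looping with interrupts disabled in stop_machine. A paraphrased
sketch of tick_nohz_irq_enter() from kernel/time/tick-sched.c after that
commit (not the verbatim diff):

static inline void tick_nohz_irq_enter(void)
{
        struct tick_sched *ts = this_cpu_ptr(&tick_cpu_sched);
        ktime_t now;

        if (!ts->idle_active && !ts->tick_stopped)
                return;
        now = ktime_get();
        if (ts->idle_active)
                tick_nohz_stop_idle(ts, now);
        /*
         * Last resort: the timekeeping CPU may be busy-looping with
         * interrupts disabled (typically stop_machine), leaving jiffies
         * stale, so update it from IRQ entry whenever the tick is stopped.
         */
        if (ts->tick_stopped)
                tick_nohz_update_jiffies(now);
}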
Would those also apply to v5.10.y, or just to v5.15.y?
Thanks,
Guenter
5417ddc1cf1f also mentions a "tick storm", which is exactly what I was
seeing.
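The idea there, again paraphrased rather than the exact hunk, is that once
the tick is stopped the low-res tick handler stops reprogramming the
clockevent and instead parks it in ONESHOT_STOPPED, so a device that is not
truly one-shot cannot keep refiring:

        if (unlikely(ts->tick_stopped)) {
                /*
                 * The clockevent is not being reprogrammed here, so park
                 * it: tick_program_event() treats KTIME_MAX as a request
                 * to switch the device to ONESHOT_STOPPED, ending the
                 * storm of spurious tick interrupts.
                 */
                tick_program_event(KTIME_MAX, 1);
                return;
        }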
I did a lengthy test and everything is looking good. I'll send these out to
the stable list.
thanks,
- Joel