On Fri, Sep 8, 2023 at 7:41 AM Frederic Weisbecker <frederic@xxxxxxxxxx> wrote:
>
> On Fri, Sep 08, 2023 at 01:27:06AM -0700, Paul E. McKenney wrote:
> > On Thu, Sep 07, 2023 at 08:51:43PM -0400, Joel Fernandes wrote:
> > > On Thu, Sep 7, 2023 at 4:03 PM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > > On Sep 7, 2023, at 12:23 PM, Paul E. McKenney <paulmck@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Sep 07, 2023 at 09:17:15AM -0400, Joel Fernandes wrote:
> > > > >> Hi,
> > > > >> Just started seeing this on 6.5 stable. It is new and the first occurrence:
> > > > >>
> > > > >> TREE04 no success message, 234 successful version messages
> > > > >> WARNING: TREE04 GP HANG at 14 torture stat 2
> > > > >> [   38.371120] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g1253 f0x0 ->state 0x2 cpu 6
> > > > >> [   38.388342] Call Trace:
> > > > >> [   53.741039] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g3637 f0x2 ->state 0x2 cpu 6
> > > > >> [   69.093462] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g5501 f0x0 ->state 0x2 cpu 6
> > > > >> [   84.450028] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g10505 f0x0 ->state 0x2 cpu 6
> > > > >> [   99.815871] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g13781 f0x0 ->state 0x2 cpu 6
> > > > >> [  115.166476] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g16544 f0x0 ->state 0x2 cpu 6
> > > > >> [  130.550116] ??? Writer stall state RTWS_COND_SYNC_FULL(10) g18941 f0x0 ->state 0x2 cpu 6
> > > > >> [..]
> > > > >>
> > > > >> All logs:
> > > > >> http://box.joelfernandes.org:9080/job/rcutorture_stable/job/linux-6.5.y/17/artifact/tools/testing/selftests/rcutorture/res/2023.09.07-04.10.25/TREE04/
> > > > >
> > > > > Huh. Does this happen for you in v6.5 mainline?
> > > > >
> > > > > Both the code under test (full-state polled grace periods) and the
> > > > > rcutorture test code are fairly new, so there is some reason for general
> > > > > suspicion. ;-)
> > > >
> > > > Ah. I never saw it on either 6.5 mainline or stable till today. Even on stable
> > > > I only ever saw it this once. On mainline I have not seen it yet, but I do test
> > > > stable much more since I have been on stable maintenance duty ;-).
> > >
> > > I did a couple of long runs and I am not able to reproduce it anymore. :-/
> >
> > I know that feeling!
>
> Same here, this is after all the reason why we keep the tick dependency within
> the hotplug process without really knowing why :o)

Heh. I have been running into another intermittent one as well, the rcutorture
boost failure, which happens once in 10-15 runs or so.

I was thinking of running the following configuration on an automated, regular
basis, to at least provide a better clue on the lucky run that catches an
issue. The downside is that it might change timing enough to hide bugs. I
could also make it submit logs automatically to the list on such occurrences,
but one step at a time and all that. I do need to add (hopefully less noisy)
tick/timer-related trace events.

# Define the bootargs array.
bootargs=(
	"ftrace_dump_on_oops"
	"panic_on_warn=1"
	"sysctl.kernel.panic_on_rcu_stall=1"
	"sysctl.kernel.max_rcu_stall_to_panic=1"
	"trace_buf_size=10K"
	"traceoff_on_warning=1"
	"panic_print=0x1f"	# To dump held locks, mem and other info.
)

# Define the trace events array passed to bootargs.
trace_events=(
	"sched:sched_switch"
	"sched:sched_waking"
	"rcu:rcu_callback"
	"rcu:rcu_fqs"
	"rcu:rcu_quiescent_state_report"
	"rcu:rcu_grace_period"
)
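For concreteness, here is a minimal, untested sketch of how these arrays could
be fed to the in-tree rcutorture runner. It assumes kvm.sh's --configs,
--duration, and --bootargs options; the TREE04 config and the 30-minute
duration below are illustrative placeholders, not part of the plan above:

#!/bin/bash
# Sketch: turn the arrays above into a single kvm.sh invocation.
# Assumes the bootargs and trace_events arrays are already defined
# as shown earlier in this message.

# Trace events are enabled at boot via a comma-separated trace_event= list,
# so join the array elements with commas in a subshell.
trace_event_arg="trace_event=$(IFS=,; echo "${trace_events[*]}")"

# kvm.sh takes one space-separated string of kernel boot parameters.
all_bootargs="${bootargs[*]} ${trace_event_arg}"

# TREE04 and the 30-minute duration are illustrative choices.
tools/testing/selftests/rcutorture/bin/kvm.sh \
	--configs "TREE04" \
	--duration 30 \
	--bootargs "${all_bootargs}"

Joining the events into a single trace_event= parameter keeps the boot command
line short and matches how the kernel parses boot-time trace enabling.

Thanks.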