On Fri, Jan 17, 2020 at 08:34:58PM -0800, Paul E. McKenney wrote:
> On Fri, Jan 17, 2020 at 09:34:34PM -0500, Joel Fernandes wrote:
> > On Fri, Jan 17, 2020 at 03:17:56PM -0800, Paul E. McKenney wrote:
> > [...]
> > > But rcutorture already has tests for RCU priority boosting. Or are
> > > those failing in some way?
> >
> > Yes, there are tests, but I thought of just a simple experiment to study
> > this, purely since it is existing RCU kernel code that I'd like to
> > understand. Daniel and I are also looking into possibly using run-time /
> > trace-based verification for some of these behaviors.
>
> The functionality of rcu_state.cbovld should make that more entertaining.
>
> But I would guess that the initial model would ignore memory footprint
> and just model RCU priority boosting as kicking in a fixed time after
> the beginning of the grace period.
>
> Or do you guys have something else in mind?

Yes, that is the idea, and then turn the model into a unit test (for the
measurement). Though I am also personally trying to convince myself that a
unit test based on a model is better than the test in the kernel module I
just posted. We're just looking at applying Daniel's modeling work to the
verification of behaviors like these.

A poor-man's alternative to a model-based test is just making sure that
synchronize_rcu() finishes in a bounded period of time (basically test by
observation rather than test by model), similar to what my kernel module
did; there is a rough sketch of the idea at the end of this mail. But I
guess a model-based test would be more accurate and stricter about what is
considered a pass vs. a fail.

I was also studying SRCU and could not find tracepoints, so I am thinking
of adding some to aid the study (also sketched at the end of this mail). I
know that for Tree-SRCU you are using timers and workqueues, but the
concept hasn't changed much since [1] was written, right?

[1] https://lwn.net/Articles/202847/

thanks!

- Joel

> 							Thanx, Paul
>
> PS. Steve, yes, I do well remember our earlier discussions about readers
> inheriting priority from the highest-priority synchronize_rcu(). ;-)
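
For concreteness, here is a rough, untested sketch of the bounded-time
check mentioned above (not the module I actually posted; the init function
name and the 5-second bound are arbitrary):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/ktime.h>
#include <linux/rcupdate.h>

static int __init rcu_bound_test_init(void)
{
	ktime_t start = ktime_get();
	s64 delta_ms;

	/* Wait for a full grace period and time it. */
	synchronize_rcu();
	delta_ms = ktime_ms_delta(ktime_get(), start);

	/* Arbitrary pass/fail bound; a model would derive this instead. */
	WARN(delta_ms > 5000, "synchronize_rcu() took %lld ms\n", delta_ms);

	return 0;
}

static void __exit rcu_bound_test_exit(void)
{
}

module_init(rcu_bound_test_init);
module_exit(rcu_bound_test_exit);
MODULE_LICENSE("GPL");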
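
And this is the sort of SRCU tracepoint I have in mind, modeled on the
existing rcu_grace_period event in include/trace/events/rcu.h; the event
name and fields below are made up, nothing like it exists yet:

/* Hypothetical include/trace/events/srcu.h */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM srcu

#if !defined(_TRACE_SRCU_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SRCU_H

#include <linux/tracepoint.h>

TRACE_EVENT(srcu_grace_period,

	TP_PROTO(const char *ssp_name, unsigned long gp_seq,
		 const char *gpevent),

	TP_ARGS(ssp_name, gp_seq, gpevent),

	TP_STRUCT__entry(
		__field(const char *, ssp_name)
		__field(unsigned long, gp_seq)
		__field(const char *, gpevent)
	),

	TP_fast_assign(
		__entry->ssp_name = ssp_name;
		__entry->gp_seq = gp_seq;
		__entry->gpevent = gpevent;
	),

	TP_printk("%s gp_seq=%lu event=%s",
		  __entry->ssp_name, __entry->gp_seq, __entry->gpevent)
);

#endif /* _TRACE_SRCU_H */

#include <trace/define_trace.h>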