On Tue, Nov 03, 2020 at 03:47:14PM +0100, Frederic Weisbecker wrote:
> On Tue, Nov 03, 2020 at 09:25:59AM -0500, Joel Fernandes (Google) wrote:
> > With earlier patches, the negative counting of the unsegmented list
> > cannot be used to adjust the segmented one. To fix this, sample the
> > unsegmented length in advance, and use it after CB execution to adjust
> > the segmented list's length.
> >
> > Reviewed-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > Suggested-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
> > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
>
> This breaks bisection, you need to either fix up the previous patch
> by adding this diff inside or better yet: expand what you did
> in "rcu/tree: Make rcu_do_batch count how many callbacks were executed"
> to also handle srcu before introducing the segcb count.

Since doing the latter is a lot more tedious and I want to get to reviewing others' RCU patches today :), I just squashed the suggestion into the counters patch to fix bisection:

https://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git/commit/?h=rcu/segcb-counts&id=595e3a65eeef109cb8fcbfcc114fd3ea2064b873

Hope that's OK. Also, so that I don't have to resend everything, here is the final branch if Paul wants to take it:

git://git.kernel.org/pub/scm/linux/kernel/git/jfern/linux.git (branch rcu/segcb-counts)

Thank you for your time, Frederic!

- Joel