On Tue, Jan 02, 2024 at 01:52:26PM +0100, Uladzislau Rezki wrote:
> Hello, Paul!
>
> Sorry for the late answer, it is because of the holidays :)
>
> > > > > The problem is that we are limited in the number of "wait-heads" which
> > > > > we add as a marker node for this/current grace period. If there are more
> > > > > clients and there is no wait-head available, it means that the system,
> > > > > i.e. the deferred kworker, is slow in processing callbacks, thus all
> > > > > wait-nodes are in use.
> > > > >
> > > > > That is why we need an extra grace period: basically to retry one more
> > > > > time. It might be that the current grace period is not able to handle
> > > > > all users because the system is running really slowly, but this is
> > > > > rather a corner case and is not a problem.
> > > >
> > > > But in that case, the real issue is not the need for an extra grace
> > > > period, but rather the need for the wakeup processing to happen, correct?
> > > > Or am I missing something subtle here?
> > > >
> > > Basically, yes. If we had a spare dummy-node we could process the users
> > > within the current GP (no need for an extra one). We may not have one
> > > because of, as you pointed out:
> > >
> > > - a wake-up issue, i.e. wake-up time + when we are on_cpu;
> > > - slow list processing, for example due to priority: the kworker is
> > >   not given enough CPU time to make progress, thus "dummy-nodes"
> > >   are not released in time for reuse.
> > >
> > > Therefore, an extra GP is requested if there is a high flow of
> > > synchronize_rcu() users and the kworker is not able to make progress
> > > in time.
> > >
> > > For example, 60K+ parallel synchronize_rcu() users will trigger it.
> >
> > OK, but what bad thing would happen if that was moved to precede the
> > rcu_seq_start(&rcu_state.gp_seq)?  That way, the requested grace period
> > would be the same as the one that is just now starting.
> >
> > Something like this?
> >
> >	start_new_poll = rcu_sr_normal_gp_init();
> >
> >	/* Record GP times before starting GP, hence rcu_seq_start(). */
> >	rcu_seq_start(&rcu_state.gp_seq);
> >	ASSERT_EXCLUSIVE_WRITER(rcu_state.gp_seq);
> >
> I had a concern about the case when rcu_sr_normal_gp_init() handles what
> we currently have, in terms of requests. Right after that there may be
> extra sync requests which invoke start_poll_synchronize_rcu(), but since
> a GP has already been requested, it will not request an extra one. So the
> "last" incoming users might not be processed.
>
> That is why I have placed the rcu_sr_normal_gp_init() after gp_seq is
> updated.
>
> I might be missing something, so please comment. Apart from that, we can
> move it as you proposed.

Couldn't that possibility be handled by a check in rcu_gp_cleanup()?

							Thanx, Paul