On Wed, Oct 25, 2023 at 05:13:27PM +0200, Frederic Weisbecker wrote:
> On Wed, Oct 25, 2023 at 04:09:13PM +0200, Uladzislau Rezki (Sony) wrote:
> > +/*
> > + * Helper function for rcu_gp_cleanup().
> > + */
> > +static void rcu_sr_normal_gp_cleanup(void)
> > +{
> > +        struct llist_node *head, *tail, *pos;
> > +        int i = 0;
> > +
> > +        tail = READ_ONCE(sr.wait_tail);
> > +        head = llist_del_all(&sr.wait);
>
> This could be llist_empty() first to do a quick
> cheap check. And then __llist_del_all() here because
> it appears nothing else than gp kthread can touch sr.wait.
>
No problem, I can fix it. Initially I had such a check first!

> > +
> > +        llist_for_each_safe(pos, head, head) {
>
> Two times head intended here? There should be some
> temporary storage in the middle.
>
Yes, it is intentionally done. The head is updated, i.e. shifted to the
next node, because we process users directly from the GP kthread. The
number is limited to 5; all the rest is deferred.

> > +                rcu_sr_normal_complete(pos);
> > +
> > +                if (++i == MAX_SR_WAKE_FROM_GP) {
> > +                        /* If last, process it also. */
> > +                        if (head && !head->next)
> > +                                continue;
> > +                        break;
> > +                }
> > +        }
> > +
> > +        if (head) {
> > +                /* Can be not empty. */
> > +                llist_add_batch(head, tail, &sr.done);
> > +                queue_work(system_highpri_wq, &sr_normal_gp_cleanup);
>
> So you can have:
>
> * Queue to sr.curr is atomic fully ordered
> * Check and move from sr.curr to sr.wait is atomic fully ordered
> * Check from sr.wait can have a quick unatomic unordered
>   llist_empty() check. Then extract unatomic unordered as well.
> * If too many, move atomic/ordered to sr.done.
>
> Am I missing something?
>
If too many, move them to sr.done and kick the helper. The sr.wait list
cannot be touched until rcu_sr_normal_gp_cleanup() has completed, i.e.:

<snip>
GP-kthread (same and one task context):
        rcu_sr_normal_gp_cleanup();
        wait for a grace period;
        rcu_sr_normal_gp_cleanup();
<snip>

Am I missing your point?

--
Uladzislau Rezki
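
For anyone following along, here is a minimal userspace sketch of the
traversal idiom discussed above. It is not kernel code: struct node,
PROCESS_LIMIT and the printf()s are made-up stand-ins that only mimic
how llist_for_each_safe(pos, head, head) advances the list head on
every step, so that an early break leaves head pointing at the first
unprocessed node.

<snip>
#include <stdio.h>
#include <stdlib.h>

struct node {
        struct node *next;
        int id;
};

#define PROCESS_LIMIT 5 /* plays the role of MAX_SR_WAKE_FROM_GP */

int main(void)
{
        struct node *head = NULL, *pos;
        int i;

        /* Build a list of eight nodes; head ends up at id 7. */
        for (i = 0; i < 8; i++) {
                struct node *n = malloc(sizeof(*n));

                n->id = i;
                n->next = head;
                head = n;
        }

        /*
         * Process at most PROCESS_LIMIT nodes. "head" is shifted to
         * the next node before each body runs, exactly like
         * llist_for_each_safe(pos, head, head) in the patch.
         */
        i = 0;
        while ((pos = head)) {
                head = head->next;
                printf("processing node %d\n", pos->id);
                free(pos);

                if (++i == PROCESS_LIMIT) {
                        /* If only one node is left, take it too. */
                        if (head && !head->next)
                                continue;
                        break;
                }
        }

        /* Whatever remains is what the patch defers to a workqueue. */
        for (pos = head; pos; pos = pos->next)
                printf("deferred node %d\n", pos->id);

        while (head) {
                pos = head;
                head = head->next;
                free(pos);
        }

        return 0;
}
<snip>

With eight nodes this prints five "processing" lines and three
"deferred" lines, which is exactly the split the patch hands over to
llist_add_batch() and the workqueue.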
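
And for completeness, a sketch of the deferred side implied by the
queue_work() above. The real handler is not shown in this excerpt, so
the name rcu_sr_normal_gp_cleanup_work() and its body are assumptions:
the only point it illustrates is that sr.done, unlike sr.wait, can be
fed while the handler runs, so the atomic llist_del_all() is used.

<snip>
/* Assumed shape of the handler behind &sr_normal_gp_cleanup. */
static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work)
{
        struct llist_node *done, *pos, *t;

        /* sr.done may be appended to concurrently by the GP kthread. */
        done = llist_del_all(&sr.done);

        llist_for_each_safe(pos, t, done)
                rcu_sr_normal_complete(pos);
}
<snip>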