Re: [PATCH v5 2/4] rcu: Reduce synchronize_rcu() latency

On Mon, Mar 04, 2024 at 12:55:47PM +0100, Frederic Weisbecker wrote:
> On Wed, Feb 28, 2024 at 07:04:21PM +0100, Uladzislau Rezki wrote:
> > On Tue, Feb 27, 2024 at 12:07:32AM +0100, Frederic Weisbecker wrote:
> > > On Tue, Feb 20, 2024 at 07:31:13PM +0100, Uladzislau Rezki (Sony) wrote:
> > > > +static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work)
> > > > +{
> > > > +	struct llist_node *done, *rcu, *next, *head;
> > > > +
> > > > +	/*
> > > > +	 * This work can execute while a new done tail is
> > > > +	 * being updated by the grace period kthread in
> > > > +	 * rcu_sr_normal_gp_cleanup(). So, reads and updates
> > > > +	 * of the done tail need to follow acquire-release
> > > > +	 * semantics.
> > > > +	 *
> > > > +	 * Given that workqueue semantics guarantee that a single
> > > > +	 * work item cannot be executed concurrently by multiple
> > > > +	 * kworkers, the done-tail list manipulations are protected
> > > > +	 * here.
> > > > +	 */
> > > > +	done = smp_load_acquire(&rcu_state.srs_done_tail);
> > > > +	if (!done)
> > > > +		return;
> > > > +
> > > > +	WARN_ON_ONCE(!rcu_sr_is_wait_head(done));
> > > > +	head = done->next;
> > > > +	done->next = NULL;
> > > 
> > > Can the following race happen?
> > > 
> > > CPU 0                                                   CPU 1
> > > -----                                                   -----
> > > 
> > > // wait_tail == HEAD1
> > > rcu_sr_normal_gp_cleanup() {
> > >     // has passed SR_MAX_USERS_WAKE_FROM_GP
> > >     wait_tail->next = next;
> > >     // done_tail = HEAD1
> > >     smp_store_release(&rcu_state.srs_done_tail, wait_tail);
> > >     queue_work() {
> > >         test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))
> > >         __queue_work()
> > >     }
> > > }
> > > 
> > >                                                       set_work_pool_and_clear_pending()
> > >                                                       rcu_sr_normal_gp_cleanup_work() {
> > > // new GP, wait_tail == HEAD2
> > > rcu_sr_normal_gp_cleanup() {
> > >     // executes all completions, but stops at HEAD1
> > >     wait_tail->next = HEAD1;
> > >     // done_tail = HEAD2
> > >     smp_store_release(&rcu_state.srs_done_tail, wait_tail);
> > >     queue_work() {
> > >         test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))
> > >         __queue_work()
> > >     }
> > > }
> > >                                                           // done = HEAD2
> > >                                                           done = smp_load_acquire(&rcu_state.srs_done_tail);
> > >                                                           // head = HEAD1
> > >                                                           head = done->next;
> > >                                                           done->next = NULL;
> > >                                                           llist_for_each_safe() {
> > >                                                               // completes all callbacks, release HEAD1
> > >                                                           }
> > >                                                       }
> > >                                                       // Process second queue
> > >                                                       set_work_pool_and_clear_pending()
> > >                                                       rcu_sr_normal_gp_cleanup_work() {
> > >                                                           // done = HEAD2
> > >                                                           done = smp_load_acquire(&rcu_state.srs_done_tail);
> > > 
> > > // new GP, wait_tail == HEAD3
> > > rcu_sr_normal_gp_cleanup() {
> > >     // Finds HEAD2 with ->next == NULL at the end
> > >     rcu_sr_put_wait_head(HEAD2)
> > >     ...
> > > 
> > > // A few more GPs later
> > > rcu_sr_normal_gp_init() {
> > >      HEAD2 = rcu_sr_get_wait_head();
> > >      llist_add(HEAD2, &rcu_state.srs_next);
> > >                                                           // head == rcu_state.srs_next
> > >                                                           head = done->next;
> > >                                                           done->next = NULL;
> > >                                                           llist_for_each_safe() {
> > >                                                               // EXECUTE CALLBACKS TOO EARLY!!!
> > >                                                           }
> > >                                                       }
> > Looks like it. To address this, we should not release the head in the GP
> > kthread.
> 
> But then you have to unconditionally schedule the work, right? Otherwise the
> HEADs are not released. And that means dropping this patch (right now I don't
> have a better idea).
>
The easiest way is to drop the patch. Alternatively, we can address it with:

<snip>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 31f3a61f9c38..9aa2cd46583e 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1661,16 +1661,8 @@ static void rcu_sr_normal_gp_cleanup(void)
 	 * wait-head is released if last. The worker is not kicked.
 	 */
 	llist_for_each_safe(rcu, next, wait_tail->next) {
-		if (rcu_sr_is_wait_head(rcu)) {
-			if (!rcu->next) {
-				rcu_sr_put_wait_head(rcu);
-				wait_tail->next = NULL;
-			} else {
-				wait_tail->next = rcu;
-			}
-
+		if (rcu_sr_is_wait_head(rcu))
 			break;
-		}
 
 		rcu_sr_normal_complete(rcu);
 		// It can be last, update a next on this step.
<snip>

i.e. the processing of users from the GP kthread is still there. The work
is then triggered to perform the final complete() calls (if there are
remaining users) and to release the wait-heads, so we do not race anymore.
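
For illustration, here is a minimal sketch (not the actual patch, and
ignoring the handling of the done-tail head itself) of how the worker side
could then look: since the GP kthread no longer releases wait-heads, the
handler both completes the remaining users and releases every wait-head it
walks past, so a head can never be recycled while the worker still
dereferences it:

<snip>
static void rcu_sr_normal_gp_cleanup_work(struct work_struct *work)
{
	struct llist_node *done, *rcu, *next, *head;

	/* Pairs with smp_store_release() in rcu_sr_normal_gp_cleanup(). */
	done = smp_load_acquire(&rcu_state.srs_done_tail);
	if (!done)
		return;

	WARN_ON_ONCE(!rcu_sr_is_wait_head(done));
	head = done->next;
	done->next = NULL;

	llist_for_each_safe(rcu, next, head) {
		/* Wait-heads are released here, not in the GP kthread. */
		if (rcu_sr_is_wait_head(rcu)) {
			rcu_sr_put_wait_head(rcu);
			continue;
		}

		rcu_sr_normal_complete(rcu);
	}
}
<snip>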

I am OK with either option. Dropping the patch will certainly make it
simpler.

--
Uladzislau Rezki




