Re: [PATCH] CodeSamples/count: change full memory barrier to partial one

On Thu, Apr 06, 2023 at 09:09:22AM +0900, Akira Yokosawa wrote:
> On Wed,  5 Apr 2023 12:56:41 -0400, Alan Huang wrote:
> > The original memory barriers in count_stat_eventual.c ensure that
> > the write to global_count happens before the write to stopflag, and
> > that the read from stopflag happens before the later read from
> > global_count.
> > 
> > Thus, smp_load_acquire and smp_store_release will suffice.
> > 
> > Signed-off-by: Alan Huang <mmpgouride@xxxxxxxxx>
> > ---
> >  CodeSamples/count/count_stat_eventual.c | 6 ++----
> >  count/count.tex                         | 6 +++---
> >  2 files changed, 5 insertions(+), 7 deletions(-)
> > 
> > diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
> > index 967644de..7157ee0e 100644
> > --- a/CodeSamples/count/count_stat_eventual.c
> > +++ b/CodeSamples/count/count_stat_eventual.c
> > @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
> >  		WRITE_ONCE(global_count, sum);
> >  		poll(NULL, 0, 1);
> >  		if (READ_ONCE(stopflag)) {
> > -			smp_mb();
> > -			WRITE_ONCE(stopflag, stopflag + 1);
> > +			smp_store_release(&stopflag, stopflag + 1);
> >  		}
> >  	}
> >  	return NULL;
> > @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
> >  void count_cleanup(void)				//\lnlbl{cleanup:b}
> >  {
> >  	WRITE_ONCE(stopflag, 1);
> > -	while (READ_ONCE(stopflag) < 3)
> > +	while (smp_load_acquire(&stopflag) < 3)
> >  		poll(NULL, 0, 1);
> > -	smp_mb();
> >  }							//\lnlbl{cleanup:e}
> >  //\end{snippet}
> >  
> > diff --git a/count/count.tex b/count/count.tex
> > index 80ada104..8ab67e2e 100644
> > --- a/count/count.tex
> > +++ b/count/count.tex
> > @@ -1013,9 +1013,9 @@ between passes.
> >  
> >  The \co{count_cleanup()} function on
> >  \clnrefrange{cleanup:b}{cleanup:e} coordinates termination.
> > -The calls to \co{smp_mb()} here and in \co{eventual()} ensure
> > -that all updates to \co{global_count} are visible to code following
> > -the call to \co{count_cleanup()}.
> > +The call to \co{smp_load_acquire()} here and the call to \co{smp_store_release()}
> > +in \co{eventual()} ensure that all updates to \co{global_count} are visible
> > +to code following the call to \co{count_cleanup()}.
> >  
> >  This approach gives extremely fast counter read-out while still
> >  supporting linear counter-update scalability.
> 
> Looks good to me!  Thanks.
> 
> Reviewed-by: Akira Yokosawa <akiyks@xxxxxxxxx>

Queued and pushed with the inevitable wordsmithing, thank you both!

							Thanx, Paul
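
[Editorial note: the release/acquire handshake discussed in the patch can
be illustrated outside the book's CodeSamples harness.  The following is a
minimal standalone sketch, not the book's code: it uses C11 atomics, with
atomic_store_explicit(..., memory_order_release) standing in for
smp_store_release() and atomic_load_explicit(..., memory_order_acquire)
standing in for smp_load_acquire(); variable and function names here are
illustrative only.  The worker's final update to global_count is ordered
before its store-release to stopflag, and the cleanup path's load-acquire
ensures that update is visible once the flag value is observed.]

	/* Minimal sketch of the release/acquire flag handshake. */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <unistd.h>

	static long global_count;	/* plain accesses, ordered by the flag */
	static atomic_int stopflag;

	static void *eventual(void *arg)
	{
		while (atomic_load_explicit(&stopflag,
					    memory_order_relaxed) < 1) {
			global_count++;		/* stand-in for the summation pass */
			usleep(1000);
		}
		/* Release: the updates to global_count above are ordered
		 * before this store to stopflag. */
		atomic_store_explicit(&stopflag, 2, memory_order_release);
		return NULL;
	}

	int main(void)
	{
		pthread_t tid;

		pthread_create(&tid, NULL, eventual, NULL);
		usleep(10000);
		atomic_store_explicit(&stopflag, 1, memory_order_relaxed);
		/* Acquire: once stopflag reads as 2, the worker's final
		 * value of global_count is guaranteed to be visible. */
		while (atomic_load_explicit(&stopflag,
					    memory_order_acquire) < 2)
			usleep(1000);
		printf("global_count = %ld\n", global_count);
		pthread_join(tid, NULL);
		return 0;
	}

[Compiled with "cc -pthread", the printed count reflects every increment
performed by the worker, for the same reason the patch gives: the
release store and acquire load pair to order the flag accesses around the
accesses to global_count.]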


