Re: [PATCH v2] CodeSamples/count: change full memory barrier to partial one

Hi Akira,

Thanks for your review! 

I’ll send a patch concerning only count_stat_eventual.c later.

Thanks,
Alan

> On Apr 5, 2023, at 10:08 PM, Akira Yokosawa <akiyks@xxxxxxxxx> wrote:
> 
> Hi Alan,
> 
> On Tue,  4 Apr 2023 12:47:53 -0400, Alan Huang wrote:
>> The original memory barriers in count_stat_eventual.c ensure that
>> the write to global_count happens before the write to stopflag, and
>> that the read from stopflag happens before the later read from
>> global_count.
>> 
>> Thus, smp_load_acquire and smp_store_release will suffice.
> 
> So far, looks good.
> But those smp_mb()s are mentioned in Section 5.2.4.
> Please see inline comments below:
> 
>> 
>> In count_lim_sig.c, only one ordering is required: the write to
>> counter must happen before theft is set to THEFT_READY in
>> add_count()/sub_count()'s fast paths.
> 
> I'm kind of not 100% sure of your changes in count_lim_sig.c.
> I'll comment in another mail.  That might take a while.
> 
>>                                 Therefore, a partial memory barrier
>> will suffice.
>> 
>> Signed-off-by: Alan Huang <mmpgouride@xxxxxxxxx>
>> ---
>> CodeSamples/count/count_lim_sig.c       | 10 +++-------
>> CodeSamples/count/count_stat_eventual.c |  6 ++----
>> count/count.tex                         |  9 ++++-----
>> 3 files changed, 9 insertions(+), 16 deletions(-)
>> 
> 
> [...]
> 
>> diff --git a/CodeSamples/count/count_stat_eventual.c b/CodeSamples/count/count_stat_eventual.c
>> index 967644de..7157ee0e 100644
>> --- a/CodeSamples/count/count_stat_eventual.c
>> +++ b/CodeSamples/count/count_stat_eventual.c
>> @@ -51,8 +51,7 @@ void *eventual(void *arg)				//\lnlbl{eventual:b}
>> 		WRITE_ONCE(global_count, sum);
>> 		poll(NULL, 0, 1);
>> 		if (READ_ONCE(stopflag)) {
>> -			smp_mb();
>> -			WRITE_ONCE(stopflag, stopflag + 1);
>> +			smp_store_release(&stopflag, stopflag + 1);
>> 		}
>> 	}
>> 	return NULL;
>> @@ -73,9 +72,8 @@ void count_init(void)					//\lnlbl{init:b}
>> void count_cleanup(void)				//\lnlbl{cleanup:b}
>> {
>> 	WRITE_ONCE(stopflag, 1);
>> -	while (READ_ONCE(stopflag) < 3)
>> +	while (smp_load_acquire(&stopflag) < 3)
>> 		poll(NULL, 0, 1);
>> -	smp_mb();
>> }							//\lnlbl{cleanup:e}
>> //\end{snippet}
> 
> So, you'd also need to update a paragraph in Section 5.2.4,
> which currently reads:
> 
>    The count_cleanup() function on lines 48--54 coordinates termination.
>    The calls to smp_mb() here and in eventual() ensure that all updates to
>    global_count are visible to code following the call to count_cleanup().
> 
> And I think splitting the changes in count_stat_eventual.c and
> count_lim_sig.c into individual patches would ease the review on
> our side.
> 
> So, can you send a patch concerning count_stat_eventual.c
> first?
> 
> I'll comment on count_lim_sig.c later, that is, if Paul won't beat
> me to it.
> 
>        Thanks, Akira
> 
> 




