Re: Memory model release/acquire mode interactions of relaxed atomic operations

On 04/05/17 14:49, Jonathan Wakely wrote:
> On 4 May 2017 at 13:11, Toebs Douglass wrote:
>> This being true if and only if the atomic load/store functions are used,
>> right?  And I suspect those functions are only issuing memory barriers?
>> They do not use atomic operations themselves.  If this is so, then all
>> of this is true, but there is within this absolutely no guarantee that
>> any store on any core will ever *actually be seen* by any other core.
>> If they *are* seen, *they will be in the correct order*, but there is
>> no guarantee they *will* be seen.
> 
> The C++ standard says:
> 
>  An implementation should ensure that the last value (in modification
> order) assigned by an atomic or
> synchronization operation will become visible to all other threads in
> a finite period of time.
> 
> "Should" in ISO-speak is strong encouragement, but not a guarantee.
> However, it is expected that on all but the most esoteric hardware
> platforms it will be true.
> 
> I don't see equivalent wording in the C11 standard though.

As far as I know (which is *not* very far *at all*) no current processor
offers a mechanism by which it can be told to flush its store
buffers - the only way I think this can be made to occur is by
issuing an atomic store, since this forces a store to memory, which in
turn requires earlier store barriers to be honoured, and so earlier
stores to complete.

This would mean the compiler would in theory need to keep track of how
long it has been since a store was issued, check whether it had yet
become visible, and then issue an atomic write.  I am certain this
does not happen.

In other words, this behaviour - being only a SHOULD in the spec - is
left to the processor, and none of them offer hard guarantees.
Intel say something like "in a reasonable time".

I have on the back-burner a little bit of test code to see if I can
break hazard pointers on this issue.  The normal hazard pointer
implementation uses memory barriers only, and *depends upon*
"reasonable time" for correct execution.




