On Fri, Apr 26, 2019 at 07:06:10AM -0400, Elad Lahav wrote:
> Hello,
>
> Section 3.1.3 contains the following statement:
>
> "Fortunately, CPU designers have focused heavily on atomic operations,
> so that as of early 2014 they have greatly reduced their overhead."
>
> My experience with very recent hardware is that the *relative* cost of
> atomic operations has actually increased significantly. It seems that
> hardware designers, in their attempt to optimize performance for
> certain workloads, have produced hardware in which the "anomalous"
> conditions (atomic operations, cache misses, barriers, exceptions)
> incur much higher penalties than in the past. I assume that this is
> primarily the result of more intensive speculation and prediction.

Some of the early 2000s systems had -really- slow atomic operations,
but I have not kept close track since 2014.

How would you suggest that this be measured? Do you have access to a
range of hardware that would permit us to include something more
definite and measurable?

							Thanx, Paul
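
For concreteness, one way the uncontended cost might be measured is a
single-threaded loop comparing a plain increment against an atomic
fetch-and-add on the same variable. The sketch below is illustrative
only: the GCC/Clang __atomic builtins, clock_gettime(), and the sample
count are assumptions rather than anything specified in the thread
above, and it says nothing about the contended (cross-CPU) cost that
Elad describes, which would require multiple threads.

/* Minimal sketch of an uncontended atomic-overhead microbenchmark.
 * Assumptions: GCC/Clang __atomic builtins and POSIX clock_gettime().
 * Build with, e.g.: gcc -O2 atomicbench.c -o atomicbench
 */
#include <stdio.h>
#include <time.h>

#define NSAMPLES 100000000UL

/* volatile keeps -O2 from folding the plain-increment loop away. */
static volatile unsigned long counter;

static double elapsed_ns(const struct timespec *a,
			 const struct timespec *b)
{
	return (b->tv_sec - a->tv_sec) * 1e9 +
	       (b->tv_nsec - a->tv_nsec);
}

int main(void)
{
	struct timespec t0, t1;
	unsigned long i;

	/* Baseline: plain (non-atomic) increments. */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NSAMPLES; i++)
		counter++;
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("plain  increment: %.2f ns/op\n",
	       elapsed_ns(&t0, &t1) / NSAMPLES);

	/* Atomic fetch-and-add on the same location. */
	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < NSAMPLES; i++)
		__atomic_fetch_add(&counter, 1, __ATOMIC_RELAXED);
	clock_gettime(CLOCK_MONOTONIC, &t1);
	printf("atomic increment: %.2f ns/op\n",
	       elapsed_ns(&t0, &t1) / NSAMPLES);

	return 0;
}

On typical x86 hardware the second loop should report the cost of a
LOCK XADD instruction; extending the sketch with pthreads so that
several CPUs update the same counter would expose the contended cost,
which is where recent hardware generations might plausibly show the
increased relative penalty described above.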