On Thu, Feb 03, 2022 at 10:10:01PM +0100, Johann Klähn wrote:
> Signed-off-by: Johann Klähn <johann@xxxxxxxxxx>

Good catches, queued and pushed, thank you!

							Thanx, Paul

> ---
>  memorder/memorder.tex | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/memorder/memorder.tex b/memorder/memorder.tex
> index 865eff95..9792099f 100644
> --- a/memorder/memorder.tex
> +++ b/memorder/memorder.tex
> @@ -2169,7 +2169,7 @@ from a particular store, then, courtesy of the finite speed of light and
>  the non-zero size of modern computing systems, the store absolutely has
>  to have executed at an earlier time than did the load.
>  This means that carefully constructed programs can rely on the
> -passage of time itself as an memory-ordering operation.
> +passage of time itself as a memory-ordering operation.
>  
>  \begin{figure}
>  	\centering
> @@ -2588,7 +2588,7 @@ if (p == &reserve_int) {
>  \item	Although it is permissible to compute offsets from a
>  	pointer, these offsets must not result in total cancellation.
>  	For example, given a \co{char} pointer \co{cp},
> -	\co{cp-(uintptr_t)cp)} will cancel and can allow the compiler
> +	\co{cp-(uintptr_t)cp} will cancel and can allow the compiler
>  	to break your dependency chain.
>  	On the other hand, canceling offset values with each other
>  	is perfectly safe and legal.
> @@ -2753,7 +2753,7 @@ void updater(void)			\lnlbl[upd:b]
>  {
>  	struct foo *p;
>  
> -	p = malloc(sizeo(*p));			\lnlbl[upd:alloc]
> +	p = malloc(sizeof(*p));			\lnlbl[upd:alloc]
>  	BUG_ON(!p);				\lnlbl[upd:bug]
>  	p->a = 42;				\lnlbl[upd:init:a]
>  	p->b = 43;
> @@ -2803,7 +2803,7 @@ bitterly on \clnref{bug} if none is available.
>  \Clnrefrange{init:a}{init:c} initialize the newly allocated structure,
>  and then \clnref{assign1} assigns the pointer to \co{gp1}.
>  \Clnref{upd:b,upd:c} then update two of the structure's fields, and does
> -so \co{after} \clnref{assign1} has made those fields visible to readers.
> +so \emph{after} \clnref{assign1} has made those fields visible to readers.
>  Please note that unsynchronized update of reader-visible fields
>  often constitutes a bug.
>  Although there are legitimate use cases doing just this, such use cases
> @@ -3032,7 +3032,7 @@ if (q || 1 > 0)
>  
>  Because the first condition cannot fault and the second condition is
>  always true, the compiler can transform this example as following,
> -defeating control dependency:
> +defeating the control dependency:
>  
>  \begin{VerbatimN}
>  q = READ_ONCE(x);
> @@ -3238,7 +3238,7 @@ These facts are what enables the lockless fastpaths in
>  \cref{lst:SMPdesign:Allocator-Cache Allocator Function,%
>  lst:SMPdesign:Allocator-Cache Free Function},
>  respectively.
> -However, this also why the developer is responsible for providing
> +However, this is also why the developer is responsible for providing
>  appropriate ordering (for example, by using \co{smp_store_release()})
>  when publishing a pointer to a newly allocated block of memory.
>  After all, in the CPU-local case, the allocator has not necessarily
> @@ -3307,7 +3307,7 @@ As it turns out, quite a bit:
>  	Is some other CPU or thread not holding that lock guaranteed to
>  	see A and B in order?
>  \item	As above, but with the lock reacquisition carried out by some
> -	other CPU or threads?
> +	other CPU or thread?
>  \item	As above, but with the lock reacquisition being some other lock?
>  \item	What ordering guarantees are provided by \co{spin_is_locked()}?
>  \end{enumerate}
> @@ -3571,7 +3571,7 @@ One reason is modeling performance, as shown in
>  on
>  \cpageref{tab:formal:Locking: Modeling vs. Emulation Time (s)}.
>  Directly modeling locking in general is orders of magnitude faster
> -than modeling even a trivial implementation.
> +than emulating even a trivial implementation.
>  This should be no surprise, given the combinatorial explosion experienced
>  by present-day formal-verification tools with increases in the number of
>  memory accesses executed by the code being modeled.
> -- 
> 2.34.1
> 
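
For anyone reading along without the book handy: the updater()/gp1 hunk and the
smp_store_release() remark above both concern the usual publish-after-initialize
pattern.  Below is a minimal user-space sketch of that pattern, not part of the
patch or of perfbook, and using C11 release/acquire atomics and pthreads in place
of the kernel's smp_store_release() and rcu_dereference(); the struct, field
values, and thread scaffolding are illustrative only.

	/*
	 * User-space sketch of release publication: the updater fully
	 * initializes the structure, then publishes it with a release
	 * store; the reader's acquire load pairs with that store, so
	 * any reader that sees the pointer also sees a == 42, b == 43.
	 */
	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct foo {
		int a;
		int b;
	};

	static _Atomic(struct foo *) gp1;	/* shared pointer, initially NULL */

	static void *updater(void *arg)
	{
		struct foo *p = malloc(sizeof(*p));

		if (!p)
			abort();
		p->a = 42;		/* initialize before publication */
		p->b = 43;
		/* Release store: initialization is ordered before publication. */
		atomic_store_explicit(&gp1, p, memory_order_release);
		return NULL;
	}

	static void *reader(void *arg)
	{
		struct foo *p;

		/* Acquire load pairs with the release store above. */
		do {
			p = atomic_load_explicit(&gp1, memory_order_acquire);
		} while (!p);
		printf("a=%d b=%d\n", p->a, p->b);	/* always 42 and 43 */
		return NULL;
	}

	int main(void)
	{
		pthread_t u, r;

		pthread_create(&r, NULL, reader, NULL);
		pthread_create(&u, NULL, updater, NULL);
		pthread_join(u, NULL);
		pthread_join(r, NULL);
		return 0;
	}

The acquire load on the reader side is a conservative stand-in; the kernel code
the book discusses gets the same guarantee more cheaply from the address
dependency through rcu_dereference().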