Fix several typos across the book. Most of them were found using
codespell (https://github.com/lucasdemarchi/codespell).

Signed-off-by: Tobias Klauser <tklauser@xxxxxxxxxx>
---
 SMPdesign/SMPdesign.tex | 2 +-
 SMPdesign/beyond.tex    | 2 +-
 count/count.tex         | 2 +-
 defer/rcuapi.tex        | 2 +-
 howto/howto.tex         | 2 +-
 locking/locking.tex     | 4 ++--
 owned/owned.tex         | 2 +-
 rt/rt.tex               | 4 ++--
 8 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/SMPdesign/SMPdesign.tex b/SMPdesign/SMPdesign.tex
index 2662826d1540..81db94d1910b 100644
--- a/SMPdesign/SMPdesign.tex
+++ b/SMPdesign/SMPdesign.tex
@@ -737,7 +737,7 @@ or the parallel-fastpath approach discussed in the next section.
 	size of the partitions.
 	For example, if you split a 64-by-64 matrix multiply across
 	64 threads, each thread gets only 64 floating-point multiplies.
-	The cost of a floating-point multiply is miniscule compared to
+	The cost of a floating-point multiply is minuscule compared to
 	the overhead of thread creation.
 
 	Moral: If you have a parallel program with variable input,
diff --git a/SMPdesign/beyond.tex b/SMPdesign/beyond.tex
index a54f43ceead9..0f86303df3a2 100644
--- a/SMPdesign/beyond.tex
+++ b/SMPdesign/beyond.tex
@@ -21,7 +21,7 @@ But can we do better?
 To answer this question, let us examine the solution of
 labyrinths and mazes.
 Of course, labyrinths and mazes have been objects of fascination for
-millenia~\cite{WikipediaLabyrinth},
+millennia~\cite{WikipediaLabyrinth},
 so it should come as no surprise that they are generated and solved
 using computers, including biological
 computers~\cite{AndrewAdamatzky2011SlimeMold},
diff --git a/count/count.tex b/count/count.tex
index 0eb95c3e4c0b..352a8880fd04 100644
--- a/count/count.tex
+++ b/count/count.tex
@@ -3692,7 +3692,7 @@ Of course, if you are using special-purpose hardware such as
 digital signal processors (DSPs), field-programmable gate arrays
 (FPGAs), or general-purpose graphical processing units (GPGPUs),
 you may need to pay close attention to the ``Interacting With Hardware'' bubble
-thoughout the design process.
+throughout the design process.
 For example, the structure of a GPGPU's hardware threads and memory
 connectivity might richly reward very careful partitioning and batching
 design decisions.
diff --git a/defer/rcuapi.tex b/defer/rcuapi.tex
index 7edc80741431..830019ec7c1a 100644
--- a/defer/rcuapi.tex
+++ b/defer/rcuapi.tex
@@ -93,7 +93,7 @@ Read side constraints &
 	No bottom-half (BH) enabling &
 	No blocking &
 	Only preemption and lock acquisition &
-	No \co{synchronize_srcu()} wtih same \co{srcu_struct} \\
+	No \co{synchronize_srcu()} with same \co{srcu_struct} \\
 \hline
 Read side overhead &
 	Preempt disable/enable (free on non-PREEMPT) &
diff --git a/howto/howto.tex b/howto/howto.tex
index 753a89f358c1..b48ce3f4f2aa 100644
--- a/howto/howto.tex
+++ b/howto/howto.tex
@@ -289,7 +289,7 @@ Fortunately, there are many alternatives available to you:
 \item	If you want to work with Linux-kernel device drivers, then
 	Corbet's, Rubini's, and Kroah-Hartman's
 	``Linux Device Drivers''~\cite{CorbetRubiniKroahHartman}
-	is indespensible, as is the Linux Weekly News web site
+	is indispensable, as is the Linux Weekly News web site
 	(\url{http://lwn.net/}).
 	There is a large number of books and resources on the
 	more general topic of Linux kernel internals.
diff --git a/locking/locking.tex b/locking/locking.tex
index 35d8d9fda373..9cf945ff7b72 100644
--- a/locking/locking.tex
+++ b/locking/locking.tex
@@ -505,7 +505,7 @@ prevents hangs due to lost wakeups.
 } \QuickQuizEnd
 
 In short, if you find yourself exporting an API with a pointer to a
-lock as an argument or the return value, do youself a favor and carefully
+lock as an argument or the return value, do yourself a favor and carefully
 reconsider your API design.
 It might well be the right thing to do, but experience indicates that
 this is unlikely.
@@ -1553,7 +1553,7 @@ concurrent callers, we need at most one of them to actually invoke
 painlessly as possible) give up and leave.
 
 To this end, each pass through the loop spanning lines~7-15 attempts
-to advance up one level in the \co{rcu_node} hierarcy.
+to advance up one level in the \co{rcu_node} hierarchy.
 If the \co{gp_flags} variable is already set (line~8) or if the attempt
 to acquire the current \co{rcu_node} structure's \co{->fqslock} is
 unsuccessful (line~9), then local variable \co{ret} is set to 1.
diff --git a/owned/owned.tex b/owned/owned.tex
index b37f0c49f58d..81e0d236e168 100644
--- a/owned/owned.tex
+++ b/owned/owned.tex
@@ -316,7 +316,7 @@ from \co{read_count()}.
 	One approach is for \co{read_count()} to add the value
 	of its own per-thread variable.
 	This maintains full ownership and performance, but only
-	a slight improvement in accuracy, particulary on systems
+	a slight improvement in accuracy, particularly on systems
 	with very large numbers of threads.
 
 	Another approach is for \co{read_count()} to use function
diff --git a/rt/rt.tex b/rt/rt.tex
index b35bd1daf156..21a53a4514b8 100644
--- a/rt/rt.tex
+++ b/rt/rt.tex
@@ -735,7 +735,7 @@ Finally, the bottom row shows a diagram of the Linux kernel with
 the -rt patchset applied, maximizing real-time capabilities.
 Functionality from the -rt patchset is added to mainline, hence the
 increasing capabilities of the mainline Linux kernel over time.
-Neverthless, the most demanding real-time applications continue to use
+Nevertheless, the most demanding real-time applications continue to use
 the -rt patchset.
 
 The non-preemptible kernel shown at the top of
@@ -1407,7 +1407,7 @@ workloads and fix any real-time bugs.\footnote{
 A sixth source of OS jitter is provided by some in-kernel full-system
 synchronization algorithms, perhaps most notably the global TLB-flush
 algorithm.
-This can be avoided by avoiding memory-unmapping operations, and expecially
+This can be avoided by avoiding memory-unmapping operations, and especially
 avoiding unmapping operations within the kernel.
 As of early 2015, the way to avoid in-kernel unmapping operations is to
 avoid unloading kernel modules.
-- 
2.9.0