[PATCH] Documentation: fix spelling errors in vm

These spelling errors were found with a spell-checking tool.

Signed-off-by: Wanlong Gao <gaowanlong@xxxxxxxxxxxxxx>
---
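For reference, a sweep like this one can be approximated with a short
script.  The sketch below is illustrative rather than the actual tool
used; the word list simply mirrors the fixes in this patch, and the
Documentation/vm path assumes it is run from the top of a kernel
source tree:

#!/usr/bin/env python3
# Minimal sketch: scan Documentation/vm for a fixed list of known
# misspellings.  The word list and path are illustrative assumptions
# taken from this patch; a real sweep would use a general-purpose
# spell checker rather than a hand-maintained list.
import pathlib

# typo -> correction, mirroring the fixes in this patch
TYPOS = {
    "watemark": "watermark",
    "cleanache": "cleancache",
    "focusses": "focuses",
    "expection": "expectation",
    "emluation": "emulation",
    "liklihood": "likelihood",
}

def scan(root: str) -> None:
    """Print file, line number, and suggestion for each known typo."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue
        for lineno, line in enumerate(lines, start=1):
            for bad, good in TYPOS.items():
                if bad in line:
                    print(f"{path}:{lineno}: {bad} -> {good}")

if __name__ == "__main__":
    scan("Documentation/vm")
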
 Documentation/vm/balance        | 2 +-
 Documentation/vm/cleancache.txt | 4 ++--
 Documentation/vm/hwpoison.txt   | 4 ++--
 Documentation/vm/numa           | 2 +-
 Documentation/vm/slub.txt       | 2 +-
 5 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/Documentation/vm/balance b/Documentation/vm/balance
index c46e68c..ea9ab01 100644
--- a/Documentation/vm/balance
+++ b/Documentation/vm/balance
@@ -75,7 +75,7 @@ Page stealing from process memory and shm is done if stealing the page would
 alleviate memory pressure on any zone in the page's node that has fallen below
 its watermark.
 
-watemark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
+watermark[WMARK_MIN/WMARK_LOW/WMARK_HIGH]/low_on_memory/zone_wake_kswapd: These
 are per-zone fields, used to determine when a zone needs to be balanced. When
 the number of pages falls below watermark[WMARK_MIN], the hysteric field
 low_on_memory gets set. This stays set till the number of free pages becomes
diff --git a/Documentation/vm/cleancache.txt b/Documentation/vm/cleancache.txt
index 142fbb0..84c5a0d 100644
--- a/Documentation/vm/cleancache.txt
+++ b/Documentation/vm/cleancache.txt
@@ -249,7 +249,7 @@ gets removed/truncated.  So if cleancache used the inode kva,
 there would be potential coherency issues if/when the inode
 kva is reused for a different file.  Alternately, if cleancache
 invalidated the pages when the inode kva was freed, much of the value
-of cleancache would be lost because the cache of pages in cleanache
+of cleancache would be lost because the cache of pages in cleancache
 is potentially much larger than the kernel pagecache and is most
 useful if the pages survive inode cache removal.
 
@@ -264,7 +264,7 @@ global variable allows cleancache to be enabled by default at compile
 time, but have insignificant performance impact when cleancache remains
 disabled at runtime.
 
-9) Does cleanache work with KVM?
+9) Does cleancache work with KVM?
 
 The memory model of KVM is sufficiently different that a cleancache
 backend may have less value for KVM.  This remains to be tested,
diff --git a/Documentation/vm/hwpoison.txt b/Documentation/vm/hwpoison.txt
index 5500684..8dd03da 100644
--- a/Documentation/vm/hwpoison.txt
+++ b/Documentation/vm/hwpoison.txt
@@ -12,7 +12,7 @@ To quote the overview comment:
  * hardware as being corrupted usually due to a 2bit ECC memory or cache
  * failure.
  *
- * This focusses on pages detected as corrupted in the background.
+ * This focuses on pages detected as corrupted in the background.
  * When the current CPU tries to consume corruption the currently
  * running process can just be killed directly instead. This implies
  * that if the error cannot be handled for some reason it's safe to
@@ -43,7 +43,7 @@ of applications. KVM support requires a recent qemu-kvm release.
 For the KVM use there was need for a new signal type so that
 KVM can inject the machine check into the guest with the proper
 address. This in theory allows other applications to handle
-memory failures too. The expection is that near all applications
+memory failures too. The expectation is that near all applications
 won't do that, but some very specialized ones might.
 
 ---
diff --git a/Documentation/vm/numa b/Documentation/vm/numa
index ade0127..142d78c 100644
--- a/Documentation/vm/numa
+++ b/Documentation/vm/numa
@@ -60,7 +60,7 @@ In addition, for some architectures, again x86 is an example, Linux supports
 the emulation of additional nodes.  For NUMA emulation, linux will carve up
 the existing nodes--or the system memory for non-NUMA platforms--into multiple
 nodes.  Each emulated node will manage a fraction of the underlying cells'
-physical memory.  NUMA emluation is useful for testing NUMA kernel and
+physical memory.  NUMA emulation is useful for testing NUMA kernel and
 application features on non-NUMA platforms, and as a sort of memory resource
 management mechanism when used together with cpusets.
 [see Documentation/cgroups/cpusets.txt]
diff --git a/Documentation/vm/slub.txt b/Documentation/vm/slub.txt
index b0c6d1b..8fb37ca 100644
--- a/Documentation/vm/slub.txt
+++ b/Documentation/vm/slub.txt
@@ -64,7 +64,7 @@ to the dentry cache with
 
 Debugging options may require the minimum possible slab order to increase as
 a result of storing the metadata (for example, caches with PAGE_SIZE object
-sizes).  This has a higher liklihood of resulting in slab allocation errors
+sizes).  This has a higher likelihood of resulting in slab allocation errors
 in low memory situations or if there's high fragmentation of memory.  To
 switch off debugging for such caches by default, use
 
-- 
1.7.12.1.401.gb5d156c
