[PATCH 3/3] iov_iter: Move internal documentation

Document the interfaces we want users to call (i.e., copy_mc_to_iter()
and copy_from_iter_flushcache()), not the internal interfaces.

Signed-off-by: Matthew Wilcox (Oracle) <willy@xxxxxxxxxxxxx>
---
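Not part of the patch itself: below is a minimal, hypothetical sketch of
how a dax-style consumer might call the two wrappers documented by this
patch. The pmem_example_write()/pmem_example_read() helpers and their
arguments are illustrative only, not real driver code.

	#include <linux/uio.h>

	/*
	 * Write @bytes from @from into persistent memory at @pmem_addr,
	 * ensuring the stores are flushed through the CPU cache for all
	 * iterator types. Returns the number of bytes actually copied.
	 */
	static size_t pmem_example_write(void *pmem_addr, size_t bytes,
					 struct iov_iter *from)
	{
		return copy_from_iter_flushcache(pmem_addr, bytes, from);
	}

	/*
	 * Read @bytes from persistent memory at @pmem_addr into @to.
	 * A short return can indicate a machine check (poison) on the
	 * source; the caller is expected to handle the residue.
	 */
	static size_t pmem_example_read(void *pmem_addr, size_t bytes,
					struct iov_iter *to)
	{
		return copy_mc_to_iter(pmem_addr, bytes, to);
	}
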
 include/linux/uio.h | 41 +++++++++++++++++++++++++++++++++++++++++
 lib/iov_iter.c      | 41 -----------------------------------------
 2 files changed, 41 insertions(+), 41 deletions(-)

diff --git a/include/linux/uio.h b/include/linux/uio.h
index 9fbce8c92545..4e5ad9053c97 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -244,6 +244,22 @@ size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i);
 #define _copy_mc_to_iter _copy_to_iter
 #endif
 
+/**
+ * copy_from_iter_flushcache() - Write destination through CPU cache.
+ * @addr: destination kernel address
+ * @bytes: total transfer length
+ * @i: source iterator
+ *
+ * The pmem driver arranges for filesystem-dax to use this facility via
+ * dax_copy_from_iter() for ensuring that writes to persistent memory
+ * are flushed through the CPU cache. It is differentiated from
+ * _copy_from_iter_nocache() in that it guarantees all data is flushed for
+ * all iterator types. The _copy_from_iter_nocache() only attempts to
+ * bypass the cache for the ITER_IOVEC case, and on some archs may use
+ * instructions that strand dirty-data in the cache.
+ *
+ * Return: Number of bytes copied (may be %0).
+ */
 static __always_inline __must_check
 size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
 {
@@ -253,6 +269,31 @@ size_t copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
 		return _copy_from_iter_flushcache(addr, bytes, i);
 }
 
+/**
+ * copy_mc_to_iter() - Copy to iter with source memory error exception handling.
+ * @addr: source kernel address
+ * @bytes: total transfer length
+ * @i: destination iterator
+ *
+ * The pmem driver deploys this for the dax operation
+ * (dax_copy_to_iter()) for dax reads (bypass page-cache and the
+ * block-layer). Upon #MC, read(2) aborts and returns EIO or the bytes
+ * successfully copied.
+ *
+ * The main differences between this and typical _copy_to_iter() are:
+ *
+ * * Typical tail/residue handling after a fault retries the copy
+ *   byte-by-byte until the fault happens again. Re-triggering machine
+ *   checks is potentially fatal so the implementation uses source
+ *   alignment and poison alignment assumptions to avoid re-triggering
+ *   hardware exceptions.
+ *
+ * * ITER_KVEC, ITER_PIPE, and ITER_BVEC can return short copies.
+ *   Compare to copy_to_iter() where only ITER_IOVEC attempts might return
+ *   a short copy.
+ *
+ * Return: Number of bytes copied (may be %0).
+ */
 static __always_inline __must_check
 size_t copy_mc_to_iter(void *addr, size_t bytes, struct iov_iter *i)
 {
diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index 66a740e6e153..03b0e1dac27e 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -715,31 +715,6 @@ static size_t copy_mc_pipe_to_iter(const void *addr, size_t bytes,
 	return xfer;
 }
 
-/**
- * _copy_mc_to_iter - copy to iter with source memory error exception handling
- * @addr: source kernel address
- * @bytes: total transfer length
- * @i: destination iterator
- *
- * The pmem driver deploys this for the dax operation
- * (dax_copy_to_iter()) for dax reads (bypass page-cache and the
- * block-layer). Upon #MC read(2) aborts and returns EIO or the bytes
- * successfully copied.
- *
- * The main differences between this and typical _copy_to_iter().
- *
- * * Typical tail/residue handling after a fault retries the copy
- *   byte-by-byte until the fault happens again. Re-triggering machine
- *   checks is potentially fatal so the implementation uses source
- *   alignment and poison alignment assumptions to avoid re-triggering
- *   hardware exceptions.
- *
- * * ITER_KVEC, ITER_PIPE, and ITER_BVEC can return short copies.
- *   Compare to copy_to_iter() where only ITER_IOVEC attempts might return
- *   a short copy.
- *
- * Return: number of bytes copied (may be %0)
- */
 size_t _copy_mc_to_iter(const void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (unlikely(iov_iter_is_pipe(i)))
@@ -789,22 +764,6 @@ size_t _copy_from_iter_nocache(void *addr, size_t bytes, struct iov_iter *i)
 EXPORT_SYMBOL(_copy_from_iter_nocache);
 
 #ifdef CONFIG_ARCH_HAS_UACCESS_FLUSHCACHE
-/**
- * _copy_from_iter_flushcache - write destination through cpu cache
- * @addr: destination kernel address
- * @bytes: total transfer length
- * @i: source iterator
- *
- * The pmem driver arranges for filesystem-dax to use this facility via
- * dax_copy_from_iter() for ensuring that writes to persistent memory
- * are flushed through the CPU cache. It is differentiated from
- * _copy_from_iter_nocache() in that guarantees all data is flushed for
- * all iterator types. The _copy_from_iter_nocache() only attempts to
- * bypass the cache for the ITER_IOVEC case, and on some archs may use
- * instructions that strand dirty-data in the cache.
- *
- * Return: number of bytes copied (may be %0)
- */
 size_t _copy_from_iter_flushcache(void *addr, size_t bytes, struct iov_iter *i)
 {
 	if (unlikely(iov_iter_is_pipe(i))) {
-- 
2.33.0