Re: is __copy_to_user_inatomic really atomic?

On 2/22/07, Arjan van de Ven <arjan@xxxxxxxxxxxxx> wrote:
On Thu, 2007-02-22 at 18:02 +0530, Aneesh Kumar wrote:
> On 2/22/07, Arjan van de Ven <arjan@xxxxxxxxxxxxx> wrote:
> > On Wed, 2007-02-21 at 12:34 +0530, Aneesh Kumar wrote:
> > > Hi,
> > >
> > > Are __copy_from_user_inatomic and __copy_to_user_inatomic really atomic?
> >
> > if you call the _inatomic version, you, the caller, are supposed to
> > have pinned the memory first, to make sure it's not swapped out. If
> > it is anyway, you'll get an error return code; no attempt is made to
> > fault the page back in.
> >
> >
>
> I am trying to locate the code for that. As far as I can see, the two
> versions __copy_to_user and __copy_to_user_inatomic don't differ much.
> I also looked at do_page_fault, and I can see it finding the vma. So
> where is it coded that the page will not be faulted back in for the
> _inatomic version?

It can't happen, because the caller pinned the page in the first place.



Okay, so the API implies that the caller has to pin the memory. That
makes sense. How about the patch below?

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxx>
diff --git a/include/asm-i386/uaccess.h b/include/asm-i386/uaccess.h
index 70829ae..e2aa5e0 100644
--- a/include/asm-i386/uaccess.h
+++ b/include/asm-i386/uaccess.h
@@ -397,7 +397,19 @@ unsigned long __must_check __copy_from_user_ll_nocache(void *to,
 unsigned long __must_check __copy_from_user_ll_nocache_nozero(void *to,
 				const void __user *from, unsigned long n);
 
-/*
+/**
+ * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
+ * @to:   Destination address, in user space.
+ * @from: Source address, in kernel space.
+ * @n:    Number of bytes to copy.
+ *
+ * Context: User context only.
+ *
+ * Copy data from kernel space to user space.  Caller must check
+ * the specified block with access_ok() before calling this function.
+ * The caller must also pin the user space pages, so that the copy
+ * cannot trigger a page fault and sleep.
+ *
  * Here we special-case 1, 2 and 4-byte copy_*_user invocations.  On a fault
  * we return the initial request size (1, 2 or 4), as copy_*_user should do.
  * If a store crosses a page boundary and gets a fault, the x86 will not write
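Since the thread never shows a caller, here is a hedged sketch of the pinning pattern Arjan describes, loosely modeled on the write path in mm/filemap.c of kernels from that era. The identifiers (pagefault_disable(), kmap_atomic() with a KM_USER0 slot, __copy_from_user_inatomic()) are from mainline of the time; the surrounding variables and the fallback path are illustrative only, not code from this thread:

```
	/*
	 * Sketch: typical caller-side use of the _inatomic helpers.
	 * "page" is already pinned (e.g. a page-cache page the caller
	 * holds a reference to, or a page obtained via get_user_pages()),
	 * and page faults are disabled, so a non-resident user page makes
	 * the copy return a nonzero remainder instead of sleeping in the
	 * fault handler.
	 */
	pagefault_disable();
	kaddr = kmap_atomic(page, KM_USER0);
	left = __copy_from_user_inatomic(kaddr + offset, buf, bytes);
	kunmap_atomic(kaddr, KM_USER0);
	pagefault_enable();

	if (left) {
		/*
		 * The user page was not resident after all: retry with
		 * the sleeping __copy_from_user() outside the atomic
		 * section, then repeat the atomic attempt.
		 */
	}
```

The same shape applies in the __copy_to_user_inatomic direction; the point is that the atomicity contract is enforced by the caller's setup (pinning plus pagefault_disable()), not by the copy routine itself.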
