During the work on KVM's dirty page logging optimization, we encountered
the need to manipulate bitmaps in user space efficiently. To achieve
this, we introduce a uaccess function for setting a bit in user space,
following Avi's suggestion.

KVM currently uses dirty bitmaps for live migration and VGA. Although we
need to update them from the kernel side, copying the whole bitmap every
time we update the dirty log is a big bottleneck; in particular, our
tests showed that zero-copy bitmap manipulation greatly improves the
responsiveness of GUI interactions.

We also found a similar need in drivers/vhost/vhost.c, in which the
author implemented set_bit_to_user() locally using inefficient
functions: see the TODO at the top of that file.

This kind of need is probably common in the virtualization area, so we
introduce a macro, set_bit_user_non_atomic(), following the
implementation style of x86's uaccess functions.

Note: there is one restriction to this macro: bitmaps must be 64-bit
aligned (see the comment in this patch).

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@xxxxxxxxxxxxx>
Signed-off-by: Fernando Luis Vazquez Cao <fernando@xxxxxxxxxxxxx>
Cc: Avi Kivity <avi@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
---
 arch/x86/include/asm/uaccess.h |   39 +++++++++++++++++++++++++++++++++++++++
 1 files changed, 39 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index abd3e0e..3138e65 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -98,6 +98,45 @@ struct exception_table_entry {
 
 extern int fixup_exception(struct pt_regs *regs);
 
+/**
+ * set_bit_user_non_atomic: - set a bit of a bitmap in user space.
+ * @nr:   Bit offset.
+ * @addr: Base address of a bitmap in user space.
+ *
+ * Context: User context only. This function may sleep.
+ *
+ * This macro sets a bit of a bitmap in user space.
+ *
+ * Restriction: the bitmap pointed to by @addr must be 64-bit aligned:
+ * the kernel accesses the bitmap with its own word length, so bitmaps
+ * allocated by 32-bit processes may cause a fault.
+ *
+ * Returns zero on success, or -EFAULT on error.
+ */
+#define __set_bit_user_non_atomic_asm(nr, addr, err, errret)		\
+	asm volatile("1:	bts %1,%2\n"				\
+		     "2:\n"						\
+		     ".section .fixup,\"ax\"\n"				\
+		     "3:	mov %3,%0\n"				\
+		     "	jmp 2b\n"					\
+		     ".previous\n"					\
+		     _ASM_EXTABLE(1b, 3b)				\
+		     : "=r" (err)					\
+		     : "r" (nr), "m" (__m(addr)), "i" (errret), "0" (err))
+
+#define set_bit_user_non_atomic(nr, addr)				\
+({									\
+	int __ret_sbu;							\
+									\
+	might_fault();							\
+	if (access_ok(VERIFY_WRITE, addr, nr/8 + 1))			\
+		__set_bit_user_non_atomic_asm(nr, addr, __ret_sbu, -EFAULT);\
+	else								\
+		__ret_sbu = -EFAULT;					\
+									\
+	__ret_sbu;							\
+})
+
 /*
  * These are the main single-value transfer routines.  They automatically
  * use the right size if we just have the right pointer type.
-- 
1.7.0.4