On Mon, 2012-04-16 at 17:34 +0200, Oleg Nesterov wrote:
> On 04/16, Peter Zijlstra wrote:
> >
> > Can't we 'optimize' read_opcode() by doing the pagefault_disable() +
> > __copy_from_user_inatomic() optimistically before going down the whole
> > gup()+lock+kmap path?
>
> Unlikely, the task is not current.

Easy enough to test that though.. and that should make the regular path
fast enough, no?

---
 kernel/events/uprobes.c |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 985be4d..7f5d8c5 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -312,6 +312,15 @@ static int read_opcode(struct mm_struct *mm, unsigned long vaddr, uprobe_opcode_
 	void *vaddr_new;
 	int ret;
 
+	if (mm == current->mm) {
+		pagefault_disable();
+		ret = __copy_from_user_inatomic(opcode, (void __user *)vaddr,
+						sizeof(*opcode));
+		pagefault_enable();
+		if (!ret)
+			return 0;
+	}
+
 	ret = get_user_pages(NULL, mm, vaddr, 1, 0, 0, &page, NULL);
 	if (ret <= 0)
 		return ret;
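
For reference, here is a rough sketch of what the whole of read_opcode()
would look like with the fast path applied. The slow-path details
(lock_page()/kmap_atomic()/put_page() and the vaddr_new name) are
reconstructed from memory of kernel/events/uprobes.c of this era, so
treat them as an approximation of the surrounding code rather than the
exact tree contents:

	static int read_opcode(struct mm_struct *mm, unsigned long vaddr,
			       uprobe_opcode_t *opcode)
	{
		struct page *page;
		void *vaddr_new;
		int ret;

		/*
		 * Fast path: only valid when we can read through the
		 * current task's mappings; otherwise fall back below.
		 */
		if (mm == current->mm) {
			pagefault_disable();
			ret = __copy_from_user_inatomic(opcode,
					(void __user *)vaddr,
					sizeof(*opcode));
			pagefault_enable();
			if (!ret)
				return 0;
			/* faulted, take the slow path */
		}

		/* Slow path: pin the page and copy via a kernel mapping. */
		ret = get_user_pages(NULL, mm, vaddr, 1, 0, 0, &page, NULL);
		if (ret <= 0)
			return ret;

		lock_page(page);
		vaddr_new = kmap_atomic(page);
		memcpy(opcode, vaddr_new + (vaddr & ~PAGE_MASK),
		       sizeof(*opcode));
		kunmap_atomic(vaddr_new);
		unlock_page(page);
		put_page(page);

		return 0;
	}

The point being that in the common case (probing a task's own mm from
its context) we never take the gup()+lock+kmap path at all, and the
slow path only runs when the page is not resident or mm is foreign.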