On Mon, 15 Jan 2024 at 19:44, Alexander Potapenko <glider@xxxxxxxxxx> wrote:
>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxx>
> Cc: Marco Elver <elver@xxxxxxxxxx>
> Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> Cc: kasan-dev@xxxxxxxxxxxxxxxx
> Cc: Ilya Leoshkevich <iii@xxxxxxxxxxxxx>
> Cc: Nicholas Miehlbradt <nicholas@xxxxxxxxxxxxx>
>
> Hi folks,
>
> (adding KMSAN reviewers and IBM people who are currently porting KMSAN to
> other architectures, plus Paul for his opinion on refactoring RCU)
>
> This patch broke x86 KMSAN in a subtle way.
>
> For every memory access in code instrumented by KMSAN we call
> kmsan_get_metadata() to obtain the metadata for the memory being accessed.
> For virtual memory the metadata pointers are stored in the corresponding
> `struct page`, therefore we need to call virt_to_page() to get them.
>
> According to the comment in arch/x86/include/asm/page.h,
> virt_to_page(kaddr) returns a valid pointer iff virt_addr_valid(kaddr) is
> true, so KMSAN needs to call virt_addr_valid() as well.
>
> To avoid recursion, kmsan_get_metadata() must not call instrumented code,
> therefore ./arch/x86/include/asm/kmsan.h forks parts of
> arch/x86/mm/physaddr.c to check whether a virtual address is valid or not.
>
> But the introduction of rcu_read_lock() to pfn_valid() added instrumented
> RCU API calls to virt_to_page_or_null(), which is called by
> kmsan_get_metadata(), so there is an infinite recursion now. I do not
> think it is correct to stop that recursion by doing
> kmsan_enter_runtime()/kmsan_exit_runtime() in kmsan_get_metadata(): that
> would prevent instrumented functions called from within the runtime from
> tracking the shadow values, which might introduce false positives.
>
> I am currently looking into inlining __rcu_read_lock()/__rcu_read_unlock()
> into KMSAN code to prevent it from being instrumented, but that might
> require factoring out parts of kernel/rcu/tree_plugin.h into a non-private
> header. Do you think this is feasible?

__rcu_read_lock/unlock() is only outlined in PREEMPT_RCU. Not sure that
helps.

Otherwise, there is rcu_read_lock_sched_notrace(), which does the bare
minimum and is static inline. Does that help? (Untested sketch at the end
of this mail.)

> Another option is to cut some corners in the code calling virt_to_page().
> First, my observation is that virt_addr_valid() is quite rare in kernel
> code, i.e. not all calls to virt_to_page() are covered by it. Second,
> every memory access to KMSAN metadata residing in
> virt_to_page(kaddr)->shadow always accompanies an access to `kaddr`
> itself, so if there is a race on a PFN then the access to `kaddr` will
> probably also trigger a fault. Third, KMSAN metadata accesses are
> inherently non-atomic, and even if we ensure pfn_valid() returns a
> consistent value for a single memory access, calling it twice may already
> return different results.
>
> Considering the above, how bad would it be to drop synchronization for
> KMSAN's version of pfn_valid() called from kmsan_virt_addr_valid()?
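
To expand on the rcu_read_lock_sched_notrace() idea above:
rcu_read_lock_sched_notrace() and its unlock counterpart
(include/linux/rcupdate.h) are notrace static inlines that only twiddle
the preempt count, so nothing out-of-line or instrumented is called:

static inline notrace void rcu_read_lock_sched_notrace(void)
{
	preempt_disable_notrace();
	__acquire(RCU_SCHED);
}

static inline notrace void rcu_read_unlock_sched_notrace(void)
{
	__release(RCU_SCHED);
	preempt_enable_notrace();
}

A completely untested sketch of how KMSAN could use them -- a private
copy of the generic SPARSEMEM pfn_valid() (hypothetical name
kmsan_pfn_valid()) that kmsan_virt_addr_valid() would call instead of
the generic one:

/*
 * Untested sketch: mirrors the generic SPARSEMEM pfn_valid(), but takes
 * the RCU-sched read lock via the notrace helpers so that nothing
 * instrumented is called from the KMSAN runtime.
 */
static inline bool kmsan_pfn_valid(unsigned long pfn)
{
	struct mem_section *ms;
	bool ret;

	if (PHYS_PFN(PFN_PHYS(pfn)) != pfn)
		return false;
	if (pfn_to_section_nr(pfn) >= NR_MEM_SECTIONS)
		return false;

	ms = __pfn_to_section(ms ? pfn : pfn);

	/*
	 * Since the RCU flavors were consolidated, a preempt-disabled
	 * region is a full RCU read-side critical section, so this should
	 * be enough to keep the section memmap around.
	 */
	rcu_read_lock_sched_notrace();
	ret = valid_section(ms) &&
	      (early_section(ms) || pfn_section_valid(ms, pfn));
	rcu_read_unlock_sched_notrace();

	return ret;
}

Whether that is nicer than just dropping the synchronization entirely,
as you suggest at the end, I don't know -- it at least keeps the check
consistent with what the generic pfn_valid() does.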