> On Sep 12, 2018, at 8:30 AM, Rik van Riel <riel@xxxxxxxxxxx> wrote:
>
> On Wed, 2018-09-12 at 08:20 -0700, Andy Lutomirski wrote:
>>>
>>> --- a/arch/x86/mm/pkeys.c
>>> +++ b/arch/x86/mm/pkeys.c
>>> @@ -18,6 +18,20 @@
>>>
>>>  #include <asm/cpufeature.h>             /* boot_cpu_has, ... */
>>>  #include <asm/mmu_context.h>            /* vma_pkey() */
>>> +#include <asm/fpu/internal.h>
>>> +
>>> +void write_pkru(u32 pkru)
>>> +{
>>> +	if (!boot_cpu_has(X86_FEATURE_OSPKE))
>>> +		return;
>>> +
>>> +	current->thread.fpu.pkru = pkru;
>>> +
>>
>> I thought that the offset of PKRU in the xstate was fixed after boot.
>
> You are right, it is.  However, that offset would need
> to be stored somewhere, and the value read every time
> we wanted to read or store the PKRU value from/to the
> floating point state.
>
> I suspect that would not be any faster than keeping a
> copy of the PKRU value in a known location.
>
>> Anyway, as written, this needs a lockdep assertion that we're not
>> preemptible, an explicit preempt_disable(), or a comment explaining
>> why it's okay if we get preempted in this function.
>>
>>> +	__fpregs_changes_begin();
>
> This handles the preemption disabling, see patch
> 3 of the series.

Sure, but the first write is *before* this.  So we can be preempted
with the two copies of PKRU being out of sync.

>
>>> +	__fpregs_load_activate(&current->thread.fpu, smp_processor_id());
>>> +	__write_pkru(pkru);
>>> +	__fpregs_changes_end();
>>> +}
>>>
>>>  int __execute_only_pkey(struct mm_struct *mm)
>>>  {
>>> --
>>> 2.19.0
>>>
>>
>>
> --
> All Rights Reversed.
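
Just to illustrate the point, something along these lines (only a
sketch, and assuming the __fpregs_changes_begin()/_end() pair from
patch 3 is indeed what disables preemption) would keep both copies of
PKRU updated inside the protected region:

void write_pkru(u32 pkru)
{
	if (!boot_cpu_has(X86_FEATURE_OSPKE))
		return;

	/*
	 * Update the in-memory copy and the register in the same
	 * __fpregs_changes_begin()/_end() section, so a preemption
	 * can never observe the two copies out of sync.
	 */
	__fpregs_changes_begin();
	current->thread.fpu.pkru = pkru;
	__fpregs_load_activate(&current->thread.fpu, smp_processor_id());
	__write_pkru(pkru);
	__fpregs_changes_end();
}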