Re: TLB flushes on fixmap changes

at 11:45 AM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:

> On Mon, Aug 27, 2018 at 10:34 AM, Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
>> at 1:05 AM, Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
>> 
>>> On Sun, 26 Aug 2018 20:26:09 -0700
>>> Nadav Amit <nadav.amit@xxxxxxxxx> wrote:
>>> 
>>>> at 8:03 PM, Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
>>>> 
>>>>> On Sun, 26 Aug 2018 11:09:58 +0200
>>>>> Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>>>>> 
>>>>>> On Sat, Aug 25, 2018 at 09:21:22PM -0700, Andy Lutomirski wrote:
>>>>>>> I just re-read text_poke().  It's, um, horrible.  Not only is the
>>>>>>> implementation overcomplicated and probably buggy, but it's SLOOOOOW.
>>>>>>> It's totally the wrong API -- poking one instruction at a time
>>>>>>> basically can't be efficient on x86.  The API should either poke lots
>>>>>>> of instructions at once or should be text_poke_begin(); ...;
>>>>>>> text_poke_end();.
>>>>>> 
>>>>>> I don't think anybody ever cared about performance here. Only
>>>>>> correctness. That whole text_poke_bp() thing is entirely tricky.
>>>>> 
>>>>> Agreed. Self modification is a special event.
>>>>> 
>>>>>> FWIW, before text_poke_bp(), text_poke() would only be used from
>>>>>> stop_machine, so all the other CPUs would be stuck busy-waiting with
>>>>>> IRQs disabled. These days, yeah, that's lots more dodgy, but yes
>>>>>> text_mutex should be serializing all that.
>>>>> 
>>>>> I'm still not sure whether a speculative page-table walk can happen
>>>>> even while the mutex is held. Also, if the fixmap area only aliases
>>>>> pages (which are always mapped elsewhere as well), what kind of
>>>>> security issue can happen?
>>>> 
>>>> The PTE is accessible from other cores, so just as we assume for L1TF that
>>>> any addressable memory might be cached in L1, we should assume that any
>>>> PTE might be cached in the TLB while it is present.
>>> 
>>> Ok, so other cores can accidentally cache the PTE in the TLB (and there is
>>> no way to shoot it down explicitly?)
>> 
>> There is a way (although currently it is not done). But the consensus seems
>> to be that it is better to avoid the mapping being present on remote cores
>> at all.
>> 
>>>> Although the mapping is for an alias, there are a couple of issues here.
>>>> First, this alias mapping is writable, so it might allow an attacker to
>>>> change the kernel code (following another, initial attack).
>>> 
>>> Combined with some buffer overflow, correct? If the attacker can already
>>> write kernel data directly, he is already in kernel mode.
>> 
>> Right.
>> 
>>>> Second, the alias mapping is
>>>> never explicitly flushed. We may assume that once the original mapping is
>>>> removed/changed, a full TLB flush will take place, but there is no
>>>> guarantee that it actually does.
>>> 
>>> Hmm, does this mean a full TLB flush would not flush the alias mapping?
>>> (or that the full TLB flush just doesn't happen?)
>> 
>> A full flush would flush the alias mapping too, but currently no such
>> explicit flush is done.
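>> 
>> Just for illustration, an explicit shootdown after patching could look
>> roughly like the following (a sketch only - text_poke() does nothing of
>> the sort today, and if I read the code correctly, clear_fixmap() only
>> flushes the local TLB):
>> 
>>     /* In text_poke(), after the patching is done: */
>>     clear_fixmap(FIX_TEXT_POKE0);
>>     clear_fixmap(FIX_TEXT_POKE1);
>> 
>>     /*
>>      * Shoot down the aliases on all cores. Fixmap addresses grow
>>      * downwards, so FIX_TEXT_POKE0 is the lower of the two pages.
>>      */
>>     flush_tlb_kernel_range(fix_to_virt(FIX_TEXT_POKE0),
>>                            fix_to_virt(FIX_TEXT_POKE1) + PAGE_SIZE);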
>> 
>>>>> Anyway, from the viewpoint of kprobes, either a per-cpu fixmap or
>>>>> changing CR3 sounds good to me. I think we don't even need per-cpu;
>>>>> it could call a thread/function on a dedicated core (like the first
>>>>> boot processor) and wait :) This may prevent leakage of the PTE change
>>>>> to other cores.
>>>> 
>>>> I implemented a per-cpu fixmap, but I think that it makes more sense to
>>>> take peterz's approach and set an entry at the PGD level. A per-CPU
>>>> fixmap either requires pre-populating various levels of the page-table
>>>> hierarchy, or synchronizing conditionally whenever module memory is
>>>> allocated, since modules can share the same PGD, PUD & PMD. While the
>>>> synchronization is usually not needed, the mere possibility that it is
>>>> needed complicates the locking.
>>> 
>>> Could you point me to which PeterZ approach you mean? I guess it would be
>>> to make a clone of the PGD and use it for the local page mapping (as a
>>> new mm). If so, yes, that sounds perfectly fine to me.
>> 
>> The thread is too long. What I think is best is having a mapping at the PGD
>> level. I'll try to give it a shot, and see what I get.
>> 
>>>> Anyhow, having fixed addresses for the fixmap can be used to circumvent
>>>> KASLR.
>>> 
>>> I think text_poke doesn't mind using a random address :)
>>> 
>>>> I don't think a dedicated core is needed. Anyhow, there is a lock
>>>> (text_mutex), so use_mm() can be used after acquiring the mutex.
>>> 
>>> Hmm, use_mm() said;
>>> 
>>> /*
>>> * use_mm
>>> *      Makes the calling kernel thread take on the specified
>>> *      mm context.
>>> *      (Note: this routine is intended to be called only
>>> *      from a kernel thread context)
>>> */
>>> 
>>> So maybe we need a dedicated kernel thread for safety?
>> 
>> Yes, it says so. But I am not sure that it cannot be changed, at least for
>> this specific use-case. Switching to a kernel thread just for patching
>> seems like overkill to me.
>> 
>> Let me see if I can come up with something half-reasonable along those
>> lines...
> 
> I don't understand at all how a kernel thread helps.  The useful bit
> is to have a dedicated mm, which would involve setting up an mm_struct
> and mapping the kernel and module text, EFI-style, in the user portion
> of the mm.  But, to do the text_poke(), we'd just use the mm *without
> calling use_mm*.
> 
> In other words, the following sequence should be (almost) just fine:
> 
> typedef struct {
>  struct mm_struct *prev;
> } temporary_mm_state_t;
> 
> temporary_mm_state_t use_temporary_mm(struct mm_struct *mm)
> {
>    temporary_mm_state_t state;
> 
>    lockdep_assert_irqs_disabled();
>    state.prev = this_cpu_read(cpu_tlbstate.loaded_mm);
>    switch_mm_irqs_off(NULL, mm, current);
>    return state;
> }
> 
> void unuse_temporary_mm(temporary_mm_state_t prev)
> {
>    lockdep_assert_irqs_disabled();
>    switch_mm_irqs_off(NULL, prev.prev, current);
> }
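> 
> For illustration, the patching itself could then be built on top of this
> along these lines (a rough sketch; poking_mm would be the dedicated mm,
> with the page being patched aliased writable at poking_addr - both made-up
> names here - and the page-crossing case is ignored):
> 
> static void poke_via_temporary_mm(void *addr, const void *opcode, size_t len)
> {
>    temporary_mm_state_t prev;
>    unsigned long flags;
> 
>    lockdep_assert_held(&text_mutex);
> 
>    /* Keep the temporary mm strictly local: no preemption, no migration. */
>    local_irq_save(flags);
>    prev = use_temporary_mm(poking_mm);
> 
>    /* Write through the writable alias of the text page. */
>    memcpy((void *)(poking_addr + offset_in_page(addr)), opcode, len);
> 
>    unuse_temporary_mm(prev);
>    local_irq_restore(flags);
> 
>    sync_core();
> }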
> 
> The only thing wrong with this that I can see is that it interacts
> poorly with perf.  But perf is *already* busted in this regard.  The
> following (whitespace damaged, sorry) should fix it:
> 
> commit b62bff5a8406d252de752cfe75068d0b73b9cdf0
> Author: Andy Lutomirski <luto@xxxxxxxxxx>
> Date:   Mon Aug 27 11:41:55 2018 -0700
> 
>    x86/nmi: Fix some races in NMI uaccess
> 
>    In NMI context, we might be in the middle of context switching or in
>    the middle of switch_mm_irqs_off().  In either case, CR3 might not
>    match current->mm, which could cause copy_from_user_nmi() and
>    friends to read the wrong memory.
> 
>    Fix it by adding a new nmi_uaccess_okay() helper and checking it in
>    copy_from_user_nmi() and in __copy_from_user_nmi()'s callers.
> 
>    Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
> 
> diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
> index 5f4829f10129..dfb2f7c0d019 100644
> --- a/arch/x86/events/core.c
> +++ b/arch/x86/events/core.c
> @@ -2465,7 +2465,7 @@ perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs
> 
>     perf_callchain_store(entry, regs->ip);
> 
> -    if (!current->mm)
> +    if (!nmi_uaccess_okay())
>         return;
> 
>     if (perf_callchain_user32(regs, entry))
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 89a73bc31622..b23b2625793b 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -230,6 +230,22 @@ struct tlb_state {
> };
> DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state, cpu_tlbstate);
> 
> +/*
> + * Blindly accessing user memory from NMI context can be dangerous
> + * if we're in the middle of switching the current user task or
> + * switching the loaded mm.  It can also be dangerous if we
> + * interrupted some kernel code that was temporarily using a
> + * different mm.
> + */
> +static inline bool nmi_uaccess_okay(void)
> +{
> +    struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);
> +    struct mm_struct *current_mm = current->mm;
> +
> +    return current_mm && loaded_mm == current_mm &&
> +        loaded_mm->pgd == __va(read_cr3_pa());
> +}
> +
> /* Initialize cr4 shadow for this CPU. */
> static inline void cr4_init_shadow(void)
> {
> diff --git a/arch/x86/lib/usercopy.c b/arch/x86/lib/usercopy.c
> index c8c6ad0d58b8..c5f758430be2 100644
> --- a/arch/x86/lib/usercopy.c
> +++ b/arch/x86/lib/usercopy.c
> @@ -19,6 +19,9 @@ copy_from_user_nmi(void *to, const void __user *from, unsigned long n)
>     if (__range_not_ok(from, n, TASK_SIZE))
>         return n;
> 
> +    if (!nmi_uaccess_okay())
> +        return n;
> +
>     /*
>      * Even though this function is typically called from NMI/IRQ context
>      * disable pagefaults so that its behaviour is consistent even when
> diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
> index 457b281b9339..f4b41d5a93dd 100644
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -345,6 +345,9 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
>          */
>         trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
>     } else {
> +        /* Let NMI code know that CR3 may not match expectations. */
> +        this_cpu_write(cpu_tlbstate.loaded_mm, NULL);
> +
>         /* The new ASID is already up to date. */
>         load_new_mm_cr3(next->pgd, new_asid, false);
> 
> What do you all think?

I agree in general. But I think that current->mm would need to be loaded, as
otherwise I am afraid it would break switch_mm_irqs_off().
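
For example (a rough sketch; LOADED_MM_SWITCHING is a name I am making up
here), switch_mm_irqs_off() could publish a poisoned value instead of NULL,
so that nmi_uaccess_okay() still fails the comparison but nothing ever has
to cope with a NULL loaded_mm:

/* A poisoned pointer value that no real mm_struct can equal. */
#define LOADED_MM_SWITCHING ((struct mm_struct *)1UL)

    /* In switch_mm_irqs_off(), around the actual CR3 switch: */
    this_cpu_write(cpu_tlbstate.loaded_mm, LOADED_MM_SWITCHING);
    barrier();
    load_new_mm_cr3(next->pgd, new_asid, false);
    barrier();
    this_cpu_write(cpu_tlbstate.loaded_mm, next);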