Re: Patch "x86/mm: Give each mm TLB flush generation a unique ID" has been added to the 4.9-stable tree

On 03/07/2018 09:37 AM, gregkh@xxxxxxxxxxxxxxxxxxx wrote:
> 
> This is a note to let you know that I've just added the patch titled
> 
>     x86/mm: Give each mm TLB flush generation a unique ID
> 
> to the 4.9-stable tree which can be found at:
>     http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
> 
> The filename of the patch is:
>      x86-mm-give-each-mm-tlb-flush-generation-a-unique-id.patch
> and it can be found in the queue-4.9 subdirectory.
> 
> If you, or anyone else, feels it should not be added to the stable tree,
> please let <stable@xxxxxxxxxxxxxxx> know about it.
> 
> 
> From f39681ed0f48498b80455095376f11535feea332 Mon Sep 17 00:00:00 2001
> From: Andy Lutomirski <luto@xxxxxxxxxx>
> Date: Thu, 29 Jun 2017 08:53:15 -0700
> Subject: x86/mm: Give each mm TLB flush generation a unique ID
> 
> From: Andy Lutomirski <luto@xxxxxxxxxx>
> 
> commit f39681ed0f48498b80455095376f11535feea332 upstream.
> 
> This adds two new variables to mmu_context_t: ctx_id and tlb_gen.
> ctx_id uniquely identifies the mm_struct and will never be reused.
> For a given mm_struct (and hence ctx_id), tlb_gen is a monotonic

Greg, I've only pulled the unique ctx_id part of the original patch, not the
tlb_gen changes. The unique ctx_id is a very simple, self-contained change,
whereas tlb_gen would require other related changes.

You may want to update the commit description to reflect this.
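For reference, a minimal user-space sketch of the ctx_id-only scheme this
backport keeps: a global 64-bit counter hands out IDs that are unique and
never reused, with zero reserved as invalid. This is illustrative only, not
kernel code; the names (fake_mm, fake_init_new_context) are made up.

	/* Illustrative user-space model of the ctx_id allocation. */
	#include <inttypes.h>
	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	/* Counter starts at 1: 0 stays invalid, 1 is taken by the initial mm. */
	static atomic_uint_fast64_t last_ctx_id = 1;

	struct fake_mm {
		uint64_t ctx_id;	/* unique, never reused */
	};

	static void fake_init_new_context(struct fake_mm *mm)
	{
		/* fetch_add + 1 mirrors the kernel's atomic64_inc_return() */
		mm->ctx_id = (uint64_t)atomic_fetch_add(&last_ctx_id, 1) + 1;
	}

	int main(void)
	{
		struct fake_mm a, b;

		fake_init_new_context(&a);
		fake_init_new_context(&b);
		printf("a.ctx_id=%" PRIu64 " b.ctx_id=%" PRIu64 "\n",
		       a.ctx_id, b.ctx_id);
		return 0;
	}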

> count of the number of times that a TLB flush has been requested.
> The pair (ctx_id, tlb_gen) can be used as an identifier for TLB
> flush actions and will be used in subsequent patches to reliably
> determine whether all needed TLB flushes have occurred on a given
> CPU.
> 
> This patch is split out for ease of review.  By itself, it has no
> real effect other than creating and updating the new variables.
> 
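As a rough, illustrative sketch of how the (ctx_id, tlb_gen) pair described
above is meant to be consumed upstream (again, the tlb_gen half is not part
of this 4.9 backport, and the names below are made up, not the kernel's):

	/* Illustrative sketch, not kernel code. */
	#include <stdint.h>

	/* What one CPU remembers about the last flush it did for an mm. */
	struct flushed_state {
		uint64_t ctx_id;	/* which mm this record refers to */
		uint64_t tlb_gen;	/* generation this CPU has flushed up to */
	};

	/*
	 * Return 1 if this CPU still owes a TLB flush for the mm identified
	 * by mm_ctx_id, whose flush-request counter is currently mm_tlb_gen.
	 */
	int needs_flush(const struct flushed_state *cpu,
			uint64_t mm_ctx_id, uint64_t mm_tlb_gen)
	{
		/* Record belongs to a different mm: must flush. */
		if (cpu->ctx_id != mm_ctx_id)
			return 1;

		/* Same mm: flush only if requests outran what this CPU did. */
		return cpu->tlb_gen < mm_tlb_gen;
	}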
> Signed-off-by: Andy Lutomirski <luto@xxxxxxxxxx>
> Reviewed-by: Nadav Amit <nadav.amit@xxxxxxxxx>
> Reviewed-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Arjan van de Ven <arjan@xxxxxxxxxxxxxxx>
> Cc: Borislav Petkov <bp@xxxxxxxxx>
> Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Mel Gorman <mgorman@xxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Link: http://lkml.kernel.org/r/413a91c24dab3ed0caa5f4e4d017d87b0857f920.1498751203.git.luto@xxxxxxxxxx
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
> Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
> 
> ---
>  arch/x86/include/asm/mmu.h         |   15 +++++++++++++--
>  arch/x86/include/asm/mmu_context.h |    5 +++++
>  arch/x86/mm/tlb.c                  |    2 ++
>  3 files changed, 20 insertions(+), 2 deletions(-)
> 
> --- a/arch/x86/include/asm/mmu.h
> +++ b/arch/x86/include/asm/mmu.h
> @@ -3,12 +3,18 @@
>  
>  #include <linux/spinlock.h>
>  #include <linux/mutex.h>
> +#include <linux/atomic.h>
>  
>  /*
> - * The x86 doesn't have a mmu context, but
> - * we put the segment information here.
> + * x86 has arch-specific MMU state beyond what lives in mm_struct.
>   */
>  typedef struct {
> +	/*
> +	 * ctx_id uniquely identifies this mm_struct.  A ctx_id will never
> +	 * be reused, and zero is not a valid ctx_id.
> +	 */
> +	u64 ctx_id;
> +
>  #ifdef CONFIG_MODIFY_LDT_SYSCALL
>  	struct ldt_struct *ldt;
>  #endif
> @@ -33,6 +39,11 @@ typedef struct {
>  #endif
>  } mm_context_t;
>  
> +#define INIT_MM_CONTEXT(mm)						\
> +	.context = {							\
> +		.ctx_id = 1,						\
> +	}
> +
>  void leave_mm(int cpu);
>  
>  #endif /* _ASM_X86_MMU_H */
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -12,6 +12,9 @@
>  #include <asm/tlbflush.h>
>  #include <asm/paravirt.h>
>  #include <asm/mpx.h>
> +
> +extern atomic64_t last_mm_ctx_id;
> +
>  #ifndef CONFIG_PARAVIRT
>  static inline void paravirt_activate_mm(struct mm_struct *prev,
>  					struct mm_struct *next)
> @@ -106,6 +109,8 @@ static inline void enter_lazy_tlb(struct
>  static inline int init_new_context(struct task_struct *tsk,
>  				   struct mm_struct *mm)
>  {
> +	mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id);
> +
>  	#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
>  	if (cpu_feature_enabled(X86_FEATURE_OSPKE)) {
>  		/* pkey 0 is the default and always allocated */
> --- a/arch/x86/mm/tlb.c
> +++ b/arch/x86/mm/tlb.c
> @@ -29,6 +29,8 @@
>   *	Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi
>   */
>  
> +atomic64_t last_mm_ctx_id = ATOMIC64_INIT(1);
> +
>  struct flush_tlb_info {
>  	struct mm_struct *flush_mm;
>  	unsigned long flush_start;
> 
> 
> Patches currently in stable-queue which might be from luto@xxxxxxxxxx are
> 
> queue-4.9/nospec-allow-index-argument-to-have-const-qualified-type.patch
> queue-4.9/x86-speculation-use-indirect-branch-prediction-barrier-in-context-switch.patch
> queue-4.9/x86-mm-give-each-mm-tlb-flush-generation-a-unique-id.patch
> 



