On Tue, Dec 12, 2017 at 9:32 AM, Thomas Gleixner <tglx@xxxxxxxxxxxxx> wrote:
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
>
> When the LDT is mapped RO, the CPU will write-fault the first time it uses
> a segment descriptor in order to set the ACCESS bit (for some reason it
> doesn't always observe that it is already preset). Catch the fault and set
> the ACCESS bit in the handler.
>
> Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> ---
>  arch/x86/include/asm/mmu_context.h |    7 +++++++
>  arch/x86/kernel/ldt.c              |   30 ++++++++++++++++++++++++++++++
>  arch/x86/mm/fault.c                |   19 +++++++++++++++++++
>  3 files changed, 56 insertions(+)
>
> --- a/arch/x86/include/asm/mmu_context.h
> +++ b/arch/x86/include/asm/mmu_context.h
> @@ -76,6 +76,11 @@ static inline void init_new_context_ldt(
>  int ldt_dup_context(struct mm_struct *oldmm, struct mm_struct *mm);
>  void ldt_exit_user(struct pt_regs *regs);
>  void destroy_context_ldt(struct mm_struct *mm);
> +bool __ldt_write_fault(unsigned long address);
> +static inline bool ldt_is_active(struct mm_struct *mm)
> +{
> +	return mm && mm->context.ldt != NULL;
> +}
>  #else /* CONFIG_MODIFY_LDT_SYSCALL */
>  static inline void init_new_context_ldt(struct task_struct *task,
>  					struct mm_struct *mm) { }
> @@ -86,6 +91,8 @@ static inline int ldt_dup_context(struct
>  }
>  static inline void ldt_exit_user(struct pt_regs *regs) { }
>  static inline void destroy_context_ldt(struct mm_struct *mm) { }
> +static inline bool __ldt_write_fault(unsigned long address) { return false; }
> +static inline bool ldt_is_active(struct mm_struct *mm) { return false; }
>  #endif
>
>  static inline void load_mm_ldt(struct mm_struct *mm, struct task_struct *tsk)
> --- a/arch/x86/kernel/ldt.c
> +++ b/arch/x86/kernel/ldt.c
> @@ -82,6 +82,36 @@ static void ldt_install_mm(struct mm_str
>  	mutex_unlock(&mm->context.lock);
>  }
>
> +/*
> + * ldt_write_fault() already checked whether there is an ldt installed in
> + * __do_page_fault(), so it's safe to access it here because interrupts are
> + * disabled and any ipi which would change it is blocked until this
> + * returns. The underlying page mapping cannot change as long as the ldt
> + * is the active one in the context.
> + *
> + * The fault error code is X86_PF_WRITE | X86_PF_PROT and checked in
> + * __do_page_fault() already. This happens when a segment is selected and
> + * the CPU tries to set the accessed bit in desc_struct.type because the
> + * LDT entries are mapped RO. Set it manually.
> + */
> +bool __ldt_write_fault(unsigned long address)
> +{
> +	struct ldt_struct *ldt = current->mm->context.ldt;
> +	unsigned long start, end, entry;
> +	struct desc_struct *desc;
> +
> +	start = (unsigned long) ldt->entries;
> +	end = start + ldt->nr_entries * LDT_ENTRY_SIZE;
> +
> +	if (address < start || address >= end)
> +		return false;
> +
> +	desc = (struct desc_struct *) ldt->entries;
> +	entry = (address - start) / LDT_ENTRY_SIZE;
> +	desc[entry].type |= 0x01;

You have another patch that unconditionally sets the accessed bit on
installation.  What gives?

Also, this patch is going to die a horrible death if IRET ever hits
this condition.  Or load gs.
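
On the first point: if that other patch does what I think it does at
installation time, every descriptor that goes in via write_ldt() already
has the accessed bit set and this fault path should be dead code in the
common case.  Something like this sketch -- my guess at the idea, not a
quote of that patch, and ldt_preset_accessed() is a made-up name:

	/*
	 * Sketch: preset the accessed bit when the descriptors are
	 * installed, so the CPU never needs to write to the RO-mapped
	 * LDT.  Bit 0 of desc_struct.type is the accessed bit.
	 */
	static void ldt_preset_accessed(struct desc_struct *desc,
					unsigned int nr_entries)
	{
		unsigned int i;

		for (i = 0; i < nr_entries; i++) {
			/* Skip non-present (empty) descriptors */
			if (desc[i].p)
				desc[i].type |= 0x01;
		}
	}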
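
For reference, I'm guessing the trimmed fault.c hunk looks roughly like
the below, going by the comment in __ldt_write_fault() above -- i.e. it
only forwards protection write faults while an LDT is installed.  Again
a guess, not the actual hunk, and fixup_ldt_fault() is my name for it:

	/*
	 * The CPU's accessed-bit write to the RO LDT mapping shows up
	 * as a protection violation on a write, so anything else can
	 * be rejected cheaply before touching the LDT.
	 */
	static bool fixup_ldt_fault(unsigned long error_code,
				    unsigned long address)
	{
		if ((error_code & (X86_PF_PROT | X86_PF_WRITE)) !=
		    (X86_PF_PROT | X86_PF_WRITE))
			return false;

		/* No LDT installed, nothing to fix up */
		if (!ldt_is_active(current->mm))
			return false;

		/* Sets the accessed bit if address hits the active LDT */
		return __ldt_write_fault(address);
	}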