On Thu, Oct 21, 2021 at 03:39:35PM -0300, Marcelo Tosatti wrote:
> Peter,
>
> static __always_inline void arch_exit_to_user_mode(void)
> {
> 	mds_user_clear_cpu_buffers();
> }
>
> /**
>  * mds_user_clear_cpu_buffers - Mitigation for MDS and TAA vulnerability
>  *
>  * Clear CPU buffers if the corresponding static key is enabled
>  */
> static __always_inline void mds_user_clear_cpu_buffers(void)
> {
> 	if (static_branch_likely(&mds_user_clear))
> 		mds_clear_cpu_buffers();
> }
>
> We were discussing how to perform objtool style validation
> that no code after the check for

I'm not sure what the point of the above is... Were you trying to ask
for validation that nothing runs after the mds_user_clear_cpu_buffers()?

That isn't strictly true today; there's lockdep code after it. I can't
recall why that order is as it is, though.

Pretty much everything in noinstr is magical; we just have to think
harder there (and possibly start writing more comments there).

> > +	/* NMI happens here and must still do/finish CT_WORK_n */
> > +	sync_core();
>
> But after the discussion with you, it seems doing the TLB checking
> and (also sync_core) checking very late/very early on exit/entry
> makes things easier to review.

I don't know about late; it must happen *very* early in entry. The
sync_core() must happen before any self-modifying code gets called
(static_branch, static_call, etc.), with the possible exception of the
context_tracking static_branch.

The TLBi must also happen super early, possibly while still on the
entry stack (since the task stack is vmap'ed). We currently don't run
C code on the entry stack; that needs quite a bit of careful work to
make happen.

> Can then use a single atomic variable with USER/KERNEL state and cmpxchg
> loops.

We're not going to add an atomic to context tracking. There is one; we
just have to extract/share it with RCU.
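
[Editor's note: the "single atomic variable with USER/KERNEL state and
cmpxchg loops" idea quoted above can be sketched in plain C11 atomics.
This is a minimal illustration only; every name here (ct_state,
CT_STATE_*, CT_WORK_SYNC_CORE, ct_kernel_enter, ct_queue_work) is
hypothetical and not the kernel's actual context-tracking API.]

```c
#include <stdatomic.h>

/* Bit 0 holds the USER/KERNEL state; higher bits carry pending work. */
enum { CT_STATE_KERNEL = 0, CT_STATE_USER = 1 };
#define CT_WORK_SYNC_CORE	(1u << 1)	/* hypothetical work bit */

static _Atomic unsigned int ct_state = CT_STATE_KERNEL;

/*
 * Entry from user: one cmpxchg loop atomically flips USER->KERNEL and
 * claims any work bits a remote CPU queued while we were in user mode.
 * Returns the claimed work bits for the caller to process.
 */
static unsigned int ct_kernel_enter(void)
{
	unsigned int old = atomic_load(&ct_state);

	while (!atomic_compare_exchange_weak(&ct_state, &old,
					     CT_STATE_KERNEL))
		;	/* old is reloaded on failure */

	return old & ~1u;	/* strip the state bit, keep work bits */
}

/* Exit to user: publish the USER state so remote CPUs may queue work. */
static void ct_user_exit(void)
{
	atomic_store(&ct_state, CT_STATE_USER);
}

/*
 * Remote side: queue deferred work only while the target is in USER;
 * if it is in the kernel, the caller must fall back to an IPI.
 * Returns 1 if the work was queued, 0 otherwise.
 */
static int ct_queue_work(unsigned int work)
{
	unsigned int old = atomic_load(&ct_state);

	do {
		if ((old & 1u) != CT_STATE_USER)
			return 0;
	} while (!atomic_compare_exchange_weak(&ct_state, &old, old | work));

	return 1;
}
```

The point of the single variable is that the state transition and the
work handoff are one atomic operation: entry can never lose a work bit
that was queued before the USER->KERNEL flip, and the remote side can
never queue work after it.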