On Wed, Feb 19, 2025 at 10:08:20PM +0000, Yosry Ahmed wrote:
> This series removes X86_FEATURE_USE_IBPB, and fixes a KVM nVMX bug in
> the process. The motivation is mostly the confusing name of
> X86_FEATURE_USE_IBPB, which sounds like it controls IBPBs in general,
> but it only controls IBPBs for the spectre_v2_user mitigation. A side
> effect of this confusion is the nVMX bug, where virtualizing IBRS
> correctly depends on the spectre_v2_user mitigation.
>
> The feature bit is mostly redundant, except in controlling the IBPB in
> the vCPU load path. For that, a separate static branch is introduced,
> similar to switch_mm_*_ibpb.

Thanks for doing this. A few months ago I was working on patches to fix
the same thing, but I got preempted multiple times.

> I wanted to do more, but decided to stay conservative. I was mainly
> hoping to merge indirect_branch_prediction_barrier() with entry_ibpb()
> to have a single IBPB primitive that always stuffs the RSB if the IBPB
> doesn't, but this would add some overhead in paths that currently use
> indirect_branch_prediction_barrier(), and I was not sure if that's
> acceptable.

We always rely on IBPB clearing the RSB, so yes, I'd say that's
definitely needed.

In fact I had a patch to do exactly that, with it ending up like this:

static inline void indirect_branch_prediction_barrier(void)
{
	asm volatile(ALTERNATIVE("", "call write_ibpb", X86_FEATURE_IBPB)
		     : ASM_CALL_CONSTRAINT
		     : : "rax", "rcx", "rdx", "memory");
}

I also renamed "entry_ibpb" -> "write_ibpb" since it's no longer just
for entry code.

--
Josh
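
For reference, a minimal sketch of the vCPU-load static branch the cover
letter describes could look like the following. The key name
switch_vcpu_ibpb and the helper around it are illustrative assumptions,
not necessarily what the series ends up using:

#include <linux/jump_label.h>	/* DEFINE_STATIC_KEY_FALSE, static_branch_unlikely */
#include <asm/nospec-branch.h>	/* indirect_branch_prediction_barrier() */

/*
 * Illustrative only: set by the spectre_v2_user mitigation code when an
 * IBPB on vCPU load is wanted, analogous to the switch_mm_*_ibpb keys.
 */
DEFINE_STATIC_KEY_FALSE(switch_vcpu_ibpb);

static void vcpu_load_ibpb(void)
{
	/* Only emit the barrier when the mitigation asked for it. */
	if (static_branch_unlikely(&switch_vcpu_ibpb))
		indirect_branch_prediction_barrier();
}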