On Mon, Jan 29, 2018 at 08:17:02PM +0000, David Woodhouse wrote:
> On Mon, 2018-01-29 at 18:14 -0200, Eduardo Habkost wrote:
> >
> > Sorry for being confused here, as the answer is probably buried
> > in a LKML thread somewhere.  The comment explains what the code
> > does, but not why.  Why exactly is IBRS preferred on Skylake?
> >
> > I'm asking this because I would like to understand the risks
> > involved when running under a hypervisor that exposes CPUID data
> > that doesn't match the host CPU.  e.g.: what happens if a VM is
> > migrated from a Broadwell host to a Skylake host?
>
> https://lkml.org/lkml/2018/1/22/598 should cover most of that, I think.

Thanks, it does answer some of my questions.

So, it sounds like live-migrating a VM from a non-Skylake host to
a Skylake host will make the guest unsafe, unless the guest was
explicitly configured to use IBRS.

In a perfect world, Linux would never look at CPU
family/model/stepping/microcode to make any decision while running
under a hypervisor.  If Linux knows it's running under a
hypervisor, it would be safer to assume retpolines aren't enough,
unless the hypervisor tells it otherwise.

The question is how the hypervisor could tell that to the guest.
If Intel doesn't give us a CPUID bit that can be used to tell
that retpolines are enough, maybe we should use a hypervisor
CPUID bit for that?

--
Eduardo