On Wed, Nov 23, 2022, Dave Hansen wrote:
> On 11/23/22 09:37, Sean Christopherson wrote:
> > On Wed, Nov 23, 2022, Dave Hansen wrote:
> >> There's no way we can guarantee _that_.  For one, the PAMT* allocations
> >> can always fail.  I guess we could ask sysadmins to fire up a guest to
> >> "prime" things, but that seems a little silly.  Maybe that would work as
> >> the initial implementation that we merge, but I suspect our users will
> >> demand more determinism, maybe a boot or module parameter.
> >
> > Oh, you mean all of TDX initialization?  I thought "initialization" here meant
> > just doing tdx_enable().
>
> Yes, but the first call to tdx_enable() does TDH_SYS_INIT and all the
> subsequent work to get the module going.

Ah, sorry, I misread the diff.  Actually applied the patches this time...

> > Yeah, that's not going to be a viable option.  Aside from lacking determinism,
> > it would be all too easy to end up on a system with fragmented memory that
> > can't allocate the PAMTs post-boot.
>
> For now, the post-boot runtime PAMT allocations are the one and only way
> that TDX can be initialized.  I pushed for it to be done this way.
> Here's why:
>
> Doing tdx_enable() is relatively slow and it eats up a non-zero amount
> of physically contiguous RAM for metadata (~1/256th or ~0.4% of RAM).
> Systems that support TDX but will never run TDX guests should not pay
> that cost.
>
> That means that we either make folks opt-in at boot-time or we try to
> make a best effort at runtime to do the metadata allocations.
>
> From my perspective, the best-effort stuff is absolutely needed.  Users
> are going to forget the command-line opt in

Eh, any sufficiently robust deployment should be able to ensure that its
kernels boot with "required" command-line options.

> and there's no harm in _trying_ the big allocations even if they fail.

No, but there is "harm" if a host can't provide the functionality the control
plane thinks it supports.
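For a back-of-the-envelope feel for the metadata cost being discussed, here's a
quick sketch using the ~1/256th figure quoted above (the exact PAMT layout is
per-TDMR and spec-defined; the helper name and the flat ratio are illustrative
only, not the real calculation):

```python
# Rough PAMT overhead estimate, using the ~1/256th-of-RAM figure from the
# thread.  The real cost is computed per TDX memory region (TDMR) and must be
# satisfied with physically contiguous allocations, which is why post-boot
# allocation can fail on fragmented hosts.
def pamt_overhead_bytes(ram_bytes: int) -> int:
    """Approximate PAMT metadata cost for a host with the given RAM size."""
    return ram_bytes // 256

GIB = 1024 ** 3
TIB = 1024 ** 4

# Example: a 1 TiB host pays roughly 4 GiB of contiguous metadata.
print(pamt_overhead_bytes(TIB) // GIB)  # → 4
```

The point of Dave's argument follows directly from the numbers: on a large host
this is multiple gigabytes of contiguous memory that a TDX-capable-but-unused
machine would otherwise never give up.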
> Second, in reality, the "real" systems that can run TDX guests are
> probably not going to sit around fragmenting memory for a month before
> they run their first guest.  They're going to run one shortly after they
> boot when memory isn't fragmented and the best-effort allocation will
> work really well.

I don't think this will hold true.  Long term, we (Google) want to have a
common pool for non-TDX and TDX VMs.  Forcing TDX VMs to use a dedicated pool
of hosts makes it much more difficult to react to demand, e.g. if we carve out
N hosts for TDX, but only use 10% of those hosts, then that's a lot of wasted
capacity/money.  IIRC, people have discussed dynamically reconfiguring hosts so
that systems could be moved in/out of a dedicated pool, but that's still
suboptimal, e.g. would require emptying a host to reboot+reconfigure.

If/when we end up with a common pool, then it's very likely that we could have
a TDX-capable host go weeks/months before launching its first TDX VM.

> Third, if anyone *REALLY* cared to make it reliable *and* wanted to sit
> around fragmenting memory for a month, they could just start a TDX guest
> and kill it to get TDX initialized.  This isn't ideal.  But, to me, it
> beats defining some new, separate ABI (or boot/module option) to do it.

That's a hack.  I have no objection to waiting until KVM is _loaded_ to
initialize TDX, but waiting until KVM_CREATE_VM is not acceptable.  Use cases
aside, KVM's ABI would be a mess, e.g. KVM couldn't use KVM_CHECK_EXTENSION or
any other /dev/kvm ioctl() to enumerate TDX support.

> So, let's have those discussions.  Long-term, what *is* the most
> reliable way to get the TDX module loaded with 100% determinism?  What
> new ABI or interfaces are needed?  Also, is that 100% determinism
> required the moment this series is merged?  Or, can we work up to it?
I don't think we (Google again) strictly need 100% determinism with respect to
enabling TDX; what's most important is that if a host says it's TDX-capable,
then it really is TDX-capable.  I'm sure we'll treat "failure to load" as an
error, but it doesn't necessarily need to be a fatal error, as the host can
still run in a degraded state (no idea if we'll actually do that though).

> I think it can wait until this particular series is farther along.

For an opt-in kernel param, agreed.  That can be added later, e.g. if it turns
out that the PAMT allocation failure rate is too high.