Re: [PATCH v19 023/130] KVM: TDX: Initialize the TDX module when loading the KVM intel kernel module

On Thu, Apr 25, 2024, Kai Huang wrote:
> On Thu, 2024-04-25 at 09:35 -0700, Sean Christopherson wrote:
> > On Tue, Apr 23, 2024, Kai Huang wrote:
> > > On Tue, 2024-04-23 at 08:15 -0700, Sean Christopherson wrote:
> > > > Presumably that approach relies on something blocking onlining CPUs when TDX is
> > > > active.  And if that's not the case, the proposed patches are buggy.
> > > 
> > > The current patch ([PATCH 023/130] KVM: TDX: Initialize the TDX module
> > > when loading the KVM intel kernel module) indeed is buggy, but I don't
> > > quite follow why we need to block onlining CPUs when TDX is active?
> > 
> > I was saying that based on my reading of the code, either (a) the code is buggy
> > or (b) something blocks onlining CPUs when TDX is active.  Sounds like the answer
> > is (a).
> 
> Yeah it's a).

...

> > > Being a module, naturally we will need to handle module init and exit.  But
> > > TDX cannot be disabled and re-enabled after initialization, so in general
> > > the vac.ko model doesn't quite fit for TDX.
> > > 
> > > And I am not sure what's the fundamental difference between managing TDX
> > > module in a module vs in the core-kernel from KVM's perspective.
> > 
> > VAC isn't strictly necessary.  What I was saying is that it's not strictly
> > necessary for the core kernel to handle VMXON either.  I.e. it could be done in
> > something like VAC, or it could be done in the core kernel.
> 
> Right, but so far I cannot see any advantage of using a VAC module,
> though perhaps I am missing something.

Yeah, there's probably no big advantage.

> > The important thing is that they're handled by _one_ entity.  What we have today
> > is probably the worst setup; VMXON is handled by KVM, but TDX.SYS.LP.INIT is
> > handled by core kernel (sort of).
> 
> I cannot argue against this :-)
> 
> But from this point of view, I cannot see a difference between tdx_enable()
> and tdx_cpu_enable(), because they are both in the core kernel while
> depending on KVM to handle VMXON.

My comments were made under the assumption that the code was NOT buggy, i.e. if
KVM did NOT need to call tdx_cpu_enable() independent of tdx_enable().

That said, I do think it makes sense to have tdx_enable() call a private/inner
version, e.g. __tdx_cpu_enable(), and then have KVM call a public version.
Alternatively, the kernel could register yet another cpuhp hook that runs after
KVM's, i.e. does TDX.SYS.LP.INIT after KVM has done VMXON (if TDX has been
enabled).
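A minimal userspace sketch of the private/inner split described above. The
names tdx_enable(), tdx_cpu_enable(), and __tdx_cpu_enable() come from this
thread; the stub bodies and the tdx_module_initialized flag are illustrative
assumptions, not the actual kernel implementation:

```c
#include <stdbool.h>
#include <stdio.h>

static bool tdx_module_initialized;

/*
 * Inner helper: per-CPU enabling, assumed to run after KVM has done
 * VMXON on this CPU.  A real implementation would issue the
 * TDX.SYS.LP.INIT SEAMCALL here; this stub just logs.
 */
static int __tdx_cpu_enable(void)
{
	printf("TDX.SYS.LP.INIT on this CPU\n");
	return 0;
}

/* Public wrapper KVM would call from its CPU-online path. */
int tdx_cpu_enable(void)
{
	if (!tdx_module_initialized)
		return 0;	/* nothing to do before module init */
	return __tdx_cpu_enable();
}

/* One-shot module init; reuses the inner helper for the calling CPU. */
int tdx_enable(void)
{
	int ret = __tdx_cpu_enable();

	if (!ret)
		tdx_module_initialized = true;
	return ret;
}
```

The point of the split is that only one entity owns the ordering: KVM calls
the public tdx_cpu_enable() after VMXON, while tdx_enable() uses the private
helper internally, so TDX.SYS.LP.INIT is never issued on a CPU that hasn't
done VMXON.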

> Or, do you prefer that we move VMXON to the core kernel at this stage,
> i.e., as preparatory work for KVM TDX?

No, the amount of churn/work that would create is high, and TDX is already taking
on enough churn.
