Re: [PATCH v10 00/16] TDX host kernel support

On Thu, 2023-03-16 at 13:35 +0100, David Hildenbrand wrote:
> On 06.03.23 15:13, Kai Huang wrote:
> > Intel Trust Domain Extensions (TDX) protects guest VMs from a malicious
> > host and certain physical attacks.  TDX specs are available in [1].
> 
> I'm afraid there is no [1], probably got lost while resending :)
> 
> > 
> > This series is the initial support to enable TDX with minimal code to
> > allow KVM to create and run TDX guests.  KVM support for TDX is being
> > developed separately[2].  A new "userspace inaccessible memfd" approach
> > to support TDX private memory is also being developed[3].  KVM will
> > only support the new "userspace inaccessible memfd" as TDX guest memory.
> 
> Same with [2].

Hi David,

Thanks for your feedback!

Oh sorry, yes indeed they were unintentionally stripped when I was updating the
cover letter.  I've added them here for your reference:

[1]: TDX specs
https://www.intel.com/content/www/us/en/developer/articles/technical/intel-trust-domain-extensions.html

[2]: KVM TDX support
https://lore.kernel.org/lkml/cover.1678643051.git.isaku.yamahata@xxxxxxxxx/

[3]: KVM: mm: fd-based approach for supporting KVM
https://lore.kernel.org/lkml/20221202061347.1070246-1-chao.p.peng@xxxxxxxxxxxxxxx/T/

> 
> > 
> > This series doesn't aim to support all functionalities, and doesn't aim
> > to resolve all things perfectly.  For example, memory hotplug is handled
> > in a simple way (please refer to the "Kernel policy on TDX memory" and
> > "Memory hotplug" sections below).
> > 
> > (For memory hotplug, sorry for broadcasting widely, but I cc'ed
> > linux-mm@xxxxxxxxx following Kirill's suggestion so MM experts can also
> > help to provide comments.)
> > 
> > And TDX module metadata allocation just uses alloc_contig_pages() to
> > allocate a large chunk at runtime, thus it can fail.  It is imperfect now
> > but _will_ be improved in the future.
> 
> Good enough for now I guess. Reserving it via memblock might be better, 
> though.
> 
> > 
> > Also, the patch to add the new kernel command line option tdx="force"
> > isn't included in this initial version, as Dave suggested it isn't
> > mandatory.  But I _will_ add one once this initial version gets merged.
> 
> What would be the main purpose of that option?

Initializing the TDX module consumes a non-trivial amount of memory, which is
given to the TDX module as metadata to track page status, etc.  Currently, the
KVM maintainers want to initialize TDX at KVM module load time.  This basically
means TDX will get enabled by default even when people don't want to use it.
So Peter wanted to add a kernel boot parameter to disable TDX entirely.
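
Nothing is settled yet, but roughly I would expect something like the minimal
sketch below.  This is only an illustration: the parameter spelling ("tdx=off")
and the flag name are placeholders, not necessarily what will actually be
posted.

#include <linux/init.h>
#include <linux/string.h>

/* Sketch only: the parameter spelling and the flag name are placeholders. */
static bool tdx_disabled;

static int __init tdx_param(char *str)
{
	/* e.g. "tdx=off" on the kernel command line keeps TDX permanently off */
	if (str && !strcmp(str, "off"))
		tdx_disabled = true;

	return 0;
}
early_param("tdx", tdx_param);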

> 
> > 
> > All other optimizations will be posted as follow-up once this initial
> > TDX support is upstreamed.
> > 
> 
> 
> [...]
> 
> > == Background ==
> > 
> > TDX introduces a new CPU mode called Secure Arbitration Mode (SEAM)
> > and a new isolated range pointed to by the SEAM Range Register (SEAMRR).
> > A CPU-attested software module called 'the TDX module' runs in the new
> > isolated region as a trusted hypervisor to create/run protected VMs.
> > 
> > TDX also leverages Intel Multi-Key Total Memory Encryption (MKTME) to
> > provide crypto-protection to the VMs.  TDX reserves part of MKTME KeyIDs
> > as TDX private KeyIDs, which are only accessible within the SEAM mode.
> > 
> > TDX is different from AMD SEV/SEV-ES/SEV-SNP, which uses a dedicated
> > secure processor to provide crypto-protection.  The firmware running on
> > the secure processor plays a similar role to the TDX module.
> > 
> > The host kernel communicates with SEAM software via a new SEAMCALL
> > instruction.  This is conceptually similar to a guest->host hypercall,
> > except it is made from the host to SEAM software instead.
> > 
> > Before being able to manage TD guests, the TDX module must be loaded
> > and properly initialized.  This series assumes the TDX module is loaded
> > by BIOS before the kernel boots.
> > 
> > How to initialize the TDX module is described in the TDX module 1.0
> > specification, chapter "13. Intel TDX Module Lifecycle: Enumeration,
> > Initialization and Shutdown".
> > 
> > == Design Considerations ==
> > 
> > 1. Initialize the TDX module at runtime
> > 
> > There are basically two ways the TDX module could be initialized: either
> > in early boot, or at runtime before the first TDX guest is run.  This
> > series implements the runtime initialization.
> > 
> > This series adds a function tdx_enable() to allow the caller to initialize
> > TDX at runtime:
> > 
> > 	if (tdx_enable())
> > 		goto no_tdx;
> > 	// TDX is ready to create TD guests.
> > 
> > This approach has below pros:
> > 
> > 1) Initializing the TDX module requires reserving ~1/256th of system RAM
> > as metadata.  Enabling TDX on demand means this memory is only consumed
> > when TDX is truly needed (i.e. when KVM wants to create TD guests).
> 
> Let's be clear: nobody is going to run encrypted VMs "out of the blue".
> 
> You can expect a certain hypervisor setup to be required, for example, 
> enabling it on the cmdline and then allocating that metadata from 
> memblock during boot.

Yes, KVM will also have a parameter to specifically enable TDX.
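
Roughly, I would expect KVM to gate TDX behind that parameter and only then
call tdx_enable() during its hardware setup.  A minimal sketch, assuming the
parameter and function names below (they are placeholders, not the actual KVM
patches):

#include <linux/module.h>
#include <linux/moduleparam.h>
#include <asm/tdx.h>		/* tdx_enable(), assumed to be declared here */

/* Sketch only: "enable_tdx" and the setup function are placeholders. */
static bool enable_tdx;
module_param(enable_tdx, bool, 0444);

static int __init kvm_tdx_hardware_setup(void)
{
	if (!enable_tdx)
		return 0;	/* TDX stays off unless explicitly requested */

	/* KVM has already done VMXON at this point. */
	if (tdx_enable()) {
		/* Fall back to plain VMX guests if TDX can't be initialized. */
		enable_tdx = false;
	}

	return 0;
}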

> 
> IIRC s390x handles it similarly with protected VMs and required metadata.
> 
> > 
> > 2) SEAMCALL requires the CPU to already be in VMX operation (VMXON has
> > been done).  So far, KVM is the only user of TDX, and it already handles
> > VMXON.  Letting KVM initialize TDX avoids having to handle VMXON in the
> > core kernel.
> > 
> > 3) It is more flexible for supporting "TDX module runtime update" (not in
> > this series).  After updating to a new module at runtime, the kernel needs
> > to go through the initialization process again.
> > 
> > 2. CPU hotplug
> > 
> > The TDX module requires that the per-cpu initialization SEAMCALL
> > (TDH.SYS.LP.INIT) be done on a cpu before any other SEAMCALL can be made
> > on that cpu, including those involved in module initialization.
> > 
> > The kernel provides tdx_cpu_enable() to let the TDX user do this when it
> > wants to use a new cpu for TDX tasks.
> > 
> > TDX doesn't support physical (ACPI) CPU hotplug.  A non-buggy BIOS should
> > never support hotpluggable CPU devices and/or deliver ACPI CPU hotplug
> > events to the kernel.  This series doesn't handle physical (ACPI) CPU
> > hotplug at all but depends on the BIOS to behave correctly.
> > 
> > Note TDX does work with logical CPU online/offline, thus this series
> > still allows logical CPU online/offline.
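
Just to illustrate the intended usage of tdx_cpu_enable() mentioned above: the
TDX user would call it from its CPU online callback before making any other
SEAMCALL on that cpu.  A minimal sketch, assuming a hypothetical "tdx_user"
caller; the callback and state names are made up:

#include <linux/cpuhotplug.h>
#include <asm/tdx.h>		/* tdx_cpu_enable(), assumed to be declared here */

/* Sketch only: "tdx_user" stands in for the real TDX user, e.g. KVM. */
static int tdx_user_cpu_online(unsigned int cpu)
{
	/* AP hotplug states run on the cpu that is coming online. */
	return tdx_cpu_enable();
}

static int __init tdx_user_setup_hotplug(void)
{
	int ret;

	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "tdx_user:online",
				tdx_user_cpu_online, NULL);
	return ret < 0 ? ret : 0;
}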
> > 
> > 3. Kernel policy on TDX memory
> > 
> > The TDX module reports a list of "Convertible Memory Regions" (CMRs) to
> > indicate which memory regions are TDX-capable.  The TDX architecture
> > allows the VMM to designate specific convertible memory regions as usable
> > for TDX private memory.
> > 
> > The initial support of TDX guests will only allocate TDX private memory
> > from the global page allocator.  This series chooses to designate _all_
> > system RAM in the core-mm at the time of initializing the TDX module as
> > TDX memory, to guarantee all pages in the page allocator are TDX pages.
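
(To make "all system RAM in the core-mm at the time of initializing the TDX
module" a bit more concrete: one way to capture that set is to walk memblock
when tdx_enable() runs.  A minimal self-contained sketch which only counts
pages; a real implementation would record each range and hand the list to the
TDX module.)

#include <linux/memblock.h>
#include <linux/numa.h>

/* Sketch only: walk all memory regions known to memblock. */
static unsigned long count_tdx_usable_pages(void)
{
	unsigned long start_pfn, end_pfn, npages = 0;
	int i, nid;

	/* NUMA_NO_NODE: iterate regions on all nodes. */
	for_each_mem_pfn_range(i, NUMA_NO_NODE, &start_pfn, &end_pfn, &nid)
		npages += end_pfn - start_pfn;

	return npages;
}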
> > 
> > 4. Memory Hotplug
> > 
> > After the kernel passes all "TDX-usable" memory regions to the TDX
> > module, the set of "TDX-usable" memory regions is fixed for the module's
> > runtime.  No more "TDX-usable" memory can be added to the TDX module
> > after that.
> > 
> > To achieve the above "guarantee all pages in the page allocator are TDX
> > pages", this series simply chooses to reject any non-TDX-usable memory in
> > memory hotplug.
> > 
> > This _will_ be enhanced in the future after first submission.
> 
> What's the primary reason to enhance that? Are there reasonable use 
> cases? Why would be expect to have other (!TDX capable) memory in the 
> system?

Basically Kirill preferred this.  Please see the paragraph below from my
original cover letter.

But there has been no consensus on whether we should do it, especially within
the community.  I probably should not have used the word _will_ here (I also
kinda forgot to keep this section up to date).

I think I'll either remove this and the below paragraph entirely, or adjust the
wording to say this is perhaps an enhancement we can do in the future.
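
For reference, the current "reject in memory hotplug" behavior is conceptually
just a memory notifier that refuses to online anything outside the ranges
handed to the TDX module.  A rough sketch; is_tdx_memory() below is a
placeholder for the actual coverage check:

#include <linux/memory.h>
#include <linux/notifier.h>

/* Sketch only: is_tdx_memory() is a placeholder for the real coverage check. */
static bool is_tdx_memory(unsigned long start_pfn, unsigned long end_pfn);

static int tdx_memory_notifier(struct notifier_block *nb, unsigned long action,
			       void *v)
{
	struct memory_notify *mn = v;

	if (action != MEM_GOING_ONLINE)
		return NOTIFY_OK;

	/* Refuse to online memory the TDX module doesn't know about. */
	if (!is_tdx_memory(mn->start_pfn, mn->start_pfn + mn->nr_pages))
		return NOTIFY_BAD;

	return NOTIFY_OK;
}

static struct notifier_block tdx_memory_nb = {
	.notifier_call = tdx_memory_notifier,
};

/* Registered via register_memory_notifier(&tdx_memory_nb) at init time. */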

> 
> > 
> > A better solution, suggested by Kirill, is similar to the per-node memory
> > encryption flag in this series [4].  We can allow adding/onlining non-TDX
> > memory to separate NUMA nodes so that both "TDX-capable" nodes and
> > "non-TDX-capable" nodes can co-exist.  The new TDX flag can be exposed to
> > userspace via sysfs so userspace can bind TDX guests to "TDX-capable"
> > nodes via NUMA ABIs.

Also [4] was stripped:

[4]: per-node memory encryption flag
https://lore.kernel.org/linux-mm/20221007155323.ue4cdthkilfy4lbd@xxxxxxxxxxxxxxxxx/t/

> > 
> > 5. Physical Memory Hotplug
> > 
> > Note TDX assumes convertible memory is always physically present during
> > the machine's runtime.  A non-buggy BIOS should never support hot-removal
> > of any convertible memory.  This implementation doesn't handle ACPI memory
> > removal but depends on the BIOS to behave correctly.
> 
> -- 
> Thanks,
> 
> David / dhildenb
> 




