Re: [RFC PATCH 0/4] Alternative TPM patches for Trenchboot

On 11/4/24 08:21, James Bottomley wrote:
> On Mon, 2024-11-04 at 07:19 -0500, Daniel P. Smith wrote:
> > On 11/4/24 06:55, 'Ard Biesheuvel' via trenchboot-devel wrote:
> > > [...]
> > > I was referring specifically to the read-write sysfs node that
> > > permits user space to update the default TPM locality. Does it need
> > > to be writable? And does it need to exist at all?

> This was my question here, which never got answered as well:
>
> https://lore.kernel.org/linux-integrity/685f3f00ddf88e961e2d861b7c783010774fe19d.camel@xxxxxxxxxxxxxxxxxxxxx/

> > Right, sorry. As I recall, that was introduced due to the sequence of
> > how the TPM driver handled locality, moving back to Locality 0 after
> > it was done sending commands. In the Oracle implementation, the
> > initramfs takes integrity measurements of the environment it is about
> > to kexec into, e.g. the target kernel, initramfs, file system, etc.
> > Some of these measurements should go into PCR 17 and PCR 18, which
> > requires Locality 2 to be able to extend those PCRs. If the slmodule
> > is able to set the locality for all PCR extends coming from user
> > space to Locality 2, that removes the current need for it.

> Well, no, that's counter to the desire to have user space TPM commands
> and kernel space TPM commands in different localities.  I thought the
> whole point of having locality restricted PCRs is so that only trusted
> entities (i.e. those able to access the higher locality) could extend
> into them.  If you run every TPM command, regardless of source, in the
> trusted locality, that makes the extends accessible to everyone and
> thus destroys the trust boundary.

> As to Locality switching:
> The call sequence is,
>   tpm_pcr_extend -> tpm_find_get_ops -> tpm_try_get_ops ->
>     tpm_chip_start -> if (chip->locality == -1) tpm_request_locality
> And when the extend completes:
>   out: tpm_put_ops -> tpm_chip_stop -> tpm_relinquish_locality ->
>     chip->locality = -1;

We made slmodule set the locality value used by request/relinquish back
to 0 once it finished its initialization, and added the sysfs nodes so
that the runtime could request a higher locality when it needed to send
measurements. We did this because we did not want to tie the mechanism
to the one use case we are currently focused on.

By the definition I provided earlier, in our use case the initramfs is part of the TCB, as it is embedded into the kernel. As to the locality roles, according to the TPM Platform Profile:
 - Locality 2: Dynamically Launched OS (Dynamic OS) “runtime” environment.
 - Locality 1: An environment for use by the Dynamic OS.

> It also doesn't sound, from the above, like anything in user space
> actually needs this facility.  The measurements of kernel and initramfs
> are already done by the boot stub (to PCR9, but that could be changed)
> so we could do it all from the trusted entity.

I apologize for not expressing this more clearly, because that statement is incorrect. The currently deployed use case works as follows:

[SRTM] --> [GRUB] -- (DLE, terminates SRTM chain) -->
  [CPU] -- (starts DRTM chain) --> [SINIT ACM] -->
  [SL kernel + initramfs] -- (load/measure/kexec) --> [target kernel]

As one can see, the SRTM chain is terminated and its components are not used in the DRTM chain. This model reproduces the tboot model with several enhancements, including being a single solution that works on both Intel and AMD, with Arm enablement currently in progress. It is not the only model that can be used; several others were presented at the 2020 Plumbers conference. A deployed implementation of the secure-upgrade use case was detailed in the 2021 FOSDEM presentation, where the LCP policy is used to tell the ACM which [SL kernel + initramfs] images are allowed to be started by TXT. This allows launching into an upgrade state without having to reboot.

In case the question comes up from those not familiar: on kexec we do a GETSEC[SEXIT], which closes off access to Localities 1 and 2, thus locking the DRTM PCR values. It also brings the CPUs out of SMX mode, so the target kernel does not need any knowledge of running in that mode.

v/r,
dps



