On Fri Feb 23, 2024 at 3:57 AM EET, Daniel P. Smith wrote:
> On 2/20/24 17:31, Jarkko Sakkinen wrote:
> > On Tue Feb 20, 2024 at 10:26 PM UTC, Jarkko Sakkinen wrote:
> >> On Tue Feb 20, 2024 at 8:54 PM UTC, Lino Sanfilippo wrote:
> >>> for (i = 0; i <= MAX_LOCALITY; i++)
> >>>         __tpm_tis_relinquish_locality(priv, i);
> >>
> >> I'm pretty unfamiliar with Intel TXT so asking a dummy question:
> >> if Intel TXT uses locality 2 I suppose we should not try to
> >> relinquish it, or?
> >>
> >> AFAIK, we don't have a symbol called MAX_LOCALITY.
> >
> > OK it was called TPM_MAX_LOCALITY :-) I had the patch set applied
> > in one branch but looked it up with the wrong symbol name.
> >
> > So I'll reformulate my question in two parts:
> >
> > 1. Why does TXT leave locality 2 open in the first place? I did
> >    not see an explanation. Isn't this a bug in TXT?
>
> It does so because that is what the TCG D-RTM specification requires.
> See Section 5.3.4.10 of the TCG D-RTM specification[1]; the first
> requirement is, "The DLME SHALL receive control with access to
> Locality 2."

The locality enumeration from below would also be good to have
documented (as a reminder).

> > 2. Because localities are not too useful these days given TPM2's
> >    policy mechanism, I cannot recall off the top of my head whether
> >    you can have two localities open at the same time. So what kind
> >    of conflict happens when you try to open locality 0 while
> >    locality 2 is open?
>
> I would disagree and would call your attention to the TCG's
> definition/motivation for localities, Section 3.2 of the Client PTP
> specification[2]:
>
> "“Locality” is an assertion to the TPM that a command’s source is
> associated with a particular component. Locality can be thought of as
> a hardware-based authorization. The TPM is not actually aware of the
> nature of the relationship between the locality and the component.
> The ability to reset and extend notwithstanding, it is important to
> note that, from a PCR “usage” perspective, there is no hierarchical
> relationship between different localities. The TPM simply enforces
> locality restrictions on TPM assets (such as PCR or SEALed blobs)."
>
> As stated, from the TPM specification perspective, it is not aware of
> this mapping to components and leaves it to the platform to enforce.

Yeah, the TPM is a passive component, not an active actor, in
everything. The way I see it, locality is a way to separate e.g.
kernel and user space driver TPM transactions, pretty much like an
actor-dependent salt (e.g. if 0 was for user space and 1 was for the
kernel).

> "The protection and separation of the localities (and therefore the
> association with the associated components) is entirely the
> responsibility of the platform components. Platform components,
> including the OS, may provide the separation of localities using
> protection mechanisms such as virtual memory or paging."
>
> The x86 manufacturers opted to adopt the D-RTM specification, which
> defines the components as follows:
>
> Locality 4: Usually associated with the CPU executing microcode. This
> is used to establish the Dynamic RTM.
> Locality 3: Auxiliary components. Use of this is optional and, if
> used, it is implementation dependent.
> Locality 2: Dynamically Launched OS (Dynamic OS) “runtime”
> environment.
> Locality 1: An environment for use by the Dynamic OS.
> Locality 0: The Static RTM, its chain of trust and its environment.
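Coming back to the relinquish loop quoted at the top, here is a rough
(untested) sketch of the alternative I was wondering about in question
1: relinquishing everything except the locality that the D-RTM launch
left open. The helper name and the "keep" parameter are made up for
illustration; only TPM_MAX_LOCALITY and __tpm_tis_relinquish_locality()
come from the patch set:

/*
 * Sketch only: relinquish every TIS locality except the one the
 * D-RTM launch handed over open (locality 2 per TCG D-RTM 5.3.4.10).
 * Relies on tpm_tis_core internals; "keep" is a hypothetical
 * parameter, not something the patch set defines.
 */
static void tpm_tis_relinquish_all_but(struct tpm_tis_data *priv,
				       int keep)
{
	int i;

	for (i = 0; i <= TPM_MAX_LOCALITY; i++) {
		if (i == keep)
			continue;
		__tpm_tis_relinquish_locality(priv, i);
	}
}
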
> And the means to protect and separate those localities are encoded
> in the x86 chipset, i.e. a D-RTM Event must be used to access any of
> the D-RTM Localities (Locality 1 - Locality 4).
>
> For Intel, Locality 4 can only be accessed when a dedicated signal
> between the CPU and the chipset is raised, thus only allowing the CPU
> to utilize Locality 4. The CPU will then close Locality 4,
> authenticate and give control to the ACM with access to Locality 3.
> When the ACM is complete, it will instruct the chipset to lock
> Locality 3 and give control to the DLME (MLE in Intel parlance) with
> Locality 2 open. It is up to the DLME, the Linux kernel in this case,
> to decide how to assign components to Locality 1 and 2.
>
> As to proposals to utilize localities by the Linux kernel, the only
> one I was aware of was dropped because they couldn't open the higher
> localities.
>
> I would also highlight that the D-RTM implementation guide for Arm
> allows for a hardware D-RTM event, with which a vendor may choose to
> implement hardware/CPU-enforced access to TPM localities. Thus, the
> ability to support localities will also become a requirement for
> certain Arm CPUs.
>
> [1]
> https://trustedcomputinggroup.org/wp-content/uploads/TCG_D-RTM_Architecture_v1-0_Published_06172013.pdf
> [2]
> https://trustedcomputinggroup.org/wp-content/uploads/PC-Client-Specific-Platform-TPM-Profile-for-TPM-2p0-v1p05p_r14_pub.pdf

BR, Jarkko