On 2/21/24 14:43, Jarkko Sakkinen wrote:
On Wed Feb 21, 2024 at 12:37 PM UTC, James Bottomley wrote:
On Tue, 2024-02-20 at 22:31 +0000, Jarkko Sakkinen wrote:
2. Because localities are not too useful these days given TPM2's
policy mechanism
Localities are useful to the TPM2 policy mechanism. When we get key
policy in the kernel it will give us a way to create TPM wrapped keys
that can only be unwrapped in the kernel if we run the kernel in a
different locality from userspace (I already have demo patches doing
this).
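(As an aside for anyone following along: the digest for such a
locality-bound policy can be computed from userspace with a trial
session. Below is a rough, untested sketch against the tss2-esys API;
it is only an illustration, not James's demo patches, and it assumes a
TPM or simulator reachable through the default TCTI loader.)

#include <stdio.h>
#include <stdlib.h>
#include <tss2/tss2_esys.h>

#define CHECK(rc) do { \
        if ((rc) != TSS2_RC_SUCCESS) { \
                fprintf(stderr, "TSS2 rc=0x%x\n", (rc)); \
                exit(1); \
        } \
} while (0)

int main(void)
{
        ESYS_CONTEXT *ctx;
        ESYS_TR session;
        TPM2B_DIGEST *digest;
        TPMT_SYM_DEF sym = { .algorithm = TPM2_ALG_NULL };

        CHECK(Esys_Initialize(&ctx, NULL, NULL));

        /* A trial session computes the policy digest without enforcing it. */
        CHECK(Esys_StartAuthSession(ctx, ESYS_TR_NONE, ESYS_TR_NONE,
                                    ESYS_TR_NONE, ESYS_TR_NONE, ESYS_TR_NONE,
                                    NULL, TPM2_SE_TRIAL, &sym,
                                    TPM2_ALG_SHA256, &session));

        /*
         * Fold TPM2_PolicyLocality(locality 2) into the digest. A key
         * created with this digest as its authPolicy can then only be
         * used by commands issued at locality 2, e.g. by the kernel,
         * if userspace is confined to locality 0.
         */
        CHECK(Esys_PolicyLocality(ctx, session, ESYS_TR_NONE, ESYS_TR_NONE,
                                  ESYS_TR_NONE, TPMA_LOCALITY_TPM2_LOC_TWO));

        CHECK(Esys_PolicyGetDigest(ctx, session, ESYS_TR_NONE, ESYS_TR_NONE,
                                   ESYS_TR_NONE, &digest));

        for (UINT16 i = 0; i < digest->size; i++)
                printf("%02x", digest->buffer[i]);
        printf("\n");

        Esys_Free(digest);
        Esys_FlushContext(ctx, session);
        Esys_Finalize(&ctx);
        return 0;
}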
Let's keep this discussion in scope, please.
Removing unused code, even when it touches registers that might have a
genuinely useful purpose later, is not the wrong thing to do. It is
better to look at things from a clean slate when the time comes.
I cannot recall off the top of my head whether you can have two
localities open at the same time.
I think there's a misunderstanding about what localities are: they're
effectively an additional platform supplied tag to a command. Each
command can therefore have one and only one locality. The TPM doesn't
Actually, this was not unclear at all. I even re-read the relevant
chapters of Ariel Segall's book yesterday as a refresher.
I was merely asking how the hardware reacts if TPM_ACCESS_X is not
properly cleared and you then use TPM_ACCESS_Y where Y < X, as the bug
report is fairly open-ended and not very clear about the steps leading
to the unwanted results.
With a quick check of [1] I could not spot the conflict reaction, but
it is probably there.
The expected behavior is explained in the Informative Comment of section
6.5.2.4 of the Client PTP spec[1]:
"The purpose of this register is to allow the processes operating at the
various localities to share the TPM. The basic notion is that any
locality can request access to the TPM by setting the
TPM_ACCESS_x.requestUse field using its assigned TPM_ACCESS_x register
address. If there is no currently set locality, the TPM sets current
locality to the requesting one and allows operations only from that
locality. If the TPM is currently at another locality, the TPM keeps the
request pending until the currently executing locality frees the TPM.
Software relinquishes the TPM’s locality by writing a 1 to the
TPM_ACCESS_x.activeLocality field. Upon release, the TPM honors the
highest locality request pending. If there is no pending request, the
TPM enters the “free” state."
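In driver terms the flow the spec describes is roughly the following.
This is only an illustrative sketch, not the actual tpm_tis code;
read8()/write8() stand in for whatever MMIO accessors the platform
provides, while the bit masks and the 4 KiB-per-locality register
layout are the ones the TIS/PTP registers define:

#include <stdbool.h>
#include <stdint.h>

/* TPM_ACCESS_x bits, as defined by the TIS/PTP register interface. */
#define TPM_ACCESS_VALID                0x80
#define TPM_ACCESS_ACTIVE_LOCALITY      0x20
#define TPM_ACCESS_REQUEST_PENDING      0x04
#define TPM_ACCESS_REQUEST_USE          0x02

/* TPM_ACCESS_x sits at offset 0 of each locality's 4 KiB register page. */
#define TPM_ACCESS(l)                   (0x0000 | ((l) << 12))

/* Hypothetical MMIO accessors standing in for the platform's own. */
extern uint8_t read8(uint32_t reg);
extern void write8(uint8_t val, uint32_t reg);

/*
 * Request locality l and wait until the TPM grants it. If another
 * locality is currently active, the request simply stays pending
 * (the active locality sees requestPending set) until that locality
 * is relinquished, so a real driver bounds this loop with a timeout.
 */
static bool request_locality(int l)
{
        uint8_t granted = TPM_ACCESS_ACTIVE_LOCALITY | TPM_ACCESS_VALID;

        write8(TPM_ACCESS_REQUEST_USE, TPM_ACCESS(l));
        while ((read8(TPM_ACCESS(l)) & granted) != granted)
                ; /* busy-wait; a real driver sleeps and times out */
        return true;
}

/*
 * Relinquish locality l by writing 1 to activeLocality. The TPM then
 * grants the highest-numbered pending request, or goes free if none
 * is pending.
 */
static void release_locality(int l)
{
        write8(TPM_ACCESS_ACTIVE_LOCALITY, TPM_ACCESS(l));
}

So, to the TPM_ACCESS_Y question above: if I read the spec correctly,
writing requestUse for a lower locality while a higher one is still
active does not conflict; the request is just parked until the active
locality writes activeLocality back.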
I think the locality request/relinquish was modelled after some other
HW, but I don't know what.
My wild guess: the first implementation was made when TPMs became
available, and there was no analytical thinking other than getting
something that runs :-)
Actually, no, that is not how it was done. IIRC, localities were designed
in conjunction with D-RTM when Intel and MS started the LaGrande effort
back in 2000. It was then generalized for the TPM 1.1b specification. My
first introduction to LaGrande/TXT wasn't until 2005, as part of an early
access program. So most of my historical understanding is from
discussions I luckily got to have with one of the architects and a few
of the original TCG committee members.
[1]
https://trustedcomputinggroup.org/wp-content/uploads/PC-Client-Specific-Platform-TPM-Profile-for-TPM-2p0-v1p05p_r14_pub.pdf
v/r,
dps