On 8/2/23 19:13, Jerry Snitselaar wrote:
On Tue, Aug 01, 2023 at 10:09:58PM +0300, Jarkko Sakkinen wrote:
On Tue Aug 1, 2023 at 9:42 PM EEST, Linus Torvalds wrote:
On Tue, 1 Aug 2023 at 11:28, Jarkko Sakkinen <jarkko@xxxxxxxxxx> wrote:
I would disable it inside the tpm_crb driver, which is the driver used
for fTPMs: they are identified by the MSFT0101 ACPI identifier.
I think the right scope is still AMD because we don't have such
regressions with Intel fTPM.
I'm ok with that.
I.e., I would move the helper I created into the tpm_crb driver, and add
a new flag, let's say "TPM_CHIP_FLAG_HWRNG_DISABLED", which tpm_crb
sets before calling tpm_chip_register().
Finally, tpm_add_hwrng() needs the following invariant:
if (chip->flags & TPM_CHIP_FLAG_HWRNG_DISABLED)
	return 0;
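Roughly, the tpm_crb side would look something like this (just a sketch;
crb_check_hwrng_disabled() is a placeholder name for the moved helper, and
the flag's bit value is only illustrative):

/* include/linux/tpm.h */
enum tpm_chip_flags {
	...
	TPM_CHIP_FLAG_HWRNG_DISABLED = BIT(7),
};

/* drivers/char/tpm/tpm_crb.c */
static int crb_acpi_add(struct acpi_device *device)
{
	...
	/* Placeholder for the moved AMD fTPM firmware check. */
	if (crb_check_hwrng_disabled(chip))
		chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;

	return tpm_chip_register(chip);
}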
How does this sound? I can refine this quickly from my first trial.
Sounds fine.
Mario, it would be good if you could send a fix candidate, taking my
suggestion for a new TPM chip flag into account while doing it. Please
send it as a separate patch, not as an attachment to this thread.
I can test and ack it, if it looks reasonable.
My only worry comes from my ignorance: do these fTPM devices *always*
end up being enumerated through CRB, or do they potentially look
"normal enough" that you can actually end up using them even without
having that CRB driver loaded?
I know that QEMU has TPM passthrough but I don't know how it behaves
exactly.
I just created a passthrough TPM device for a guest, which is using
the tis driver, while the host is using crb (and apparently one of the
AMD devices that has an impacted fTPM). It looks like there is a
complete separation between the frontend and backend, with the
frontend providing either a tis or crb interface to the guest, and the
backend sending commands by writing to the passthrough device it was
given, such as /dev/tpm0, or to an emulator such as swtpm. Stefan can
probably explain it much better than I can.
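For reference, a minimal passthrough setup on the QEMU command line looks
roughly like this (IDs and paths are only examples, and the frontend could
just as well be tpm-crb instead of tpm-tis):

qemu-system-x86_64 ... \
    -tpmdev passthrough,id=tpm0,path=/dev/tpm0 \
    -device tpm-tis,tpmdev=tpm0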
You explained it well... The passthrough TPM is only good for one VM (if
at all), and all other VMs on the same machine should use a vTPM. Even
one VM sharing the TPM with the host creates a potential mess with the
shared resources of the TPM, such as the state of the PCRs.
When that guest VM using the passthrough device now identifies the underlying
hardware TPM's firmware version, it will also take the same action and disable
the TPM as a source of randomness. But then a VM with a passthrough TPM
device should be rather rare...
Put another way: is the CRB driver the _only_ way they are visible, or
could some people hit on this through the TPM TIS interface if they
have CRB disabled?
I'm not aware of such implementations.
CRB and TIS are two distinct MMIO-type interfaces with different registers, etc.
AMD could theoretically build an fTPM with a CRB interface and then another one with the same firmware behind a TIS interface, but why would they?
Stefan
I see, for example, that qemu ends up emulating the TIS layer, and it
might end up forwarding the TPM requests to something that is natively
CRB?
But again: I don't know enough about CRB vs TIS, so the above may be a
stupid question.
Linus
I would focus on exactly what is known not to work and disable exactly
that.
If someone still wants to enable the TPM hwrng on such hardware, we can
later on add a kernel command-line flag to enforce it. This would of
course be based on user feedback, not something I would add right now.
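If it ever comes to that, something along these lines would probably be
enough (again just a sketch; the parameter name and the
crb_check_hwrng_disabled() helper are made up here, and with tpm_crb
built in it would be tpm_crb.force_hwrng=1 on the kernel command line):

/* drivers/char/tpm/tpm_crb.c: hypothetical opt-in, not part of the fix */
static bool force_hwrng;
module_param(force_hwrng, bool, 0444);
MODULE_PARM_DESC(force_hwrng, "Use the fTPM as a hwrng even on affected firmware");

	/* in crb_acpi_add(), before tpm_chip_register(): */
	if (crb_check_hwrng_disabled(chip) && !force_hwrng)
		chip->flags |= TPM_CHIP_FLAG_HWRNG_DISABLED;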
BR, Jarkko