Re: [PATCH] tpm: Add driver for TPM over virtio

On 2/22/19 5:34 PM, James Bottomley wrote:
> On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
> [...]
>> I appreciate the explanation and link, James!
>>
>> I had briefly investigated the existing support in QEMU before
>> pursuing a virtio based driver. At the time, I determined that QEMU
>> implements a register level emulation of a TPM rather than what our
>> team would consider a minimum viable vTPM.
> 
> Actually, no, it doesn't at all.  QEMU implements nothing about a TPM. 
> You have to set up a software TPM outside of qemu which talks over a
> socket and then use the vTPM socket to pass that TPM through to qemu. 
> Effectively QEMU is TPM implementation blind (which is why it can do
> both 1.2 and 2.0) all it provides is discovery of the virtual hardware.

Thanks, this sounds very similar to our use case. We'd like crosvm to be
TPM implementation blind as well, with the TPM implementation running on
the host and attached to the hypervisor via socket or D-Bus. The TPM
implementation may be purely software or a daemon backed by a hardware
TPM.

Sounds like there is a lot of overlap.


>>  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c) as
>> well as CRB interface (QEMU's tpm_crb.c) which require Linux's TIS
>> driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
>> respectively. Both of those are based on ACPI.
> 
> That's right, QEMU implements the device interface emulation, but it
> passes the actual TPM communication packets to the vTPM outside QEMU.

Could you clarify what you mean by a TPM communication packet? I am less
familiar with TPM and QEMU, and I don't see "packet" terminology used in
drivers/char/tpm. Is a packet equivalent to a fully formed TPM command /
response, or is it a lower-level aspect of the device interface than that?
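
To make sure I'm asking the right question: by "fully formed TPM
command" I mean a self-contained buffer that starts with the standard
10-byte TPM 2.0 header (tag, total size, command code) followed by the
command parameters. For example, a TPM2_GetRandom request for 16 bytes
of entropy is the 12-byte buffer

    80 01          TPM_ST_NO_SESSIONS
    00 00 00 0c    commandSize = 12
    00 00 01 7b    TPM_CC_GetRandom
    00 10          bytesRequested = 16

and the response is likewise a single self-contained buffer.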

More concretely, would you say that a hypervisor necessarily needs to
implement TPM device interface emulation (TIS and/or CRB) in order to
expose a TPM running on the host to its guest OS? I can see that QEMU
implements both.


>> As far as I can tell, QEMU does not provide a mode in which the
>> tpm_vtpm_proxy driver would be involved *in the guest*.
> 
> It doesn't need to.  the vTPM proxy can itself do all of that using the
> guest Linux kernel.  There's no hypervisor or host involvement.  This
> is analogous to the vTPM for container use case, except that to get
> both running in a guest you'd use no containment, so the vtpm client
> and server run in the guest together:
> 
> https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.html

I apologize for still not grasping how this would apply. You bring up a
vtpm proxy that runs in the guest Linux kernel with no hypervisor or
host involvement, with the vtpm client and server running in the guest
together. But host involvement is specifically what we want since only
the host is trusted to run the software TPM implementation or interact
with a hardware TPM. I am missing a link in the chain:

- guest userspace makes TPM call (through tpm2-tss or however else);
- guest kernel receives the call in tpm-dev-common / tpm-interface;
- tpm-interface delegates to a tpm-chip implementation (which one?
  vtpm_proxy_tpm_ops?);
- ??? (see the sketch after this list)
- a host daemon triages and eventually performs the TPM operation.
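
To make the third and fourth steps concrete, the shape I have in mind
for the missing tpm-chip backend is roughly the following (a minimal
sketch, not the actual patch; the vtpm_transport_* helpers are
hypothetical placeholders for whatever carries buffers between guest
and host, a virtqueue in the proposed driver):

    #include <linux/tpm.h>

    /*
     * The TPM core hands ->send() a fully formed command buffer and
     * later calls ->recv() to collect the response.
     */
    static int vtpm_send(struct tpm_chip *chip, u8 *buf, size_t len)
    {
            /* forward the complete command to the host TPM daemon */
            return vtpm_transport_send(chip, buf, len);
    }

    static int vtpm_recv(struct tpm_chip *chip, u8 *buf, size_t len)
    {
            /* copy the host's response back into the core's buffer */
            return vtpm_transport_recv(chip, buf, len);
    }

    static const struct tpm_class_ops vtpm_ops = {
            .flags = TPM_OPS_AUTO_STARTUP,
            .send  = vtpm_send,
            .recv  = vtpm_recv,
    };

The open question for me is what sits behind those transport
placeholders when only the host is allowed to run or reach the TPM,
which is the gap the proposed virtio driver fills.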


>> Certainly you could use a vtpm proxy driver *on the host* but would
>> still need some other TPM driver running in the guest for
>> communication with the host, possibly virtio. If this second approach
>> is what you have in mind, let me know but I don't think it is
>> applicable to the Chrome OS use case.
> 
> Actually, the vTPM on-host use case doesn't use the in kernel vtpm
> proxy driver, it uses a plain unix socket.  That's what the original
> website tried to explain: you set up swtpm in socket mode, you point
> the qemu tpm emulation at the socket and you boot up your guest.

Okay. If I understand correctly, the vTPM on-host use case operates
through TIS and/or CRB implemented in QEMU and the tpm_tis / tpm_crb
driver in the guest. Do I have it right?
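
For reference, my mental model of that setup is the documented swtpm
flow, roughly (paths here are placeholders):

    swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm \
        --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock

    qemu-system-x86_64 ... \
        -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock \
        -tpmdev emulator,id=tpm0,chardev=chrtpm \
        -device tpm-tis,tpmdev=tpm0

with the guest then binding its tpm_tis driver against the emulated TIS
registers.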

All of this is what I would like to avoid by using a virtio driver.


>> Clearly it's possible for us to go the QEMU route and implement ACPI
>> (which crosvm does not otherwise need) plus one or both of TIS and
>> CRB in crosvm, but since all we need is for TPM command buffers to
>> leave the VM and TPM response buffers to enter the VM, all of that
>> seems unnecessarily complicated. A virtio driver substantially
>> lowers the barrier to implementing a hypervisor vTPM.
> 
> I don't believe it requires ACPI, that's just one common way of
> enumerating TPMs and it's how the guest finds it.  If you implemented
> the QEMU passthrough in crosvm, you could use whatever mechanism that's
> convenient to you and would cause a TPM driver to bind.  It's the QEMU
> layer that provides the virtual hardware emulation for the device and
> the external vTPM that provides the TPM implementation.  The two are
> completely decoupled.
> 
> Are you saying crosvm has no ability at all to emulate the discovery
> that we use in the kernel to find TPMs?  Is it some type of
> Firecracker-like thing that only supports fully emulated devices?

I am still digesting the rest of your comment, but yes, Firecracker is a
fork of crosvm, so they are similar in this regard.
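
To the discovery question specifically: with virtio, discovery is just
the virtio device ID, which crosvm already knows how to expose, so no
ACPI or register-level emulation is needed for the guest driver to
bind. On the guest side it would look like any other virtio driver,
roughly (a sketch only; VIRTIO_ID_TPM stands for the device ID this
patch would reserve and is not an existing constant):

    #include <linux/module.h>
    #include <linux/virtio.h>
    #include <linux/virtio_ids.h>

    static const struct virtio_device_id vtpm_id_table[] = {
            /* VIRTIO_ID_TPM: the ID this patch would reserve */
            { VIRTIO_ID_TPM, VIRTIO_DEV_ANY_ID },
            { 0 },
    };

    static int vtpm_probe(struct virtio_device *vdev)
    {
            /* set up a virtqueue and register a tpm_chip behind it */
            return 0;
    }

    static void vtpm_remove(struct virtio_device *vdev)
    {
    }

    static struct virtio_driver vtpm_driver = {
            .driver.name = "virtio-tpm",
            .id_table    = vtpm_id_table,
            .probe       = vtpm_probe,
            .remove      = vtpm_remove,
    };
    module_virtio_driver(vtpm_driver);
    MODULE_LICENSE("GPL");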

Thanks for your guidance and patience!


>> Separately, I'd be curious whether you share Jason Gunthorpe's
>> opinion stated elsewhere in the thread, or whether you would
>> encourage the virtio TPM driver to be kept private if feasible
>> alternative drivers already exist. Jason's comment:
>>
>>> We already have a xen 'virtioish' TPM driver, so I don't think
>>> there is a good reason to block a virtio driver if someone cares
>>> about it. There are enough good reasons to prefer virtio to other
>>> options, IMHO.
> 
> I've no real opinion on that one until I understand why you went down
> this path instead of using existing implementations.  Right at the
> moment I do get the impression it's because you didn't know how the
> existing implementations worked.
> 
> James
> 
> 



