Re: [PATCH] tpm: Add driver for TPM over virtio

On Fri, 2019-02-22 at 16:45 -0800, David Tolnay wrote:
[...]
> I appreciate the explanation and link, James!
> 
> I had briefly investigated the existing support in QEMU before
> pursuing a virtio based driver. At the time, I determined that QEMU
> implements a register level emulation of a TPM rather than what our
> team would consider a minimum viable vTPM.

Actually, no, it doesn't at all.  QEMU implements no TPM functionality
itself.  You have to set up a software TPM outside of QEMU, which talks
over a socket, and then point QEMU at that socket to pass the TPM
through to the guest.  Effectively QEMU is TPM implementation blind
(which is why it can do both 1.2 and 2.0); all it provides is discovery
of the virtual hardware.

>  It implements the TPM-specific TIS interface (QEMU's tpm_tis.c) as
> well as CRB interface (QEMU's tpm_crb.c) which require Linux's TIS
> driver (Linux's tpm_tis.c) and CRB driver (Linux's tpm_crb.c)
> respectively. Both of those are based on ACPI.

That's right, QEMU implements the device interface emulation, but it
passes the actual TPM communication packets to the vTPM outside QEMU.

> As far as I can tell, QEMU does not provide a mode in which the
> tpm_vtpm_proxy driver would be involved *in the guest*.

It doesn't need to.  The vTPM proxy can itself do all of that using the
guest Linux kernel.  There's no hypervisor or host involvement.  This
is analogous to the vTPM for container use case, except that to get
both running in a guest you'd use no containment, so the vTPM client
and server run together inside the guest:

https://www.kernel.org/doc/html/v4.16/security/tpm/tpm_vtpm_proxy.html
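
To make that concrete, the guest-internal setup is roughly the
following sketch against the documented /dev/vtpmx interface (error
handling and the actual TPM server loop are omitted; treat it as an
illustration, not a complete vTPM):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vtpm_proxy.h>

int main(void)
{
        /* Ask the vtpm proxy driver for a new client/server device
         * pair; VTPM_PROXY_FLAG_TPM2 requests a TPM 2.0 interface on
         * the client side.
         */
        struct vtpm_proxy_new_dev new_dev = {
                .flags = VTPM_PROXY_FLAG_TPM2,
        };
        int vtpmx = open("/dev/vtpmx", O_RDWR);

        if (vtpmx < 0 ||
            ioctl(vtpmx, VTPM_PROXY_IOC_NEW_DEV, &new_dev) < 0)
                return 1;

        /* The kernel has now created /dev/tpm<tpm_num> for in-guest
         * clients.  The vTPM server reads TPM command buffers from
         * new_dev.fd and writes response buffers back to it, all
         * inside the guest with no host involvement.
         */
        printf("client device: /dev/tpm%u, server fd: %u\n",
               new_dev.tpm_num, new_dev.fd);
        return 0;
}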

>  Certainly you could use a vtpm proxy driver *on the host* but would
> still need some other TPM driver running in the guest for
> communication with the host, possibly virtio. If this second approach
> is what you have in mind, let me know but I don't think it is
> applicable to the Chrome OS use case.

Actually, the vTPM on-host use case doesn't use the in-kernel vtpm
proxy driver; it uses a plain unix socket.  That's what the original
website tried to explain: you set up swtpm in socket mode, you point
the QEMU TPM emulation at the socket, and you boot your guest.
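
Roughly like this (the state directory and socket path below are just
placeholders):

    swtpm socket --tpm2 --tpmstate dir=/tmp/mytpm \
        --ctrl type=unixio,path=/tmp/mytpm/swtpm-sock

    qemu-system-x86_64 ... \
        -chardev socket,id=chrtpm,path=/tmp/mytpm/swtpm-sock \
        -tpmdev emulator,id=tpm0,chardev=chrtpm \
        -device tpm-tis,tpmdev=tpm0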

> Clearly it's possible for us to go the QEMU route and implement ACPI
> (which crosvm does not otherwise need) plus one or both of TIS and
> CRB in crosvm, but since all we need is for TPM command buffers to
> leave the VM and TPM response buffers to enter the VM, all of that
> seems unnecessarily complicated. A virtio driver substantially
> lowers the barrier to implementing a hypervisor vTPM.

I don't believe it requires ACPI; that's just one common way of
enumerating TPMs, and it's how the guest finds the device.  If you
implemented the QEMU-style passthrough in crosvm, you could use
whatever enumeration mechanism is convenient for you and causes a TPM
driver to bind.  It's the QEMU layer that provides the virtual hardware
emulation for the device and the external vTPM that provides the TPM
implementation.  The two are completely decoupled.
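
For instance, I believe you can still bind tpm_tis in the guest with no
ACPI entry at all by forcing a probe of the fixed TIS window:

    modprobe tpm_tis force=1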

Are you saying crosvm has no ability at all to emulate the discovery
that we use in the kernel to find TPMs?  Is it some Firecracker-like
thing that only supports fully paravirtualized devices?

> Separately, I'd be curious whether you share Jason Gunthorpe's
> opinion stated elsewhere in the thread, or whether you would
> encourage the virtio TPM driver to be kept private if feasible
> alternative drivers already exist. Jason's comment:
> 
> > We already have a xen 'virtioish' TPM driver, so I don't think
> > there is a good reason to block a virtio driver if someone cares
> > about it. There are enough good reasons to prefer virtio to other
> > options, IMHO.

I've no real opinion on that one until I understand why you went down
this path instead of using the existing implementations.  Right at the
moment I do get the impression it's because you didn't know how the
existing implementations worked.

James



