Re: [PATCH] tpm: Add driver for TPM over virtio

On Sun, 2019-02-24 at 14:12 -0800, David Tolnay wrote:
> On 2/24/19 8:30 AM, James Bottomley wrote:
> > QEMU implements a virtual hardware emulation for discovery, but
> > once discovered all the packet communication is handed off to the
> > vTPM socket.
> > 
> > The virtual hardware emulation can be anything we have a driver
> > for. TIS is the simplest, which is why I think they used it.  TIS
> > is actually a simple interface specification, it supports discovery
> > over anything, but the discovery implemented in standard guest
> > drivers is over ACPI, OF and PNP.  If you want more esoteric
> > discovery methods, we also support i2c.  However, the latter is
> > really only for embedded.  I think QEMU chose TIS because it works
> > seamlessly on both Linux and Windows guests.
> > 
> > 
> > > All of this is what I would like to avoid by using a virtio
> > > driver.
> > 
> > How? Discovery is the part that you have to do, whether it's using
> > emulated physical mechanisms or virtual bus discovery.
> 
> What I mean is that we avoid the need for *TPM-specific virtual
> hardware emulation* for discovery by using a virtio driver.
> 
> We avoid implementing TIS or any other TPM-specific interface
> mechanism, and we avoid implementing ACPI or OF or PNP or I2C or any
> other additional bus necessitated by the existing set of Linux TPM
> drivers which we wouldn't otherwise need.
> 
> The virtio driver performs discovery via virtio, which crosvm
> implements already for all of its supported devices. This
> substantially reduces the amount of TPM-specific code compared to
> your suggestions, and lowers the barrier to entry for implementing
> TPM support in other hypervisors which I hope we agree is beneficial.

Well, that's somewhat misleading: the reason we already have two
hypervisor-specific drivers is that every hypervisor has a different
virtual discovery mechanism.  You didn't find the other two hypervisor
drivers remotely useful, so why would another hypervisor find yours
useful?

> Turning this around a different way, suppose that there already was a
> virtio TPM driver in the kernel.

There already are two paravirt TPM drivers: xen-tpmfront and
tpm_ibmvtpm.

The reason we have so many is that every hypervisor implements a
different virtual bus mechanism.  So if we add this for you, all we
need is an ESX driver to have the full complement, at least for the
four main hypervisors.  There are probably a huge number of minor
ones as well, like the Parallels hypervisor, VirtualBox, etc. ... by
the time we're done we'll have ten or so paravirt TPM drivers.

>  For me as a hypervisor implementer, what advantages do you see that
> would make me decide to implement TPM-specific virtual hardware
> emulation in the form of TIS rather than simply leveraging a virtio
> driver like for other virtual devices?

So your argument is that for every device we have in the Linux kernel,
we should also have N hypervisor-specific paravirt variants of the same
thing?  I assure you that's not going to fly, because paravirt drivers
would then outnumber real drivers by an order of magnitude.

> > If you want to make this more concrete: I once wrote a pure socsim
> > packet TPM driver:
> > 
> > https://patchwork.ozlabs.org/patch/712465/
> > 
> > Since you just point it at the network socket, it does no discovery
> > at all and works in any Linux environment that has net.  I actually
> > still use it because a socsim TPM is easier to debug from the
> > outside. However it was 230 lines.  Your device is 460 so that
> > means about half your driver is actually about discovery.
> 
> It looks like all the comments in the virtio driver account for the
> difference, not necessarily discovery.
> 
> But to put this in perspective, what we save is the 1000+ lines I see
> in QEMU dedicated to TIS. Without a virtio driver (or socsim, or
> something else that avoids TPM-specific hardware emulation for device
> discovery), QEMU and crosvm and other hypervisors need to reproduce a
> TIS implementation. Conceptually this is simple but it is still a
> quite substantial barrier compared to not needing any TPM-specific
> discovery.
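[For context on the socsim approach under discussion: the simulator
socket protocol frames each raw TPM command with a small header and
needs no register-level emulation at all.  A minimal sketch of that
framing, assuming the IBM/Microsoft TPM simulator wire protocol (the
function names here are illustrative, not taken from James's driver):]

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of socsim-style packet framing (an assumption based on the
 * IBM/Microsoft TPM simulator socket protocol): the client sends a
 * 4-byte big-endian platform command (TPM_SEND_COMMAND = 8), one
 * locality byte, a 4-byte big-endian length, then the raw TPM command.
 * No TIS registers, no ACPI/OF/PNP discovery.
 */

#define TPM_SEND_COMMAND 8u

static void put_be32(uint8_t *p, uint32_t v)
{
	p[0] = v >> 24;
	p[1] = v >> 16;
	p[2] = v >> 8;
	p[3] = v;
}

/*
 * Write the framed command into out (caller supplies at least
 * 9 + cmd_len bytes); returns the total number of bytes to send
 * over the connected socket.
 */
static size_t socsim_frame(uint8_t *out, uint8_t locality,
			   const uint8_t *cmd, uint32_t cmd_len)
{
	put_be32(out, TPM_SEND_COMMAND);
	out[4] = locality;
	put_be32(out + 5, cmd_len);
	memcpy(out + 9, cmd, cmd_len);
	return 9 + cmd_len;
}
```

[The driver then just writes this frame to the socket and reads back a
length-prefixed response; the entire transport fits in a couple of
functions, which is why the socsim driver came in at roughly half the
size of the virtio one.]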

Paravirt drivers are something we add when there's a pragmatic use
case.  Paravirt is not a panacea because it costs us in terms of
additional maintenance burden.  You also still need a receiver in the
hypervisor even for a paravirt driver.  We can argue about the amount
of code you need for the receiver, but without adding some code another
hypervisor can't make use of your paravirt driver.  And, of course, if
they use a different virtual bus implementation, as every hypervisor
seems to, it's an enormous amount of code to emulate your bus
implementation.

> > The only reasons I can see to use a virtual bus is either because
> > its way more efficient (the storage/network use case) or because
> > you've stripped down the hypervisor so far that it's incapable of
> > emulating any physical device (the firecracker use case).
> 
> Crosvm does fall under the Firecracker use case, I believe.

Well, you just added USB emulation:

https://www.aboutchromebooks.com/news/project-crostini-usb-support-linux-chrome-os/

You didn't tell the kernel USB subsystem to add virtio USB drivers ...

What I've been fishing for over the last several emails is the
pragmatic use case ... do you have one?

James



