Re: [PATCH] tpm: Add driver for TPM over virtio

On 2/22/19 2:24 PM, Michael S. Tsirkin wrote:
> On Fri, Feb 22, 2019 at 01:40:25PM -0800, David Tolnay wrote:
>> On 2/21/19 9:51 PM, Michael S. Tsirkin wrote:
>>>
>>> Maintainer entry?
>>
>> Good call, I will add one.
>>
>> Could you let me know what workflow would work best for you? I can direct
>> patches toward Chrome OS's kernel tree and only a Chrome OS list, then send them
>> your way only once they have been reviewed and accepted there. Or I can list one
>> or both of the linux-integrity@v.k.o and virtualization@l.l-f.o lists along with
>> a list for Chrome OS reviewers.
>>
>> Either way, we'll want eyes from people who know virtio and from people who know
>> TPM on most changes.
> 
> Well if you are changing a host/guest interface then you also need to copy
> virtio-dev. That one is subscriber only so that would
> imply sending there after it's reviewed in chrome.
> As an extra bonus reviewed code is hopefully higher quality
> so less review work for me ;)
> so the first option sounds like a better plan.
> 
> But I hope accepted above just means it goes on some branch,
> such that we won't end up with production code that
> is set in stone and needs to be maintained forever?

Understood. We will avoid considering anything canonical before
upstream has a chance to weigh in. Thanks for the caution.


>>>> + *
>>>> + * The intended hypervisor-side implementation is as follows.
>>>> + *
>>>> + *     while true:
>>>> + *         await next virtio buffer.
>>>> + *         expect first descriptor in chain to be guest-to-host.
>>>> + *         read tpm command from that buffer.
>>>> + *         synchronously perform TPM work determined by the command.
>>>> + *         expect second descriptor in chain to be host-to-guest.
>>>> + *         write TPM response into that buffer.
>>>> + *         place buffer on virtio used queue indicating how many bytes written.
>>>
>>> That's fine I think except generally it should be legal for guest
>>> to split each buffer to several segments.
>>
>> Acknowledged. I will adjust this comment.
>>
>> To clarify, this means the hypervisor will need to accept a single descriptor
>> chain consisting of one or more guest-to-host descriptors which together form
>> the command, followed by one or more host-to-guest descriptors into which the
>> response may be written. Is that what you had in mind?
> 
> Exactly.
> 
>> TPM commands and responses are both capped at a moderate size (4096 bytes). With
>> that in mind, is it still necessary to allow split buffers? We won't be dealing
>> with multi-megabyte transfers.
> 
> This has been a general virtio rule, yes. See for example "2.6.4 Message
> Framing" in v1.1 draft.
> 
> It's generally not hard to implement and generally easier than argue about
> whether it's necessary. If you think it's a big problem, it's a good
> idea to take it up with the virtio tc. Address this to virtio-comment
> list.

I appreciate the explanation. As you said, it's easy to implement, so I'll
fix this comment to allow for buffers split into several segments. I wasn't
aware of this general virtio rule, but it makes sense to me!
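
For my own clarity, here is a rough sketch (not the actual driver code;
the function and buffer names are placeholders) of what the guest-side
submission would look like once the command and response may each span
multiple segments, using virtqueue_add_sgs():

#include <linux/scatterlist.h>
#include <linux/virtio.h>

/*
 * Sketch only: submit one TPM command split across two guest-to-host
 * segments, followed by a single host-to-guest segment for the response.
 * The device treats the chain as one logical command buffer and one
 * logical response buffer regardless of how the segments are split.
 */
static int vtpm_sketch_submit(struct virtqueue *vq,
			      void *cmd_hdr, size_t hdr_len,
			      void *cmd_body, size_t body_len,
			      void *resp, size_t resp_cap)
{
	struct scatterlist out_sgl[2], in_sgl[1];
	struct scatterlist *sgs[2] = { out_sgl, in_sgl };
	int err;

	sg_init_table(out_sgl, 2);
	sg_set_buf(&out_sgl[0], cmd_hdr, hdr_len);	/* guest-to-host */
	sg_set_buf(&out_sgl[1], cmd_body, body_len);	/* guest-to-host */

	sg_init_table(in_sgl, 1);
	sg_set_buf(&in_sgl[0], resp, resp_cap);		/* host-to-guest */

	/* one readable sg list + one writable sg list, each with its own segments */
	err = virtqueue_add_sgs(vq, sgs, 1, 1, resp, GFP_KERNEL);
	if (err)
		return err;

	virtqueue_kick(vq);
	return 0;
}
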


>>>> +/*
>>>> + * Timeout duration when waiting on the hypervisor to complete its end of the
>>>> + * TPM operation. This timeout is relatively high because certain TPM operations
>>>> + * can take dozens of seconds.
>>>> + */
>>>> +#define TPM_VIRTIO_TIMEOUT (120 * HZ)
>>>
>>> Should we read this from device? E.g. a busy hypervisor might
>>> take longer to respond ...
>>
>> That's true. Do you have an example of an existing virtio driver that reads
>> timeout from the hypervisor?
>>
>> To understand your perspective, do you see one of the following as being the
>> bigger advantage? -- either that we can use a timeout tighter than 120s for
>> hypervisors that believe they can respond quickly to any command, or that we can
>> relax the timeout beyond 120s for an especially overburdened hypervisor. The
>> first is nice because the caller in the guest will receive their error code
>> sooner in certain failure modes, but maybe that would be better addressed by
>> some other way of poking the host to find out whether it is still alive and
>> working. For the second, 120s is pretty generous and in our use case we would
>> just want to give up and retry much later at that point; I don't know how much
>> further it would be reasonable to grow a timeout.
> 
> I think the whole idea around timeout handling needs a bit more
> thought. What kind of reasons for the timeout do you envision
> that require the extra kicks?

My hesitation about responding to timeouts more aggressively than by
just kicking again was that we don't want to end up in a situation
where the guest driver performs some kind of reset while the
hypervisor is still reading or writing one of the driver's buffers.
If a timeout triggers a full remove and probe, including kfree on the
vtpm_device that contains the buffer, that seems likely to lead to
memory corruption in the guest.

As you commented elsewhere, it sounds like NEEDS_RESET is the way to
go. If the guest driver times out and observes that the NEEDS_RESET
bit is set, then we can assume the hypervisor is no longer writing to
our memory, so a reset is safe.

Do you have guidance for how to safely handle a timeout in which
NEEDS_RESET has not been set?
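
For concreteness, the timeout path I have in mind looks roughly like this
(sketch only, not the actual driver code; vtpm_sketch_wait and host_done
are placeholders):

#include <linux/completion.h>
#include <linux/virtio.h>
#include <linux/virtio_config.h>

/*
 * Sketch: only treat the device as safe to reset on timeout if it has
 * declared itself broken via NEEDS_RESET, since otherwise it may still
 * be writing into the driver's buffers.
 */
static int vtpm_sketch_wait(struct virtio_device *vdev,
			    struct completion *host_done)
{
	if (wait_for_completion_timeout(host_done, TPM_VIRTIO_TIMEOUT))
		return 0;	/* host completed in time */

	if (vdev->config->get_status(vdev) & VIRTIO_CONFIG_S_NEEDS_RESET) {
		/*
		 * Device says it is no longer touching guest memory, so a
		 * reset / re-probe is safe from here.
		 */
		dev_err(&vdev->dev, "host needs reset\n");
		return -EIO;
	}

	/*
	 * Timed out but NEEDS_RESET is not set: the device may still own
	 * the buffers, so freeing them is not safe. This is the case I am
	 * asking about.
	 */
	return -ETIMEDOUT;
}
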


