Re: [PATCH 1/4] tpm_tis: Clean up locality release

On Thu, 2020-10-01 at 05:01 +0300, Jarkko Sakkinen wrote:
> On Wed, Sep 30, 2020 at 04:03:25PM -0700, James Bottomley wrote:
> > On Wed, 2020-09-30 at 14:19 -0700, Jerry Snitselaar wrote:
> > > James Bottomley @ 2020-09-29 15:32 MST:
> > > 
> > > > The current release locality code seems to be based on the
> > > > misunderstanding that the TPM interrupts when a locality is
> > > > released: it doesn't, only when the locality is acquired.
> > > > 
> > > > Furthermore, there seems to be no point in waiting for the
> > > > locality to be released.  All it does is penalize the last TPM
> > > > user.  However, if there's no next TPM user, this is a
> > > > pointless wait and if there is a next TPM user, they'll pay the
> > > > penalty waiting for the new locality (or possibly not if it's
> > > > the same as the old locality).
> > > > 
> > > > Fix the code by making release_locality a simple write to
> > > > release the locality, with no waiting for completion.
> > [...]
> > > My recollection is that this was added because there were some
> > > chips that took so long to release locality that a subsequent
> > > request_locality call was seeing the locality as already active,
> > > moving on, and then the locality was getting released out from
> > > under the user.
> > 
> > Well, I could simply dump the interrupt code, which can never work,
> > and we could always poll.
> 
> Side-topic: What is the benefit of using interrupts in a TPM driver
> anyway? I have never had the interest to dive into this with tpm_crb
> because I don't have the answer.

Polling for events that don't happen immediately is a huge waste of
time.  That's why interrupts were invented in the first place.  If you
poll too fast, you consume wakeups, which are really expensive in terms
of idle time; if you poll too slowly, you wait too long and your
throughput really tanks.  For stuff like disk and network transfers,
interrupts are basically essential.  For less high-volume stuff, like
the TPM, we can get away with polling, but it's hugely suboptimal if
you have a large number of events to get through ... like updating the
IMA log.
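To put rough numbers on that trade-off (my own back-of-envelope model,
nothing from the driver itself): polling at a fixed interval costs one
wakeup per interval whether or not anything happened, and can add up to
a full interval of latency per event.

```c
#include <assert.h>

/* Illustrative polling cost model.  Polling every poll_us
 * microseconds over a window of duration_us costs one wakeup per
 * interval, regardless of activity. */
static unsigned long poll_wakeups(unsigned long duration_us,
				  unsigned long poll_us)
{
	return duration_us / poll_us;
}

/* Worst-case latency added to a single event: it can complete just
 * after a poll, so nearly a whole interval passes before we notice. */
static unsigned long poll_added_latency_us(unsigned long poll_us)
{
	return poll_us;
}
```

Polling a one-second window every 100 us burns 10,000 wakeups; backing
off to a 10 ms interval cuts that to 100 wakeups but can add up to
10 ms per event, which is what makes a long run of back-to-back TPM
operations so slow under polling.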

> *Perhaps* in the smallest battery-powered form factors you could get
> some gain in run-time power saving, but usually in such situations
> you use something similar to a TEE to do a measured boot.

It's not about power saving, it's about doing stuff at the right time.

James
