On Mon, Jul 11, 2022 at 11:03:05PM +0200, Lino Sanfilippo wrote:
>
> On 11.07.22 04:50, Jarkko Sakkinen wrote:
> > On Mon, Jul 04, 2022 at 07:45:12PM +0200, Lino Sanfilippo wrote:
> >>
> >>
> >> On 01.07.22 01:29, Jarkko Sakkinen wrote:
> >>
> >>>
> >>> I'm kind of thinking: should tpm_tis_data have a lock for its
> >>> contents?
> >>
> >> Most of the tpm_tis_data structure elements are set once during init
> >> and then never changed but only read, so there is no need for locking
> >> for these. The exceptions I see are
> >>
> >> - flags
> >> - locality_count
> >> - locality
> >
> > I'd still go for a single data struct lock, since this lock would
> > be taken in every transmit flow.
>
> Well, in both cases, transmit and receive, we end up in
> wait_for_tpm_stat(). Whatever lock we hold at this time cannot be taken
> in the interrupt handler, since this would deadlock (wait_for_tpm_stat()
> waits for the interrupt handler to complete but holds the lock that the
> interrupt handler needs to proceed).
>
> So in the interrupt handler we need something that is not held during
> the whole transmit/receive flow.
>
> This is the reason why the locality_count_mutex only protects the one
> thing we have to take care of in the interrupt handler, namely the
> locality counter.
>
> > It makes the whole thing easier to maintain over time, and does not
> > really affect scalability.
> >
> > This brings me to another question: what does this lock protect
> > against, given that tpm_try_get_ops() already takes tpm_mutex?
> > It's not clear, and that should be somehow reasoned in the commit
> > message.
>
> See above: we cannot take the tpm mutex in the interrupt handler for
> the same reason.

You should then squash this with the following patch. Also, I'm not
sure why you don't use kref for this.

> > Anyway, *if* a lock is needed, the granularity should be the whole
> > struct.
> >
> > BR, Jarkko
>
> Regards,
> Lino

BR, Jarkko
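
For reference, a minimal sketch of the fine-grained locking split Lino
describes above. This assumes the interrupt handler runs as a threaded
IRQ (a sleeping lock such as a mutex must never be taken in hard IRQ
context); the struct and helper names here are illustrative only, not
the actual tpm_tis_core code:

#include <linux/mutex.h>
#include <linux/types.h>

struct tis_locality_sketch {
        struct mutex locality_count_mutex;  /* guards locality_count only */
        unsigned int locality_count;
};

/*
 * Safe to call from the threaded interrupt handler: the mutex is
 * never held across wait_for_tpm_stat(), so there is no deadlock
 * with a sleeping transmit/receive path.
 */
static void tis_locality_get(struct tis_locality_sketch *d)
{
        mutex_lock(&d->locality_count_mutex);
        d->locality_count++;
        mutex_unlock(&d->locality_count_mutex);
}

/*
 * Returns true when the last user dropped the locality, i.e. the
 * caller may now actually relinquish it.
 */
static bool tis_locality_put(struct tis_locality_sketch *d)
{
        bool released;

        mutex_lock(&d->locality_count_mutex);
        released = (--d->locality_count == 0);
        mutex_unlock(&d->locality_count_mutex);

        return released;
}

The kref-based variant Jarkko alludes to would replace the open-coded
counter with kref_get()/kref_put(), letting the release callback passed
to kref_put() relinquish the locality once the count drops to zero.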