On Mon, Jul 11, 2022 at 3:37 PM Jason Andryuk <jandryuk@xxxxxxxxx> wrote:
>
> On Sun, Jul 10, 2022 at 10:58 PM Jarkko Sakkinen <jarkko@xxxxxxxxxx> wrote:
> >
> > Can you test against
> >
> > https://lore.kernel.org/linux-integrity/20220629232653.1306735-1-LinoSanfilippo@xxxxxx/T/#t
>
> I applied on top of 5.15.53, and the probe on boot still fails.
> Manually probing works intermittently.

On top of Lino Sanfilippo's patch queue, I added an additional
tpm_tis_request_locality and tpm_tis_release_locality pair to hold the
locality open via refcount in tpm_tis_chip_init.  Similar to my patch
in this thread, it acquires the locality before the TPM_INT_ENABLE
write and holds it until after the TPM_RID read.  That fixes the probe
issue on the box where I tested.

While tpm_tis_core_init is definitely a problem, I wonder if there are
other code paths that could trigger this acquire -> release -> acquire
issue.  In that light, restoring a wait by reverting commit
e42acf104d6e ("tpm_tis: Clean up locality release") seems safer, since
it would cover any such path.

I just tested reverting that commit, and it still fails to probe on
boot and intermittently.  I'm surprised, since I expected it to solve
the issue, but my original debugging showed TPM_ACCESS_ACTIVE_LOCALITY
cleared (and never re-set), so the driver isn't actually waiting.

All this locality acquiring and releasing seems rather redundant.  If
locality 0 is all that is ever used, why not just hold it open?  I
guess Trench Boot / Secure Launch wants to use locality 2
(https://lore.kernel.org/lkml/1645070085-14255-13-git-send-email-ross.philipson@xxxxxxxxxx/),
but in either case, doesn't the system just stay in its assigned
locality?  I haven't read the spec, so maybe that is disallowed.
There is something nice about cleaning up and releasing the locality
when not in use, but it's also causing a problem here.

Regards,
Jason