On Fri, Jun 09, 2023 at 05:42:20PM +0200, Lino Sanfilippo wrote:
> Hi,
> 
> On 09.06.23 11:07, Greg KH wrote:
> > 
> > On Wed, May 24, 2023 at 04:07:41AM +0300, Jarkko Sakkinen wrote:
> >> On Mon, 2023-05-15 at 17:37 +0200, Felix Riemann wrote:
> >>> Hi!
> >>> 
> >>>> [ Upstream commit 15d7aa4e46eba87242a320f39773aa16faddadee ]
> >>>> 
> >>>> In tpm_tis_probe_single_irq() the interrupt registers TPM_INT_VECTOR,
> >>>> TPM_INT_STATUS and TPM_INT_ENABLE are modified to set up the
> >>>> interrupts. Currently these modifications are done without holding a
> >>>> locality, thus they have no effect. Fix this by claiming the
> >>>> (default) locality before the registers are written.
> >>>> 
> >>>> Since tpm_tis_gen_interrupt() is now called with the locality
> >>>> already claimed, remove the locality request and release from this
> >>>> function.
> >>> 
> >>> On systems with an SPI-connected TPM and the interrupt still
> >>> configured (despite it not working before) this may introduce a
> >>> kernel crash. The issue is that it will now trigger an SPI transfer
> >>> (which will wait) from the IRQ handler:
> >>> 
> >>> BUG: scheduling while atomic: systemd-journal/272/0x00010001
> >>> Modules linked in: spi_fsl_lpspi
> >>> CPU: 0 PID: 272 Comm: systemd-journal Not tainted 5.15.111-06679-g56b9923f2840 #50
> >>> Call trace:
> >>>  dump_backtrace+0x0/0x1e0
> >>>  show_stack+0x18/0x40
> >>>  dump_stack_lvl+0x68/0x84
> >>>  dump_stack+0x18/0x34
> >>>  __schedule_bug+0x54/0x70
> >>>  __schedule+0x664/0x760
> >>>  schedule+0x88/0x100
> >>>  schedule_timeout+0x80/0xf0
> >>>  wait_for_completion_timeout+0x80/0x10c
> >>>  fsl_lpspi_transfer_one+0x25c/0x4ac [spi_fsl_lpspi]
> >>>  spi_transfer_one_message+0x22c/0x440
> >>>  __spi_pump_messages+0x330/0x5b4
> >>>  __spi_sync+0x230/0x264
> >>>  spi_sync_locked+0x10/0x20
> >>>  tpm_tis_spi_transfer+0x1ec/0x250
> >>>  tpm_tis_spi_read_bytes+0x14/0x20
> >>>  tpm_tis_spi_read32+0x38/0x70
> >>>  tis_int_handler+0x48/0x15c
> >>> *snip*
> >>> 
> >>> The immediate error is fixable by also picking 0c7e66e5fd ("tpm,
> >>> tpm_tis: Request threaded interrupt handler") from the same
> >>> patchset[1]. However, as the driver's IRQ test logic is still faulty,
> >>> it will fail the check and fall back to the polling behaviour without
> >>> actually disabling the IRQ in hardware and software again. For this,
> >>> at least e644b2f498 ("tpm, tpm_tis: Enable interrupt test") and
> >>> 0e069265bc ("tpm, tpm_tis: Claim locality in interrupt handler") are
> >>> necessary.
> >>> 
> >>> At this point 9 of the set's 14 patches are applied and I am not
> >>> sure whether it's better to pick the remaining five patches as well
> >>> or just revert the initial six patches, especially considering there
> >>> were initially no plans to submit these patches to stable[2] and the
> >>> IRQ feature was (at least on SPI) not working before.
> >> 
> >> I think the right thing to do would be to revert the 6 initial
> >> patches.
> > 
> > Ok, I think this isn't needed anymore with the latest 5.15.116
> > release, right?  If not, please let me know.
> 
> With 0c7e66e5fd ("tpm, tpm_tis: Request threaded interrupt handler")
> applied, the above bug is fixed in 5.15.y. There is, however, still the
> issue that the interrupts may not be acknowledged properly in the
> interrupt handler, since the register concerned is written without the
> required locality held (Felix mentions this above).
> This can be fixed with 0e069265bce5 ("tpm, tpm_tis: Claim locality in
> interrupt handler").
> 
> So instead of reverting the initial patches, I suggest we
> 
> 1. also apply 0e069265bce5 ("tpm, tpm_tis: Claim locality in interrupt
> handler")

I've now done this, thanks.

greg k-h