On Wednesday 16 August 2017 17:15:55, Ken Goldman wrote:
> On 8/15/2017 4:13 PM, Haris Okanovic wrote:
> > ioread8() operations to TPM MMIO addresses can stall the CPU when
> > they immediately follow a sequence of iowrite*()s to the same region.
> >
> > For example, cyclictest measures ~400us latency spikes when a non-RT
> > usermode application communicates with an SPI-based TPM chip (Intel Atom
> > E3940 system, PREEMPT_RT_FULL kernel). The spikes are caused by a
> > stalling ioread8() operation following a sequence of 30+ iowrite8()s to
> > the same address. I believe this happens because the write sequence is
> > buffered (in the CPU or somewhere along the bus) and gets flushed on the
> > first LOAD instruction (ioread*()) that follows.
> >
> > The enclosed change appears to fix this issue: read the TPM chip's
> > access register (status code) after every iowrite*() operation to
> > amortize the cost of flushing data to the chip across multiple
> > instructions.
>
> I worry a bit about "appears to fix". It seems odd that the TPM device
> driver would be the first code to uncover this. Can anyone confirm that
> the chipset does indeed have this bug?

No, there was already a similar problem in e1000e where a PCIe read stalled
the CPU, so no interrupts were serviced. See
https://www.spinics.net/lists/linux-rt-users/msg14077.html
AFAIK there was no outcome, though.

> I'd also like an indication of the performance penalty. We're doing a
> lot of work to improve the performance and I worry that "do a read after
> every write" will have a performance impact.

Realtime will always affect performance, but IMHO the latter is much more
important.

Best regards,
Alexander
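
[Editor's note: for reference, below is a minimal sketch of the read-after-write
approach Haris describes, where each iowrite*() to the TPM MMIO region is
followed by an ioread8() of the chip's access register so posted writes are
flushed one at a time instead of stalling the first later read. The
TPM_ACCESS(0) offset and the tpm_tis_iowrite*()/tpm_tis_flush() helper names
are illustrative assumptions, not necessarily the patch as posted.]

/*
 * Sketch only: read back the TPM access register after each MMIO write
 * so buffered writes are flushed immediately rather than piling up and
 * stalling the first ioread*() that follows.
 */
#include <linux/io.h>
#include <linux/types.h>

/* Assumed offset of the access register for locality l (illustrative). */
#define TPM_ACCESS(l)	(0x0000 | ((l) << 12))

#ifdef CONFIG_PREEMPT_RT_FULL
/* Flush previous write(s) to the chip so a later ioread*() won't stall. */
static inline void tpm_tis_flush(void __iomem *iobase)
{
	ioread8(iobase + TPM_ACCESS(0));
}
#else
#define tpm_tis_flush(iobase) do { } while (0)
#endif

static inline void tpm_tis_iowrite8(u8 b, void __iomem *iobase, u32 addr)
{
	iowrite8(b, iobase + addr);
	tpm_tis_flush(iobase);
}

static inline void tpm_tis_iowrite32(u32 b, void __iomem *iobase, u32 addr)
{
	iowrite32(b, iobase + addr);
	tpm_tis_flush(iobase);
}

The write paths in the driver would then call tpm_tis_iowrite8()/
tpm_tis_iowrite32() instead of raw iowrite8()/iowrite32(), which is where the
"read after every write" cost Ken asks about would be paid.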