On Thu, 2024-09-12 at 10:13 -0400, James Bottomley wrote:
> On Thu, 2024-09-12 at 15:36 +0200, Roberto Sassu wrote:
> > On Thu, 2024-09-12 at 09:26 -0400, James Bottomley wrote:
> > > On Thu, 2024-09-12 at 16:16 +0300, Jarkko Sakkinen wrote:
> > > > On Wed Sep 11, 2024 at 3:21 PM EEST, James Bottomley wrote:
> > > > > On Wed, 2024-09-11 at 10:53 +0200, Roberto Sassu wrote:
> > > [...]
> > > > > > I made a few measurements. I have a Fedora 38 VM with TPM
> > > > > > passthrough.
> > > > > >
> > > > > > Kernels: 6.11-rc2+ (guest), 6.5.0-45-generic (host)
> > > > > >
> > > > > > QEMU:
> > > > > >
> > > > > > rc  qemu-kvm         1:4.2-3ubuntu6.27
> > > > > > ii  qemu-system-x86  1:6.2+dfsg-2ubuntu6.22
> > > > > >
> > > > > > TPM2_PT_MANUFACTURER:
> > > > > >   raw: 0x49465800
> > > > > >   value: "IFX"
> > > > > > TPM2_PT_VENDOR_STRING_1:
> > > > > >   raw: 0x534C4239
> > > > > >   value: "SLB9"
> > > > > > TPM2_PT_VENDOR_STRING_2:
> > > > > >   raw: 0x36373000
> > > > > >   value: "670"
> > > > > >
> > > > > > No HMAC:
> > > > > >
> > > > > > # tracer: function_graph
> > > > > > #
> > > > > > # CPU  DURATION                  FUNCTION CALLS
> > > > > > # |     |   |                     |   |   |   |
> > > > > >  0)               |  tpm2_pcr_extend() {
> > > > > >  0)   1.112 us   |    tpm_buf_append_hmac_session();
> > > > > >  0) # 6360.029 us |    tpm_transmit_cmd();
> > > > > >  0) # 6415.012 us |  }
> > > > > >
> > > > > > HMAC:
> > > > > >
> > > > > > # tracer: function_graph
> > > > > > #
> > > > > > # CPU  DURATION                  FUNCTION CALLS
> > > > > > # |     |   |                     |   |   |   |
> > > > > >  1)               |  tpm2_pcr_extend() {
> > > > > >  1)               |    tpm2_start_auth_session() {
> > > > > >  1) * 36976.99 us |      tpm_transmit_cmd();
> > > > > >  1) * 84746.51 us |      tpm_transmit_cmd();
> > > > > >  1) # 3195.083 us |      tpm_transmit_cmd();
> > > > > >  1) @ 126795.1 us |    }
> > > > > >  1)   2.254 us   |    tpm_buf_append_hmac_session();
> > > > > >  1)   3.546 us   |    tpm_buf_fill_hmac_session();
> > > > > >  1) * 24356.46 us |    tpm_transmit_cmd();
> > > > > >  1)   3.496 us   |    tpm_buf_check_hmac_response();
> > > > > >  1) @ 151171.0 us |  }
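(Side note, in case anyone wants to reproduce graphs like the above: a
minimal ftrace sketch, assuming tracefs is mounted at
/sys/kernel/tracing and that these symbols are traceable on the running
kernel; the function list is just taken from the traces above:

  cd /sys/kernel/tracing
  echo 0 > tracing_on
  echo tpm2_pcr_extend > set_graph_function
  echo 'tpm2_pcr_extend tpm2_start_auth_session tpm_transmit_cmd tpm_buf_*' > set_ftrace_filter
  echo function_graph > current_tracer
  echo 1 > tracing_on
  # trigger a PCR extend, e.g. by accessing a file not yet measured by IMA
  cat trace

set_graph_function restricts the graph to tpm2_pcr_extend() and its
callees, while set_ftrace_filter keeps only the TPM functions of
interest in the output.)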
> > > > > Well, unfortunately, that tells us that it's the TPM itself
> > > > > that's taking the time processing the security overhead. The
> > > > > ordering of the commands in tpm2_start_auth_session() shows:
> > > > >
> > > > >  37ms for context restore of the null key
> > > > >  85ms for start session with encrypted salt
> > > > >   3ms to flush the null key
> > > > > -----
> > > > > 125ms
> > > > >
> > > > > If we context save the session, we'd likely only bear a single
> > > > > 37ms cost to restore it (replacing the total 125ms). However,
> > > > > there's nothing we can do about the extend execution going
> > > > > from 6ms to 24ms, so I could halve your current boot time with
> > > > > security enabled (it's currently 149ms, it would go to 61ms,
> > > > > but it's still 10x slower than the unsecured extend at 6ms).
> > > > >
> > > > > James
> > > >
> > > > I'll hold for better benchmarks.
> > >
> > > Well, yes, I'd like to see this for a variety of TPMs.
> > >
> > > This one clearly shows it's the real-time wait for the TPM (since
> > > it dwarfs the CPU time, there's not much optimization we can do
> > > on the kernel end). The one thing that's missing in all of this
> > > is: what was the TPM? But even if it's an outlier that's really
> > > bad at crypto, what should we do? We could have a blacklist that
> > > turns off the extend HMAC (or a whitelist that turns it on), but
> > > we can't simply say "too bad, you need a better TPM".
> >
> > Oops, sorry, I pasted the TPM properties above but it was not that
> > clear: it is an Infineon Optiga SLB9670 (interpreting the
> > properties).
>
> OK, that's reasonably modern and common:
>
> https://www.infineon.com/cms/en/product/security-smart-card-solutions/optiga-embedded-security-solutions/optiga-tpm/
>
> I assume it's one of the Q20 (otherwise it would be a TPM 1.2), but
> what firmware version (as in, could it be upgraded and the tests
> re-run to see if that makes a difference)?
>
> I also need the IMA community to start thinking about what they're
> willing to accept in terms of performance for the added security
> HMAC brings to TPM extends.

Just out of curiosity, I compared the boot time of Fedora 38 (minimal
installation) without and with HMAC enabled, and without and with the
Integrity Digest Cache [1], which I originally designed exactly for
this purpose (one measurement per package):

Without HMAC:

Without Integrity Digest Cache:

[root@fedora ~]# systemd-analyze
Startup finished in 2.486s (kernel) + 3.594s (initrd) + 11.613s (userspace) = 17.694s
multi-user.target reached after 11.559s in userspace.
[root@fedora ~]# cat /sys/kernel/security/ima/ascii_runtime_measurements | wc -l
444

With Integrity Digest Cache:

[root@fedora ~]# systemd-analyze
Startup finished in 2.381s (kernel) + 3.469s (initrd) + 11.794s (userspace) = 17.644s
multi-user.target reached after 11.750s in userspace.
[root@fedora ~]# cat /sys/kernel/security/ima/ascii_runtime_measurements | wc -l
218

With HMAC:

Without Integrity Digest Cache:

[root@fedora ~]# systemd-analyze
Startup finished in 2.911s (kernel) + 3.453s (initrd) + 1min 5.754s (userspace) = 1min 12.119s
multi-user.target reached after 1min 5.707s in userspace.
[root@fedora ~]# cat /sys/kernel/security/ima/ascii_runtime_measurements | wc -l
444

With Integrity Digest Cache:

[root@fedora ~]# systemd-analyze
Startup finished in 2.990s (kernel) + 3.462s (initrd) + 37.038s (userspace) = 43.491s
multi-user.target reached after 36.997s in userspace.
[root@fedora ~]# cat /sys/kernel/security/ima/ascii_runtime_measurements | wc -l
218

[1]: https://lore.kernel.org/linux-integrity/20240905150543.3766895-1-roberto.sassu@xxxxxxxxxxxxxxx/

Roberto
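P.S.: As a rough sanity check (my own back-of-the-envelope arithmetic,
not a separate measurement), the per-measurement overhead implied by
the systemd-analyze numbers above is consistent with the per-extend
session cost seen in the trace:

$ echo 'scale=3; (65.754 - 11.613) / 444' | bc
.121
$ echo 'scale=3; (37.038 - 11.794) / 218' | bc
.115

i.e. roughly 120ms of extra wall-clock time per IMA measurement with
HMAC enabled, in the same range as the ~151ms traced for a single
extend including the per-command session setup.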