On Wed, Nov 21, 2018 at 02:24:18AM +0000, Jeremy Boone wrote:
> I think it's worth recognizing that TPMs are used in a variety of
> deployments, each with their own unique threat model and attack
> surface.
>
> For example, some users may care about evil maid scenarios. Heck,
> TPM-TOTP (and dare I mention the Qubes Anti-Evil Maid technology)
> utilizes the TPM to attest the boot state to the device owner.
>
> Other users may care about the "lost in the back of a taxi" scenario
> wherein the attacker may have extended physical access to the mobile
> device (a phone or laptop) before returning it to the owner.
>
> In other scenarios, the device user may be a different entity than
> the device owner, and as such, different security considerations
> must be applied. Think of a set top box that you've rented from your
> cable service provider which uses a TPM to remotely attest the
> firmware before being trusted to handle content decryption keys. Or
> a car share program that uses the TPM as a means to store temporary
> keyless-entry tokens -- after all, the TCG Automotive Thin Profile is
> taking off, as are the SAE J3101 requirements which suggest the use
> of TPM in automotive applications.
>
> An interposer, or even a simple
> sniffer attached to test points on the bus, would be able to observe
> any secrets transmitted between the TPM and host.

Not quite, it is not 'any' secrets. We can and absolutely should
competently encrypt various things, like shared secrets for unlocking
keys, private data to be sealed/unsealed, etc. There are robust working
mechanisms for this already in the spec. I think this is very important
and I know I've always coded my TPM implementations to make use of
these features. But PCR extend is not private data; the data is well
known.

> I believe that the Linux kernel has an obligation to build in active
> defences that protect TPM users against serial bus attacks, and
> makes no blind assumptions about the ways in which a TPM may be used
> or deployed in a variety of creative or unexpected ways.
>
> This is especially true in light of the fact that the TCG (and TPM
> chip manufacturers as well) have not plainly documented that,
> despite having expended considerable effort defending against
> invasive silicon attacks (see Chris Tarnovsky's work), a trivial
> interposer can still defeat TPM security. I believe that many do not
> understand this fact, and conflate the idea that measured boot can
> detect "hardware tampering" vs. mere "firmware tampering".

This is basically my concern. HMACing the PCRs does not magically allow
measured boot to detect "hardware tampering", even if all layers from
the BIOS down do this correctly. Pretending otherwise continues to push
the incorrect message that PCRs do anything beyond detecting "firmware
tampering", as you say.

> Regardless, it seems odd to me that we wish to defend against
> one-off attacks involving an electron microscope, but do not wish to
> defend against a simple microcontroller acting as a
> man-in-the-middle on the bus.

I agree with this, but if you want to defend against hardware tampering
then one should defend robustly against all hardware tampering and call
that a TPM capability. A half-implemented hardware tampering defence
only gives a false sense of security.

I think it is possible to do, but it requires some updates to the TPM
specification. A general proposal would go something like...
1) The TPM gains a new NV flag 'Secure Reset Required' and a new
   command 'Secure Reset'

2) Issuing the 'Secure Reset' command requires the TPM and CPU to both
   prove to each other they are authentic, using some crypto protocol
   (lots of options here)

2a) The TPM will store its private secret for #2 either as a loaded
    blob against a NV key or in NV itself. There is some protocol
    during owning that allows the BIOS to initialize this stuff and
    set Secure Reset Required. Maybe we assume the HW is untampered
    during owning.

2b) The CPU will store its private secret for #2 encrypted using
    modern CPU encryption technology like Intel's SGX, or similar.
    The encrypted key will be stored in BIOS flash.

3) Upon boot the CPU will securely decrypt its secret and issue
   'Secure Reset'. The TPM will not function until this command is
   issued. The only other option is a complete wipe.

4) During the 'Secure Reset' crypto the two sides will exchange
   trusted or secret information used to authenticate and encrypt all
   future communications

5) The BIOS will pass the #4 data down to the bootloader and to the
   kernel for use when executing TPM commands, or maybe the BIOS will
   transparently link it into the ACPI CRB executor or something.

6) The TPM will reject any packet that is not authenticated and
   encrypted after Secure Reset

(A rough sketch of what the #2/#4 crypto handshake could look like is
further down in this mail.)

Now we have properly defended against a wide range of HW tampering. You
can't snoop/hijack the TPM bus. You can't reset the TPM. You can't hack
the BIOS flash and take over the BIOS before TPM reset (assuming SGX is
implemented properly). You can't replace the trusted CPU with a hostile
CPU. You can't desolder the TPM. This is an excellent improvement in HW
defense. If the SGX code also sets up DRAM encryption then we are
getting to be really properly secured against HW tampering.

This is sort of what I mean when I say a spec update is needed. The
spec needs to be designed to properly detect and combat HW tampering.
We can't add this new feature robustly with only the functional
elements already in the spec.

[Of course this is kind of a silly thing to do, because if you have
this CPU technology then you may as well just implement the TPM in the
CPU. But this is more of a thought experiment as to what would be
needed to secure a discrete TPM against HW tampering]

> It's true that with sufficient time and motivation, a dedicated and
> well-funded adversary can defeat almost any protection
> mechanism. But our job as defenders is to raise the bar so that
> cheap and inexpensive attacks are no longer feasible. By raising the
> cost of exploitation beyond the adversary's appetite, we eliminate
> entire classes of attack.

My concern is that we haven't actually done this. The reset-line attack
is lower cost and complexity than the interposer-on-the-data-bus
attack, so we must prevent it first and foremost. Then prevent data bus
mangling, then prevent loading modified BIOSes, then prevent hacked
CPUs, and so on until things become properly expensive.

> Choosing to do nothing simply because other attack avenues exist is
> a little too defeatist of an attitude for me. Especially given that
> the TPM specification does support payload encryption and integrity
> protection through the use of Authorization Sessions.

We absolutely should be using encrypted and authenticated sessions when
transporting any secret data.
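Just to illustrate what that looks like in practice, here is a rough
userspace sketch against the tss2 ESAPI (my example, not anything from
the kernel patches; 'sealed_obj' is assumed to be an already loaded
sealed object whose authValue was set with Esys_TR_SetAuth):

#include <tss2/tss2_esys.h>
#include <stdio.h>

int unseal_protected(ESYS_CONTEXT *ectx, ESYS_TR sealed_obj)
{
    /* AES-128-CFB for encrypting sensitive parameters on the bus */
    TPMT_SYM_DEF sym = {
        .algorithm = TPM2_ALG_AES,
        .keyBits.aes = 128,
        .mode.aes = TPM2_ALG_CFB,
    };

    /* Start an HMAC session. Unsalted/unbound to keep the sketch
     * short; a real user should salt or bind it to a known TPM key so
     * an interposer cannot derive the session secret. */
    ESYS_TR session = ESYS_TR_NONE;
    TSS2_RC rc = Esys_StartAuthSession(ectx, ESYS_TR_NONE, ESYS_TR_NONE,
                                       ESYS_TR_NONE, ESYS_TR_NONE,
                                       ESYS_TR_NONE, NULL, TPM2_SE_HMAC,
                                       &sym, TPM2_ALG_SHA256, &session);
    if (rc != TSS2_RC_SUCCESS)
        return -1;

    /* Ask for response encryption so the unsealed data never crosses
     * the bus in the clear. */
    rc = Esys_TRSess_SetAttributes(ectx, session,
                                   TPMA_SESSION_ENCRYPT |
                                   TPMA_SESSION_CONTINUESESSION, 0xff);
    if (rc != TSS2_RC_SUCCESS)
        return -1;

    /* The session both authorizes the object and encrypts the reply */
    TPM2B_SENSITIVE_DATA *secret = NULL;
    rc = Esys_Unseal(ectx, sealed_obj, session, ESYS_TR_NONE,
                     ESYS_TR_NONE, &secret);
    if (rc != TSS2_RC_SUCCESS)
        return -1;

    printf("unsealed %u bytes without exposing them on the bus\n",
           (unsigned)secret->size);
    Esys_Free(secret);
    Esys_FlushContext(ectx, session);
    return 0;
}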
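And since I said "lots of options here" for the #2/#4 crypto, here is
the rough handshake sketch promised above. Both sides are simulated in
one process; the shared secret, the labels and the key derivation are
all made up for illustration - none of this is in the TPM spec today,
it only shows the shape of the exchange:

/* Thought-experiment only: mutual proof over a pre-shared secret,
 * then derivation of a per-boot session key (steps #2 and #4). */
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NONCE_LEN 32
#define KEY_LEN   32

/* One side's proof: HMAC(psk, label || my_nonce || peer_nonce) */
static void prove(const uint8_t *psk, const char *label,
                  const uint8_t *n_mine, const uint8_t *n_peer,
                  uint8_t *mac_out)
{
    uint8_t msg[8 + 2 * NONCE_LEN];
    unsigned int len = 0;

    memcpy(msg, label, 8);
    memcpy(msg + 8, n_mine, NONCE_LEN);
    memcpy(msg + 8 + NONCE_LEN, n_peer, NONCE_LEN);
    HMAC(EVP_sha256(), psk, KEY_LEN, msg, sizeof(msg), mac_out, &len);
}

int main(void)
{
    /* Secret provisioned at owning time: kept in NV on the TPM side,
     * sealed under SGX (or similar) in BIOS flash on the CPU side. */
    uint8_t psk[KEY_LEN];
    uint8_t n_cpu[NONCE_LEN], n_tpm[NONCE_LEN];
    uint8_t mac_cpu[KEY_LEN], check[KEY_LEN], session_key[KEY_LEN];

    RAND_bytes(psk, sizeof(psk));

    /* #2: both sides generate fresh nonces and exchange them. */
    RAND_bytes(n_cpu, sizeof(n_cpu));
    RAND_bytes(n_tpm, sizeof(n_tpm));

    /* CPU proves knowledge of the secret over both nonces... */
    prove(psk, "CPU-AUTH", n_cpu, n_tpm, mac_cpu);

    /* ...and the TPM verifies it (the TPM's proof to the CPU is the
     * mirror image with a different label). */
    prove(psk, "CPU-AUTH", n_cpu, n_tpm, check);
    if (memcmp(check, mac_cpu, KEY_LEN) != 0)
        return 1;        /* refuse Secure Reset, TPM stays disabled */

    /* #4: both ends derive the per-boot key that will authenticate
     * and encrypt every later command on the bus (what #6 enforces). */
    prove(psk, "SESS-KEY", n_cpu, n_tpm, session_key);

    printf("session key derived, bus traffic can now be protected\n");
    return 0;
}

The point is only that after a cheap nonce exchange both ends hold a
per-boot key that can MAC and encrypt everything that later crosses
the bus, which is what #6 needs.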
> So we do have the necessary tools to begin to solve this
> problem. Unfortunately, it is also true that this issue extends
> beyond the kernel. We also need to land similar patches for every
> stage of the boot process that performs a PCR Extend
> operation. Otherwise the chain of trust can be broken before the
> kernel is even started.

Right, but this chain of trust starts at the reset line, not at the
BIOS. There is language in the spec requiring the platform to control
the TPM reset line along with the CPU reset - this is critically
necessary to make PCRs work for 'measured boot'. In my mind an
interposer also means hostile control over the reset line, so reset
protection must be part of any complete defence against an interposer.

The idea James had with the null key to detect reset doesn't mitigate
the case where hostile code is running on the CPU along with an
interposer. We can't make the assumption that only trusted code is
running on the CPU - if we could assume that we wouldn't need PCRs and
measured boot in the first place. ;)

Jason
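P.S. For background, the null key trick works because the TPM
regenerates the null hierarchy seed on every TPM Reset, so a primary
key created in the NULL hierarchy gets a different name after a reset.
A rough userspace sketch of the check with the tss2 ESAPI (my naming,
not James's actual patches) - and again, it only detects a reset, it
says nothing about hostile host code cooperating with an interposer:

#include <tss2/tss2_esys.h>
#include <string.h>

/* Create a primary key in the NULL hierarchy and return its name. */
static TSS2_RC null_primary_name(ESYS_CONTEXT *ectx, TPM2B_NAME **name)
{
    TPM2B_SENSITIVE_CREATE sens = { .size = 0 };
    TPM2B_DATA outside = { .size = 0 };
    TPML_PCR_SELECTION pcrs = { .count = 0 };
    /* Ordinary restricted ECC decryption key template; any fixed
     * template works, only the resulting name matters here. */
    TPM2B_PUBLIC tmpl = {
        .publicArea = {
            .type = TPM2_ALG_ECC,
            .nameAlg = TPM2_ALG_SHA256,
            .objectAttributes = TPMA_OBJECT_FIXEDTPM |
                                TPMA_OBJECT_FIXEDPARENT |
                                TPMA_OBJECT_SENSITIVEDATAORIGIN |
                                TPMA_OBJECT_USERWITHAUTH |
                                TPMA_OBJECT_RESTRICTED |
                                TPMA_OBJECT_DECRYPT,
            .parameters.eccDetail = {
                .symmetric = { .algorithm = TPM2_ALG_AES,
                               .keyBits.aes = 128,
                               .mode.aes = TPM2_ALG_CFB },
                .scheme.scheme = TPM2_ALG_NULL,
                .curveID = TPM2_ECC_NIST_P256,
                .kdf.scheme = TPM2_ALG_NULL,
            },
        },
    };
    ESYS_TR handle = ESYS_TR_NONE;
    TPM2B_PUBLIC *pub = NULL;
    TPM2B_CREATION_DATA *cdata = NULL;
    TPM2B_DIGEST *chash = NULL;
    TPMT_TK_CREATION *ticket = NULL;
    TSS2_RC rc;

    rc = Esys_CreatePrimary(ectx, ESYS_TR_RH_NULL, ESYS_TR_PASSWORD,
                            ESYS_TR_NONE, ESYS_TR_NONE, &sens, &tmpl,
                            &outside, &pcrs, &handle, &pub, &cdata,
                            &chash, &ticket);
    if (rc != TSS2_RC_SUCCESS)
        return rc;

    rc = Esys_TR_GetName(ectx, handle, name);

    Esys_Free(pub);
    Esys_Free(cdata);
    Esys_Free(chash);
    Esys_Free(ticket);
    Esys_FlushContext(ectx, handle);
    return rc;
}

/* Returns 1 if the TPM was reset since 'saved' was recorded, 0 if
 * not, -1 on error. */
int tpm_was_reset(ESYS_CONTEXT *ectx, const TPM2B_NAME *saved)
{
    TPM2B_NAME *now = NULL;
    int reset;

    if (null_primary_name(ectx, &now) != TSS2_RC_SUCCESS)
        return -1;
    reset = (now->size != saved->size) ||
            memcmp(now->name, saved->name, now->size) != 0;
    Esys_Free(now);
    return reset;
}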