Hi Alyssa, apologies for the late response.
On 11/12/23 13:07, Alyssa Ross wrote:
+Load-time Integrity
+-------------------
+
+It is critical to understand what load-time integrity establishes about a
+system and what is assumed, i.e. what is being trusted. Load-time integrity is
+when a trusted entity, i.e. an entity with an assumed integrity, takes an
+action to assess an entity being loaded into memory before it is used. A
+variety of mechanisms may be used to conduct the assessment, each with
+different properties. One notable property is whether the mechanism produces
+evidence of the assessment. Cryptographic signature checking and hashing are
+the most common assessment operations used.
+
+A signature checking assessment requires a representation of the accepted
+authorities and uses that representation to assess whether the entity has been
+signed by an accepted authority. The benefit of this process is that the
+assessment itself includes an adjudication of the result. The drawbacks are
+that 1) the adjudication is susceptible to tampering by the Trusted Computing
+Base (TCB), 2) there is no evidence to assert that an untampered adjudication
+was completed, and 3) the system must be an active participant in the key
+management infrastructure.
+
+A cryptographic hashing assessment does not adjudicate the result but instead
+generates evidence of the assessment to be adjudicated independently. The
+benefit of this approach is that the assessment can be simple enough to be
+implemented in an immutable mechanism, e.g. in hardware. Additionally, it is
+possible for the adjudication to be conducted where it cannot be tampered with
+by the TCB. The drawback is that a compromised environment will be allowed to
+execute until an adjudication can be completed.
+
+Ultimately, load-time integrity provides confidence that the correct entity was
+loaded and, in the absence of a run-time integrity mechanism, assumes, i.e.
+trusts, that the entity will never become corrupted.
I'm somewhat familiar with this area, but not massively (so probably the
sort of person this documentation is aimed at!), and this was the only
section of the documentation I had trouble understanding.
The thing that confused me was that the first time I read this, I was
thinking that a hashing assessment would be comparing the generated hash
to a baked-in known good hash, similar to how e.g. a verity root hash
might be specified on the kernel command line, baked in to the OS image.
This made me wonder why it wasn't considered to be adjudicated during
assessment. Upon reading it a second time, I now understand that what
it's actually talking about is generating a hash, but not comparing it
automatically against anything, and making it available for external
adjudication somehow.
I don't know if the approach I first thought of is used in early boot
at all, but it might be worth contrasting the cryptographic hashing
assessment described here with it, because I imagine that I'm not going
to be the only reader who's more used to thinking about integrity
slightly later in the boot process where adjudicating based on a static
hash is common, and whose mind is going to go to that when they read
about a "cryptographic hashing assessment".
The scenario that first came to mind for you, specifically the verity
root hash, is in fact a form of signature checking assessment. A
signature is nothing more than a hash with provenance, where that
provenance is enforced by the measuring entity. For a PKI signature,
e.g. UEFI Secure Boot, the provenance is confirming that the encrypted
portion of the signature can be decrypted using the CA public key. In
the case of dm-verity, the provenance of the hash is its source, i.e.
that it came from the command line. If you consider the consequences
presented for a signature checking assessment, you will see the same
issues with dm-verity: 1) any logic in the kernel, intended or
injected, could tamper with the validation of the hash, 2) there is no
evidence of each block hashed into the final hash that is assessed,
and 3) the system is responsible for ensuring that only the correct
hash has been provided on the command line.
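To make the single-entity case concrete, here is a minimal Python
sketch, purely illustrative (not dm-verity or kernel code, and the
names are hypothetical), of measuring and adjudicating in the same
place against a known-good hash the loading entity was handed, e.g. on
the command line:

import hashlib
import sys

def load_and_verify(path, expected_hex):
    """Single entity: measure the data and adjudicate it in one step.

    The known-good hash (expected_hex) is supplied to the loading
    entity itself, e.g. parsed from the command line, so both the
    measurement and the adjudication live inside the same TCB.
    """
    data = open(path, "rb").read()
    measured = hashlib.sha256(data).hexdigest()   # measurement
    if measured != expected_hex:                  # adjudication
        raise RuntimeError("hash mismatch, refusing to load")
    return data

if __name__ == "__main__":
    # Hypothetical usage: the "root hash" arrives as an argument, and
    # nothing outside this process can later prove the check happened.
    load_and_verify(sys.argv[1], sys.argv[2])

All three drawbacks fall out of the sketch: the comparison can be
patched out from inside the same process, no record of the check
survives, and the caller is responsible for supplying the correct hash.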
Another way to consider the above: there are always two actions in
assessing integrity, measurement and assessment. When both actions are
delegated to a single entity, along with a mechanism to provide the
known-good value, this is a signature checking assessment. When these
two actions are delegated to two separate entities, this is a
cryptographic hashing assessment. In TCG parlance, the former is a Root
of Trust for Verification (RTV) chain and the latter is a Root of Trust
for Measurement (RTM) chain.
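As a contrast to the sketch above, here is an equally hypothetical
Python sketch of the split-role model (not TPM or TrenchBoot code): one
entity only measures and records, extending an accumulator the way a
TPM PCR is extended, while a completely separate entity replays the
evidence later to adjudicate.

import hashlib

ZERO = b"\x00" * 32

def extend(accumulator, data):
    # PCR-style extend: new = H(old || H(data))
    return hashlib.sha256(accumulator + hashlib.sha256(data).digest()).digest()

class Measurer:
    """Root of Trust for Measurement role: records, never adjudicates."""
    def __init__(self):
        self.accumulator = ZERO
        self.log = []                        # evidence of each event

    def measure(self, name, data):
        self.log.append((name, hashlib.sha256(data).digest()))
        self.accumulator = extend(self.accumulator, data)

def adjudicate(log, reported, expected):
    """Separate verifier: replay the evidence and compare to policy."""
    acc = ZERO
    for _name, digest in log:
        acc = hashlib.sha256(acc + digest).digest()
    return acc == reported == expected

# Hypothetical flow: the measurer runs in the booting system, while the
# verifier holds its own known-good value and adjudicates elsewhere,
# e.g. as part of remote attestation.
m = Measurer()
m.measure("bootloader", b"bootloader image bytes")
m.measure("kernel", b"kernel image bytes")
golden = extend(extend(ZERO, b"bootloader image bytes"),
                b"kernel image bytes")
assert adjudicate(m.log, m.accumulator, golden)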
And to clarify the example Ross provided using the TPM seal method:
this is a cryptographic hashing assessment, as the two functions are
performed by separate entities. The software makes the measurements
while the TPM makes the assessment. In theory, a solution employing a
TPM seal will have established what the expected sequence of
measurements should be, and ensured the TPM seal was the final and
correct measurement.
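A toy version of that seal flow might look like the following, again
with hypothetical names and a plain Python object standing in for the
TPM: the software only measures, while the "TPM" withholds the sealed
secret unless the accumulated state matches the value it was sealed
against.

import hashlib

ZERO = b"\x00" * 32

def extend(acc, data):
    # PCR-style extend: acc' = H(acc || H(data))
    return hashlib.sha256(acc + hashlib.sha256(data).digest()).digest()

class ToySealedSecret:
    """Stands in for the TPM: binds a secret to an expected state."""
    def __init__(self, secret, expected_state):
        self._secret = secret
        self._expected = expected_state

    def unseal(self, current_state):
        # The "TPM" adjudicates; the software only supplied measurements.
        if current_state != self._expected:
            raise PermissionError("measurements do not match sealed state")
        return self._secret

# Seal against the expected sequence of measurements...
good = extend(extend(ZERO, b"bootloader"), b"kernel")
sealed = ToySealedSecret(b"disk-encryption-key", good)

# ...and unsealing only succeeds if boot reproduced that sequence.
state = extend(extend(ZERO, b"bootloader"), b"kernel")
assert sealed.unseal(state) == b"disk-encryption-key"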
I don't know if you will find it too rudimentary, but I feel I did a
fairly decent job covering this in the first ever TrenchBoot talk[1].
[1] https://www.platformsecuritysummit.com/2018/speaker/smith/
The rest of the documentation was easy to understand and very helpful
for understanding system launch integrity. Thanks!
I am very glad to hear you found it helpful. This is a very complex
topic, and trying to break it all down for an audience that may have
zero background and an interest in helping is no small undertaking.
V/r,
Daniel P. Smith