dwh invited me to contribute to this discussion and I hope my comments are helpful. He referenced my work on the DIF KERI WG standard. This emerging standard has been adopted by the Global Legal Entity Identifier Foundation (GLEIF) as the basis for its new verifiable LEIs. These are required by many regulatory bodies for participating legal entities.

https://keri.one
https://identity.foundation/working-groups/keri.html
https://www.gleif.org/en/lei-solutions/gleifs-digital-strategy-for-the-lei/introducing-the-verifiable-lei-vlei

This is part of a much larger effort to fix the security of internet distributed systems in general. The approach is based on the principles of what I like to call zero-trust computing (ZTC), which is a generalization of the more commonly known zero-trust networking (ZTN). Zero trust means never trust, always verify, where verify is meant in the cryptographic sense of verifying cryptographic operations such as signatures or digests. ZTN is becoming increasingly popular for access control of networked applications. In contrast, ZTC applies ZTN principles together with trusted-computing principles to the architecture of any distributed software application.

https://trustedcomputinggroup.org
https://github.com/WebOfTrustInfo/rwot7-toronto/blob/master/final-documents/A_DID_for_everything.pdf
https://github.com/WebOfTrustInfo/rwot10-buenosaires/blob/master/final-documents/quantum-secure-dids.pdf

The core idea of zero trust is end-to-end verifiability of all operations in the system. The type of operation is application dependent; the verifiability is cryptographic. One of the most important (and most relevant to git) types of end-to-end verifiability is authenticity via non-repudiable signatures. A signature is computed over a hash (digest) of the data, so it secures both the integrity of the data and its attribution to its source.

In trusted computing one starts with secure roots-of-trust upon which one may then build the rest of the system. In distributed trusted computing the root-of-trust is a verifiable data structure.

https://www.continusec.com/static/VerifiableDataStructures.pdf
https://transparency.dev/verifiable-data-structures/
https://www.bbva.com/en/on-building-a-verifiable-log-part-1-core-ideas/

The point is that a verifiable data structure provides an end-verifiable proof of some state. It becomes a verifiable state machine, which means any software application may be made verifiable using verifiable data structures. The verifiable data structure provides a secure root-of-trust that satisfies the end-verifiability principle of zero-trust computing needed for distributed systems. An open end-verifiable system may exhibit ambient verifiability, that is, any copy is verifiable by anyone, anywhere, at any time.

One of the simplest forms of a verifiable data structure is a hash-chained, signed, append-only log such as a provenance log (as proposed above by @dwh); a minimal sketch of such a log follows below. A variant would be a hash-chained, signed DAG. The degree of security or cryptographic strength of the log is a function of the cryptographic strength of both the digest and signature operations. Unlike what is popularly portrayed in movies, a cryptosystem with at least 128 bits of cryptographic strength is practically infeasible to attack by brute force, i.e. it is effectively impervious to brute-force attack. Instead, a practical attack must be some sort of side-channel attack, usually against one of three targets: the key creation and storage infrastructure, the data signing infrastructure, or the signature verification infrastructure.
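To make the provenance-log idea above more concrete, here is a minimal sketch of a hash-chained, signed, append-only log in Python. It is illustrative only, not dwh's proposal or KERI itself; the names are hypothetical, and it assumes the pyca/cryptography package for Ed25519 signatures and Python's hashlib for Blake2b digests.

import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def digest(data: bytes) -> bytes:
    # Blake2b with a 256-bit (32-byte) output.
    return hashlib.blake2b(data, digest_size=32).digest()


class ProvenanceLog:
    """Hash-chained, signed, append-only log (illustrative sketch only)."""

    GENESIS = b"\x00" * 32  # placeholder "prior digest" for the first entry

    def __init__(self, signing_key: Ed25519PrivateKey):
        self.signing_key = signing_key
        self.verify_key = signing_key.public_key()
        self.entries = []  # each entry is {"body": bytes, "sig": bytes}

    def append(self, payload: bytes) -> None:
        # Each entry commits to the digest of the prior entry's body (chaining)
        # and is signed for non-repudiable attribution to the key holder.
        prior = digest(self.entries[-1]["body"]) if self.entries else self.GENESIS
        body = prior + payload
        self.entries.append({"body": body, "sig": self.signing_key.sign(body)})

    def verify(self) -> bool:
        # End-verifiable: any copy of (entries, public key) can be checked
        # without trusting the party that delivered the copy.
        prior = self.GENESIS
        for entry in self.entries:
            if not entry["body"].startswith(prior):
                return False  # hash chain broken
            try:
                self.verify_key.verify(entry["sig"], entry["body"])
            except InvalidSignature:
                return False  # signature does not verify
            prior = digest(entry["body"])
        return True


# Usage: build a log, then verify any copy of it with only the public key.
log = ProvenanceLog(Ed25519PrivateKey.generate())
log.append(b"first event")
log.append(b"second event")
assert log.verify()

Anyone holding a copy of the entries and the signer's public key can run verify(), which is the ambient verifiability property mentioned above; what remains as attack surface is exactly the three side-channel targets just listed.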
https://github.com/SmithSamuelM/Papers/blob/master/whitepapers/IdentifierTheory_web.pdf

For the first two (key creation/storage and data signing) there are many well-known techniques such as secure enclaves, TPMs, HSMs, and TEEs, as well as threshold structures like multi-sig, that may provide arbitrarily high levels of security. The third target, signature verification, usually depends on using secure code libraries. But the last two, namely the data signing and signature verification infrastructure, require secure delivery of that code as integrated into the application that consumes it. The result is that when designing zero-trust computing systems based on verifiable data structures, the weakest link is a side-channel attack; the weakest link for side-channel attacks is often the secure code delivery mechanism; and the weakest link for secure code delivery is often git.

What dwh is proposing is converting git from a software application with what the security community would consider antiquated security to a best-of-breed security system based on zero-trust computing principles. This conversion does not come from imbuing git with its own security system for end-verifiable authenticity but instead from layering git on top of a secure end-verifiable authenticity layer outside of git. This layering is enabled by using self-describing cryptographic primitives inside a self-describing verifiable data structure. Self-describing verifiable data structures are to the security world what JSON is to the API world. By using self-describing primitives (such as a self-describing hash) in git's data structures, those data structures become end-verifiable data structures themselves.

A signature on a secure digest is a convenient way of making secure attribution to the associated data without signing the data itself. But this requires that the digest be at least as secure as the signature. A secure digest also has the property of post-quantum protection, so a secure digest such as Blake2b, SHA3, or Blake3 can be used to protect non-post-quantum-proof signature schemes from a surprise quantum attack.

One of the essential properties of any good cryptographic system is what is called cryptographic algorithm agility. Without it the system cannot easily adapt to new attacks and newly discovered weaknesses in cryptographic algorithms. Self-describing cryptographic primitives are the most convenient enabler of cryptographic agility (see the sketch below). One advantage of signed hash-chained provenance logs is that the whole log must be compromised, not merely one part of it. Such a log that exhibits agility, especially through self-describing primitives, is self-healing in the sense that new appendages to the log may use stronger crypto primitives which protect earlier entries in the log that use weaker primitives. This makes the log (or any such agile self-describing verifiable data structure) future-proof. It is the best practice for designing distributed (over the internet) zero-trust computing applications.

It is my prediction that over the next few years there will be a rapid switchover to zero-trust computing architectures based on self-describing verifiable data structures for distributed internet applications. It is the most elegant, most decentralized solution to the security problems of distributed internet applications. Because of git's important role in code creation and delivery, it should IMHO be leading in this space, and dwh's proposal does just that.
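As an illustration of how self-describing primitives enable algorithm agility, here is a small sketch in the same vein as the log above. The one-character codes are hypothetical (loosely in the spirit of CESR or multihash, not any standard's actual code table); the point is only that a primitive which carries its own algorithm identifier lets later entries switch to stronger algorithms while earlier entries remain verifiable.

import hashlib

# Map of self-describing codes to digest functions (easy to extend later).
# These codes are made up for illustration, not any standard's codes.
DIGESTS = {
    "B": lambda data: hashlib.blake2b(data, digest_size=32).digest(),  # Blake2b-256
    "S": lambda data: hashlib.sha3_256(data).digest(),                 # SHA3-256
    "K": lambda data: hashlib.blake2b(data, digest_size=64).digest(),  # Blake2b-512 (stronger)
}


def self_describing_digest(data: bytes, code: str = "B") -> str:
    # The code travels with the digest, so a verifier needs no out-of-band
    # agreement about which algorithm was used.
    return code + DIGESTS[code](data).hex()


def verify_digest(data: bytes, sd_digest: str) -> bool:
    code, hexdigest = sd_digest[0], sd_digest[1:]
    return DIGESTS[code](data).hex() == hexdigest


# Agility: earlier entries may carry "B" or "S" digests; later entries can
# switch to "K" (or a future, larger digest) without breaking the verifier,
# because each value says how to verify itself.
old = self_describing_digest(b"early log entry", "S")
new = self_describing_digest(b"later log entry", "K")
assert verify_digest(b"early log entry", old)
assert verify_digest(b"later log entry", new)

In a signed provenance log, a later entry that digests earlier entries with a stronger algorithm then protects those earlier entries, which is the self-healing property described above.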
Not fixing git in this way will eventually force workarounds for anyone seriously implementing zero-trust architectures. This will result in non-standard, usually proprietary, implementations of access control mechanisms in an attempt to fix up the relatively antiquated security of git tooling. This will be bad for everyone, as it will balkanize git tooling along proprietary access control mechanisms (which is already happening). An open, interoperable, zero-trust, future-proofed, secure git requires that git be secured by a verifiable substrate such as dwh is proposing, not by some antiquated mechanism as is the case today.