On Wed, 3 May 2023 at 02:17, Felipe Contreras <felipe.contreras@xxxxxxxxx> wrote:
>
> Hi,
>
> Changing the subject as this message seems like a different topic.
>
> Jeff King wrote:
> > On Wed, Apr 26, 2023 at 02:33:30PM -0700, Junio C Hamano wrote:
> > > "brian m. carlson" <sandals@xxxxxxxxxxxxxxxxxxxx> writes:
> > >
> > > > `GIT_DEFAULT_HASH`::
> > > >   If this variable is set, the default hash algorithm for new
> > > >   repositories will be set to this value. This value is currently
> > > > + ignored when cloning if the remote value can be definitively
> > > > + determined; the setting of the remote repository is used
> > > > + instead. The value is honored if the remote repository's
> > > > + algorithm cannot be determined, such as some cases when
> > > > + the remote repository is empty. The default is "sha1".
> > > > + THIS VARIABLE IS EXPERIMENTAL! See `--object-format`
> > > > + in linkgit:git-init[1].
> > >
> > > We'd need to eventually cover all the transports (and non-transport
> > > like the "--local" optimization) so that the object-format and other
> > > choices are communicated from the origin to a new clone anyway, so
> > > this extra complexity "until X is fixed, it behaves this way, but
> > > otherwise the variable is read in the meantime" may be a disservice
> > > to the end users, even though it may make it easier in the shorter
> > > term for maintainers of programs that rely on the buggy "git clone"
> > > that partially honored this environment variable.
> > > In short, I am still not convinced that the above is a good design
> > > choice in the longer term.
> >
> > I also think it is working against the backwards-compatible design of
> > the hash function transition.
>
> To be honest this whole approach seems to be completely flawed to me and
> against the whole design of git in the first place.
>
> In a recent email Linus Torvalds explained why object ids were
> calculated based on {type, size, data} [1], and he explained very clearly
> that two objects with exactly the same data are not supposed to have the
> same id if the type is different. He said:

--- quote-begin ---

The "no aliasing" means that no two distinct pointers can point to the
same data. So a tagged pointer of type "commit" can not point to the
same object as a tagged pointer of type "blob". They are distinct
pointers, even if (maybe) the commit object encoding ends up then being
identical to a blob object.

--- quote-end ---

As far as I could tell he didn't really explain *why* he wanted this,
and IMO it is non-obvious why he would care if a blob and a commit had
the same text, and thus the same ID. He just said he didn't want it to
happen, not why. I can imagine some aesthetic reasons why you might
want to ensure that no blob has the same ID as a commit, and I can
imagine it might make debugging easier at certain points, but it seems
unnecessary given the data is write-once.

> If even the tiniest change such as adding a period to a commit message
> changes the object id (and thus semantically makes it a different
> object), then it makes sense that changing the type of an object also
> changes the object id (and thus it's also a different object).
>
> And because the id of the parent is included in the content of every
> commit, the top-level id ensures the integrity of the whole graph.
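To make the mechanics concrete: an object id is just the hash of a
short "<type> <size>\0" header followed by the raw data, so the type is
part of what gets hashed by construction. A rough Python sketch (the
function name is mine, purely for illustration, but it should match how
git names loose objects under SHA-1):

    import hashlib

    def git_object_id(obj_type: bytes, data: bytes) -> str:
        # Hash the loose-object header "<type> <size>\0" followed by
        # the raw data; this is how git derives SHA-1 object ids.
        header = obj_type + b" " + str(len(data)).encode() + b"\0"
        return hashlib.sha1(header + data).hexdigest()

    payload = b"hello\n"
    # Identical bytes, different type => different id.
    print(git_object_id(b"blob", payload))
    print(git_object_id(b"commit", payload))

Identical data hashed as "blob" and as "commit" therefore get different
ids, which is exactly the "no aliasing" property discussed above.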
>
> But then comes this notion that the hash algorithm is a property of the
> repository, and not part of the object storage, which means changing the
> whole hash algorithm of a repository is considered less of a change than
> adding a period to the commit message, worse: not a change at all.

I really don't understand why you think having two hash functions
producing different results for the same data is comparable to a single
hash producing different results for different data. In one case you
have two different continua of identifiers, with one ID per continuum,
and in the other you have two different identifiers in the same
continuum, and if you added a continuum you would have 4 different
identifiers, right? Eg, the two cases are really quite different at a
fundamental level.

> I am reminded of the warning Sam Smith gave to the Git project [2] which
> seemed to be unheard, but the notion of cryptographic algorithm agility
> makes complete sense to me.
>
> In my view one repository should be able to have part SHA-1 history,
> part SHA3-256 history, and part BLAKE2b history.

Isn't this orthogonal to your other points?

> Changing the hash algorithm of one commit should change the object id of
> that commit, and thus make it semantically a different commit.
>
> In other words: an object of type "blob" should never be confused with
> an object of type "blob:sha-256", even if the content is exactly the
> same.

This doesn't make sense to me. As long as we can distinguish the hashes
produced by the different hash functions in use, we can create a
mapping of the data that is hashed such that we have a 1:1 mapping of
identifiers of each type, at which point it really doesn't matter which
hash function is used.

> The fact that apparently it's so easy to clone a repository with
> the wrong hash algorithm should give developers pause, as it means the
> whole point of using cryptographic hash algorithms to ensure the
> integrity of the commit history is completely gone.

This is a leap too far. The fact that it is "so easy to clone a repo
with the wrong hash algorithm" is completely orthogonal to the
fundamental principles of hash identifiers from strong hash functions.
You seem to be deriving grand conclusions from what sounds to me like a
simple bug/design-oversight.

> I have not been following the SHA-1 -> OID discussions, but I
> distinctly recall Linus Torvalds mentioning that the choice of using
> SHA-1 wasn't even for security purposes, it was to ensure integrity.
> When I do a `git fetch` as long as the new commits have the same SHA-1
> as parent as the SHA-1s I have in my repository I can be relatively
> certain the repository has not been tampered with. Which means that if I
> do a `git fetch` that suddenly brings SHA-256 commits, some of them must
> have SHA-1 parents that match the ones I currently have. Otherwise how
> do I know it's the same history?

So consider what /could/ happen here. You fetch a commit which uses
SHA-256 into a repo where all of your local commits use SHA-1. The
commit you fetched says its parent is some SHA-256 ID you don't know
about, as all your IDs are SHA-1. So git then could go and construct an
index, hashing each item using SHA-256 instead of SHA-1, and using the
result to build a bi-directional mapping from SHA-1 to SHA-256 and
back. All it has to do then is look into the mapping to find if the
SHA-256 parent id is present in your repo. If it is then you know it's
the same history.
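Concretely, the mapping could look something like this minimal Python
sketch (the helper names are mine, and it follows the simplified
"re-hash every object" description above):

    import hashlib

    def object_id(obj_type: bytes, data: bytes, algo: str) -> str:
        # Hash "<type> <size>\0<data>" with the chosen algorithm,
        # e.g. "sha1" or "sha256".
        header = obj_type + b" " + str(len(data)).encode() + b"\0"
        return hashlib.new(algo, header + data).hexdigest()

    def build_translation_table(objects):
        # objects: iterable of (obj_type, data) pairs walked out of
        # the local object store.  Returns both directions of the map.
        sha1_to_sha256, sha256_to_sha1 = {}, {}
        for obj_type, data in objects:
            s1 = object_id(obj_type, data, "sha1")
            s256 = object_id(obj_type, data, "sha256")
            sha1_to_sha256[s1] = s256
            sha256_to_sha1[s256] = s1
        return sha1_to_sha256, sha256_to_sha1

Checking whether a fetched SHA-256 parent is already part of your
history is then just "parent_oid in sha256_to_sha1". The caveat is that
commits and trees embed the ids of the objects they reference, so a
faithful mapping has to translate those embedded ids per algorithm
rather than just re-hash the same bytes; as I understand it, that is
roughly what the translation table in git's hash-function-transition
design is there for.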
The key point here is that if you ignore SHAttered artifacts (which
seems reasonable as you can detect the attack during hashing) you can
build a 1:1 map of SHA-1 and SHA-256 ids. Once you have that mapping it
doesn't matter which ID is used.

> Maybe that's one of the reasons people don't seem particularly eager to
> move away from SHA-1:

Maybe, but it doesn't make sense to me. You seem to be putting undue
weight on an unnecessary aspect of the git design: there doesn't seem
to be a reason for Linus's "no aliasing" policy, and it seems like one
could build a git-a-like without it and not suffer any significant
penalties. Regardless, provided that the hash functions allow a 1:1
mapping of IDs (which is assumed by using "collision free hash
functions"), it seems like it really doesn't matter which hash is used
at any given time.

cheers,
Yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"