Jeff King <peff@xxxxxxxx> writes:

> I think we really want to avoid doing that normalization ourselves if
> we can. There are just too many filesystem-specific rules.

Exactly; not having to learn these rules is the major (if not the whole)
point of the "let checkout notice the collision and then deal with it"
approach.  Let's not forget that.

> If we have an equivalence-class hashmap and feed it inodes (or again,
> some system equivalent) as the keys, we should get buckets of
> collisions.

I guess one way to get "some system equivalent" that can be used as the
last resort, when there absolutely is no inum equivalent, is to rehash
the working tree file that shouldn't be there when we detect a
collision.

If we find that something is already there when we try to write out
"Foo.txt", then opening that "Foo.txt" in the working tree and running
hash-object on it should find the matching blob somewhere in the index
_before_ "Foo.txt".  On a case-insensitive filesystem that entry may
well be "foo.txt", but we do not even have to know that "foo.txt" and
"Foo.txt" differ only in case.

Of course, that is really the last resort, as it would be costly, but
it is something that only needs to happen in the "unusual" case in the
error codepath, so...
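
Roughly sketched (this is illustration only, not actual checkout code;
hash_existing_file() is a made-up stand-in for hashing the working-tree
file the way "git hash-object" would), the last-resort lookup could be
something along these lines:

	#include "cache.h"

	/*
	 * Sketch: after the attempt to write out "path" notices that
	 * something is already there, hash that existing file and look
	 * for the same blob among index entries that sort before "path".
	 * hash_existing_file() is a hypothetical helper, not a real one.
	 */
	static int find_colliding_entry(struct index_state *istate,
					const char *path)
	{
		struct object_id oid;
		int pos, i;

		if (hash_existing_file(path, &oid) < 0)
			return -1;	/* cannot read or hash it; give up */

		pos = index_name_pos(istate, path, strlen(path));
		if (pos < 0)
			pos = -pos - 1;	/* where "path" would be inserted */

		for (i = 0; i < pos; i++) {
			const struct cache_entry *ce = istate->cache[i];
			if (oideq(&ce->oid, &oid))
				return i;	/* e.g. "foo.txt" vs "Foo.txt" */
		}
		return -1;	/* no earlier entry matches */
	}

Only the entries before "path" need to be scanned, because whatever is
already there must have been written out for an entry that checkout
processed earlier.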