I've done some searching around the Internet and the mailing lists, and reached out on IRC a couple of days ago... and haven't found anyone else asking about a long-brewing contribution idea that I'd finally like to implement. First I wanted to run it by you guys, though, since this is my first time reaching out.

Assuming my idea doesn't contradict other best practices or standards already in place, I'd like to transform the typical `git clone` flow from:

    Cloning into 'linux'...
    remote: Enumerating objects: 4154, done.
    remote: Counting objects: 100% (4154/4154), done.
    remote: Compressing objects: 100% (2535/2535), done.
    remote: Total 7344127 (delta 2564), reused 2167 (delta 1612), pack-reused 7339973
    Receiving objects: 100% (7344127/7344127), 1.22 GiB | 8.51 MiB/s, done.
    Resolving deltas: 100% (6180880/6180880), done.

to, for subsequent clones (until the cache is invalidated), this "flattened cache" version, presumably built while fulfilling the first clone request above:

    Cloning into 'linux'...
    Receiving cache: 100% (7344127/7344127), 1.22 GiB | 8.51 MiB/s, done.

I've always imagined that this feature would apply only to a "vanilla" clone (that is, one without any flags that change the end result)... but that's only because I haven't actually cracked open the `git` codebase yet to validate or invalidate my guesses about this feature's complexity.

I'm writing in hopes that someone else has thought about it... and might share what they already know. :P

Thanks so much for your time!

Sincerely,
Caleb
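
P.S. To make the idea a bit more concrete: the closest existing analogy I know of is `git bundle`, so here's a rough sketch of the effect I'm picturing, approximated with that command (the server-side paths here are hypothetical, purely for illustration):

    # On the server, after (or while) fulfilling the first clone:
    # snapshot the whole repository into a single pre-packed file.
    git -C /srv/git/linux.git bundle create /var/cache/git/linux.bundle --all

    # Subsequent "vanilla" clones could then be answered from that
    # flattened file, instead of enumerating, compressing, and
    # delta-resolving the same objects all over again:
    git clone /var/cache/git/linux.bundle linux

The real feature would presumably live in the server's pack-serving path rather than shelling out like this, but the end effect, serving one pre-flattened file instead of recomputing the pack per clone, is what I have in mind.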