Hi guys,

I'm looking for ways to improve fetch/pull/clone times for large Git (mono)repositories that contain unrelated source trees spanning multiple services.

I've found the sparse checkout approach appealing and helpful for most client-side operations (e.g. status, reset, commit, etc.). The problem is that Git has no comparable sparse fetch/pull feature, which means that ALL objects in unrelated trees are always fetched. For large repositories this can take a lot of time, and it results in practical scalability limits for Git. This forced some large companies, like Facebook and Google, to move to Mercurial, as they were unable to improve the client-side experience with Git, while Microsoft developed GVFS, which seems like a step back toward the CVCS world.

I'd like to get feedback (from more experienced Git users than I am) on what it would take to implement sparse fetching/pulling, i.e. downloading only the objects related to the sparse-checkout list:

- Are there any issues with missing hashes?
- Are there any fundamental problems why it can't be done?
- Can we get away with only client-side changes, or would it require special features on the server side?

If we had such a feature, then all we would need on top is a separate tool that builds the right "sparse" scope for the workspace based on the paths a developer wants to work on.

In a world where more and more companies are moving toward large monorepos, this improvement would provide a good way of scaling Git to meet that demand.

PS. Please don't advise splitting things up; there are some good reasons, which you can easily find online, why many companies decide to keep their code in a monorepo. So let's keep that part out of the scope.

-Vitaly
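For concreteness, the client-side sparse checkout behavior described above can be reproduced in a throwaway repo like this (a minimal sketch; the `services/payments` and `services/search` paths are made-up placeholders, and `git sparse-checkout` requires Git >= 2.25):

```shell
#!/bin/sh
set -e

# Build a throwaway repo with two unrelated service trees.
git init -q demo
mkdir -p demo/services/payments demo/services/search
echo 'package payments' > demo/services/payments/main.go
echo 'package search'   > demo/services/search/main.go
git -C demo add -A
git -C demo -c user.name=demo -c user.email=demo@example.com \
    commit -qm 'initial import'

# Restrict the working tree to a single service.
git -C demo sparse-checkout set services/payments

# Only the selected tree is materialized in the working tree now; the
# other tree is hidden. But note that its objects are still present in
# the local object database -- a plain clone/fetch downloads everything,
# which is exactly the problem described above.
ls demo/services
```

This is what makes status/reset/commit fast on a narrowed workspace, while doing nothing for the transfer cost of clone and fetch.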
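As for what a sparse fetch could look like on the wire: it does require server-side support, and later versions of Git grew exactly this shape under the name "partial clone" (`--filter=blob:none` plus on-demand blob fetching). A minimal local sketch, assuming a reasonably recent Git on both sides (the repo layout and path names are placeholders; `uploadpack.allowFilter` is the server-side switch that permits object filtering):

```shell
#!/bin/sh
set -e

# "Server" side: a repo with two unrelated trees, with filtering enabled.
git init -q server
mkdir -p server/services/payments server/services/search
echo 'package payments' > server/services/payments/main.go
echo 'package search'   > server/services/search/main.go
git -C server add -A
git -C server -c user.name=demo -c user.email=demo@example.com \
    commit -qm 'initial import'
git -C server config uploadpack.allowFilter true

# "Client" side: clone commits and trees but no blobs (--filter=blob:none),
# start with a sparse working tree (--sparse), then widen the sparse scope.
# Only the blobs under services/payments are fetched on demand; the blobs
# under services/search never leave the server.
git clone -q --no-local --filter=blob:none --sparse server client
git -C client sparse-checkout set services/payments

ls client/services
```

The `--no-local` flag is only there to force the normal transport for a same-machine clone; against a real remote it would be dropped. The open questions in the post (missing hashes, server cooperation) map directly onto how this works: the client's object database is deliberately incomplete, and missing blobs are fetched lazily from the promisor remote.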