Hi, I posted an "Is this possible?" question on stackoverflow (https://stackoverflow.com/q/61326025/74296) and was pointed here. I understand from recent updates that there is increasing built-in support for large files and large repos, between some of the older capabilities (sparse checkout in general and shallow clone), and the newer ones (partial-clone and git-sparse-checkout). I'm playing with a large repo, and finding some "rough edges" around large diffs (eg 200,000 files "added" in the "initial" commits of shallow clones). I was hoping these could be smoothed out when using sparse checkout (where each user would only see say 30,000 of those 200,000 files), but can't figure out a way to easily & consistently apply the .git/info/sparse-checkout specification to tools like git-diff and git-log (across many users with some semblance of consistency). Is this something that is or is expected to be supported at some point? While I'm asking, I have two less-important questions: 1) Are there any plans to support a filter along the lines of "keep blobs used for commits since date X handy"? I know I can do a shallow clone, then turn on filtering/promisors, and then unshallow, but then later fetches don't bring in binaries - a mode that provides this "full commit history but recent blobs only" might be nice? (I imagine that's probably non-trivial, because the filters are probably based on properties of the blobs themselves... but one can dream?) 2) Is there a target date for when git-sparse-checkout will become non-experimental? Thanks for any help, my apologies if my questions are too forward. Best regards, Tao Klerks