Junio C Hamano <gitster@xxxxxxxxx> wrote on Tue, Aug 9, 2022 at 00:15:
>
> "ZheNing Hu via GitGitGadget" <gitgitgadget@xxxxxxxxx> writes:
>
> > From: ZheNing Hu <adlternative@xxxxxxxxx>
> >
> > We already have `--filter=sparse:oid=<oid>`, which can be used to
> > clone a repository with only the objects that match the filter
> > rules in the file corresponding to <oid> on the git server.  But
> > it can only read filter rules that have already been recorded on
> > the git server.
>
> Was the reason why we have "we limit to an object we already have"
> restriction because we didn't want to blindly use a piece of
> uncontrolled arbitrary end-user data here?  Just wondering.
>

* An end-user may not even have write access to the repository, so
  they cannot add a filterspec file to the server before cloning
  (the first sketch at the end of this message shows the current
  workflow).  What should they do then?

* If thousands of different developers use the same repository and
  each wants a different partial clone via "--filter=sparse:oid",
  how many filterspec files would the repository administrators have
  to set up in advance?

* Why not carefully validate the "uncontrolled arbitrary end-user
  data" instead?  For example, add a config such as
  "partialclone.sparsebufferlimit" to limit the size of the
  transported filterspec, or check that the filterspec file is
  well-formed.  And if a git server does not trust its users at all,
  a config can ban this filter outright (see the second sketch at
  the end of this message); within some companies, users can
  basically be trusted.

* I am sure it would be beneficial to let the filtering rules be
  supplied by the user, because many people have this need now:
  download only a few files or directories of a repository.

* sparse-checkout + partial clone is a good reference (a sketch of
  that workflow is at the end of this message): we have
  ".git/info/sparse-checkout" to record what we actually want to
  check out into the work tree, and git fetches the missing objects
  recorded for those paths from the server.  I know it fetches the
  objects one by one by <oid> instead of by "path"... and in
  hindsight its performance is extraordinarily bad as a result...

Anyway, this patch represents some of my complaints about the
current partial-clone feature, and I hope the community will move
forward on it.

Thanks.

ZheNing Hu
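
Sketch 1, for reference: roughly how the existing sparse:oid filter is
used today, and why it needs a filterspec recorded on the server
first.  The file name "filterspec", the patterns in it, the branch
name "main" and the <url> are only placeholders for illustration:

    # Someone with write access has to commit the filter rules first:
    $ echo "docs/" > filterspec
    $ echo "src/frontend/" >> filterspec
    $ git add filterspec
    $ git commit -m "add filterspec for partial clones"
    $ git push origin main

    # Only then can a client point the server at that blob, e.g. via
    # a <rev>:<path> expression that the server resolves:
    $ git clone --filter=sparse:oid=main:filterspec <url>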
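
Sketch 2: if a server operator does not trust its users at all,
upload-pack can already ban individual filters; assuming a Git
version that has the uploadpackfilter.* knobs, something along these
lines on the server side should do:

    # Allow partial-clone filters in general, but refuse sparse:oid:
    $ git config --system uploadpack.allowFilter true
    $ git config --system uploadpackfilter.sparse:oid.allow false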
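
Sketch 3: the sparse-checkout + partial-clone workflow the last
bullet refers to.  The directory names, the branch name "main" and
the <url> are placeholders:

    # Clone without blobs and without populating the work tree:
    $ git clone --filter=blob:none --no-checkout <url> repo
    $ cd repo

    # Record the paths we actually want in .git/info/sparse-checkout:
    $ git sparse-checkout init --cone
    $ git sparse-checkout set docs src/frontend

    # The missing blobs for those paths are lazily fetched here:
    $ git checkout main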