On 2022-03-18 at 14:44:53, Ricard Valverde wrote:
> Thank you for filling out a Git bug report!
> Please answer the following questions to help us understand your issue.
>
> What did you do before the bug happened? (Steps to reproduce your issue)
> Do a git fetch --depth=1, with no remote changes; having a lot of
> remote branches makes the issue more visible
>
> What did you expect to happen? (Expected behavior)
> Shallow fetch should be faster than, or as fast as, a full fetch
>
> What happened instead? (Actual behavior)
> Shallow fetch is about 10x slower than full fetch, consistently
>
> What's different between what you expected and what actually happened?
> I expected the shallow fetch to be faster than a normal fetch, or as
> fast. I did not expect it to be 10x slower.
>
> Anything else you want to add:
> Tested in a repository with ~10000 remote branches and a single
> GitHub-hosted remote.
> Also tested with a local remote and Git version 2.35.1, with
> comparable results.

The reason you're seeing this is that fetching into an existing shallow repository is extremely expensive on the server side. With a normal fetch, we know that the user only needs the objects which exist between the old and new heads, and that they have every object reachable from the old heads. With a shallow fetch, however, we can't assume that, since by definition the client doesn't have all of those objects, so substantially more work must be done to determine what the client already has. This is made worse by the fact that you have 10,000 branches.

If you're working in a CI system or similar, you should use a fresh shallow clone each time, which will be much faster and easier on the server. Otherwise, you may find that significantly reducing the number of refs also helps performance. If you can guarantee that you'll be online when working with this project, a partial clone may also meet your needs.

-- 
brian m. carlson (he/him or they/them)
Toronto, Ontario, CA
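The two alternatives above can be sketched as follows. This is a minimal, self-contained demo: the throwaway local "origin-repo" stands in for your real remote, and the directory names are placeholders, not anything from the original report.

```shell
set -e
cd "$(mktemp -d)"

# Throwaway repository standing in for the remote, with two commits.
git init -q origin-repo
git -C origin-repo -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m one
git -C origin-repo -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m two
# Partial clone requires the server to permit object filtering.
git -C origin-repo config uploadpack.allowfilter true

# Option 1: a fresh shallow clone each CI run. Only the tip commit
# is transferred (--depth implies --single-branch).
git clone -q --depth=1 "file://$PWD/origin-repo" shallow-clone
git -C shallow-clone rev-list --count HEAD   # prints 1

# Option 2: a blobless partial clone. Full history, but file contents
# are fetched on demand, so you must be online when working with it.
git clone -q --filter=blob:none "file://$PWD/origin-repo" partial-clone
git -C partial-clone rev-list --count HEAD   # prints 2
```

Note the trade-off: the shallow clone has no history beyond the tip, while the partial clone has complete history and can run `git log` offline but needs server access the first time each blob is read.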