Re: [PU PATCH] Fix git fetch for very large ref counts

"Julian Phillips" <jp3@xxxxxxxxxxxxxxxxx> writes:

> The updated git fetch in pu is vastly improved on repositories with very
> large numbers of refs.  The time taken for a no-op fetch over ~9000 refs
> drops from ~48m to ~0.5m.
>
> However, before git fetch will actually run on a repository with ~9000
> refs, the calling interface between fetch and fetch--tool needs to be
> changed.  The existing version passes the entire reflist on the command
> line, which means it is subject to the maximum combined argument and
> environment size that execve will pass to a child process.
>
> The following patches add a stdin based interface to fetch--tool allowing
> the ~9000 refs to be passed without exceeding the environment limit.

Thanks.

But the ones in 'pu' were done primarily as a demonstration of
where the bottlenecks are, and were not meant for real-world
consumption.  I think shaving the final 0.5m down to a few
seconds needs to move the ls_remote_result string, currently kept
as a shell variable, to a list of strings in a git-fetch largely
rewritten in C.  At that point the interface for throwing 9000
refs at fetch--tool from the outside would become an internal
function call, and the code you fixed, along with the new
function you introduced, would probably need to be discarded.
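
A rough sketch of that direction, again with invented names and not
actual git code: once git-fetch itself is C, the ls-remote result can
stay in memory as a list of refs and be handed to the matching logic as
an ordinary function argument, with no fetch--tool process, command
line, or stdin in between.

#include <stdio.h>
#include <string.h>

/*
 * Hypothetical sketch: the ls-remote result as an in-memory list,
 * passed directly to the ref-matching code; nothing ever crosses
 * a process boundary, so 9000 refs is just a loop.
 */
struct remote_ref {
	const char *sha1_hex;		/* 40-hex object name */
	const char *name;		/* e.g. "refs/heads/master" */
	struct remote_ref *next;
};

/* Stand-in for the work fetch--tool currently does in a child process. */
static int match_refs(struct remote_ref *refs,
		const char **refspecs, int nr_refspecs)
{
	struct remote_ref *r;
	int i, matched = 0;

	for (r = refs; r; r = r->next)
		for (i = 0; i < nr_refspecs; i++)
			/* crude prefix match; real refspec rules are richer */
			if (!strncmp(r->name, refspecs[i],
					strlen(refspecs[i]))) {
				matched++;
				break;
			}
	return matched;
}

int main(void)
{
	struct remote_ref tag = {
		"1111111111111111111111111111111111111111",
		"refs/tags/v1.0", NULL
	};
	struct remote_ref head = {
		"0000000000000000000000000000000000000000",
		"refs/heads/master", &tag
	};
	const char *specs[] = { "refs/heads/" };

	printf("%d ref(s) matched\n", match_refs(&head, specs, 1));
	return 0;
}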




