Re: Unpredictable peak memory usage when using `git log` command

On Fri, Aug 30, 2024 at 05:06:07PM -0400, Jeff King wrote:
> On Fri, Aug 30, 2024 at 03:20:15PM +0300, Yuri Karnilaev wrote:
> 
> > 2. Processing commits in batches:
> > ```
> > /usr/bin/time -l -h -p git log --ignore-missing --pretty=format:%H%x02%P%x02%aN%x02%aE%x02%at%x00 -n 1000 --skip=1000000 --numstat > 1.txt
> > ```
> > [...]
> > Operating System: Mac OS 14.6.1 (23G93)
> > Git Version: 2.39.3 (Apple Git-146)
> 
> I sent a patch which I think should make things better for you, but I
> wanted to mention two things in a more general way:
> 
>   1. You should really consider building a commit-graph file with "git
>      commit-graph write --reachable". That will reduce the memory usage
>      for this case, but also improve the CPU quite a bit (we won't have
>      to open those million skipped commits to chase their parent
>      pointers).
> 
>      I haven't kept up with the defaults for writing graph files. I
>      thought gc.writeCommitGraph defaults to "true" these days, though
>      that wouldn't help in a freshly cloned repository (arguably we
>      should write the commit graph on clone?).

It does indeed default to true. There is also an option to write the
commit graph on fetch via "fetch.writeCommitGraph", but that setting
defaults to false. To the best of my knowledge there is no option to
generate one on clone, but I agree that it would be sensible to have
such a thing.
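For reference, a minimal sketch of how one might apply the suggestions
above by hand (the command and config keys are the ones named in this
thread; whether the background writes help depends on your workflow):

```shell
# One-time write of the commit-graph for all reachable commits.
# The file lands under .git/objects/info/ and speeds up history
# traversal (e.g. the deep --skip in git log from the report).
git commit-graph write --reachable

# Keep it updated automatically during gc (already the default
# in recent Git versions, shown here for explicitness):
git config gc.writeCommitGraph true

# Optionally also update it on fetch (off by default):
git config fetch.writeCommitGraph true
```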

Patrick
