Re: Bad git status performance


Michael J Gruber wrote:
> Experimenting further: using 10 files of 10MB each (rather than 100
> files of 1MB) brings down the time roughly by a factor of 10 - and so
> does using 100 files of 100k each. Huh? The latter may be expected
> (10MB total), but the former (100MB total)?

100 files of 100k each gives me 1.73s, so about a 10x speedup.  So
it seems git indeed looks at the content of the files, and having a
tenth of the content makes it ten times as fast.
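
For reference, here's roughly the kind of setup I'm timing (a sketch
only; the file names, the /tmp path, and the GNU dd/seq invocations
are my own choices, not anything fixed by the thread):

    # Set up a repo with 100 tracked files of 100k each.
    mkdir /tmp/status-test && cd /tmp/status-test
    git init -q
    for i in $(seq 1 100); do
        dd if=/dev/zero of=file$i bs=1k count=100 2>/dev/null
    done
    git add . && git commit -q -m 'initial'
    # Truncate every file to a tiny one, as with "echo >":
    for i in $(seq 1 100); do echo x >file$i; done
    time git status >/dev/null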

Interestingly, using only a single file of 100MB gives me 0.6s.
That's still very slow for the job of telling that a 100MB file
is not equal to a 1-byte file.  And there's certainly no rename
detection going on with a single file.
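
In principle the stat information alone should settle that: if the
sizes differ, the contents cannot be equal, and not a single byte of
either file needs to be read.  A minimal sketch of that check
(stat -c is GNU coreutils syntax):

    # If the sizes differ, the files differ; no content read needed.
    if [ "$(stat -c %s big-file)" != "$(stat -c %s tiny-file)" ]; then
        echo "files differ (decided by size alone)"
    fi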

> Now it's getting funny: changing your "echo >" to "echo >>" (in your
> 100 files x 1MB case) makes things "almost fast" again (1.3s).

Same here, and that's pretty interesting, because in this situation
I can understand the slowdown: comparing two 1MB files that
differ only at their ends is expected to take some time, since you
have to read through the entire file before you notice they're not
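
That cost is easy to see with a plain byte-wise comparison such as
cmp, which has to read both files up to the first differing byte:

    # Two 1MB files that differ only at the very end:
    dd if=/dev/zero of=a bs=1M count=1 2>/dev/null
    cp a b && echo x >>b
    # cmp reads both files until it hits a difference (here, EOF on a),
    # so essentially the whole 1MB is scanned before anything is reported.
    time cmp a b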

jlh
