Hi list!

I'm getting bad performance from 'git status' when I have staged many
changes to big files. For example, consider this:

$ git init
Initialized empty Git repository in $HOME/test/.git/
$ for X in $(seq 100); do dd if=/dev/zero of=$X bs=1M count=1 2> /dev/null; done
$ git add .
$ git commit -m 'Lots of zeroes'
Created initial commit ed54346: Lots of zeroes
 100 files changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 1
 create mode 100644 10
...
 create mode 100644 98
 create mode 100644 99
$ for X in $(seq 100); do echo > $X; done
$ time git status
# On branch master
# Changed but not updated:
#   (use "git add <file>..." to update what will be committed)
#
#	modified:   1
#	modified:   10
...
#	modified:   98
#	modified:   99
#
# no changes added to commit (use "git add" and/or "git commit -a")

real	0m0.003s
user	0m0.001s
sys	0m0.002s
$ git add -u
$ time git status
# On branch master
# Changes to be committed:
#   (use "git reset HEAD <file>..." to unstage)
#
#	modified:   1
#	modified:   10
...
#	modified:   98
#	modified:   99
#

real	0m16.291s
user	0m16.054s
sys	0m0.221s

Both runs of 'git status' report the same set of modified files; the
only difference is that the second time the changes are staged instead
of unstaged. Why does it take 16 seconds the second time when it's
instant the first time?

(Side note: there was once a discussion about listing branch names in
natural order, but it seems that never made it into git. The same
ordering would make sense for the file lists in 'git status' too.)

Cheers,
jlh
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
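To make the side note concrete, here is a minimal sketch of the "natural
order" I mean, assuming a GNU userland ('sort -V', version sort, is a
GNU coreutils extension and may not exist on other systems):

```shell
# Plain lexicographic sort interleaves the numeric names,
# which is the ordering 'git status' shows today:
printf '%s\n' 1 10 2 98 99 | sort
# prints: 1 10 2 98 99 (one per line)

# Version sort orders them the way a human would read them:
printf '%s\n' 1 10 2 98 99 | sort -V
# prints: 1 2 10 98 99 (one per line)
```

The same version-style comparison applied to the file (and branch) lists
would put "2" before "10" in the output above.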