Thanks for letting me know and for the corrections too.
Cheers,
tamas
On 7/29/19 9:50 PM, Jeff King wrote:
On Mon, Jul 29, 2019 at 04:19:47PM +0200, Tamas Papp wrote:
Generate 100k file into a repository:
#!/bin/bash
rm -rf .git test.file
git init
git config user.email a@b
git config user.name c
time for i in {1..100000}
do
[ $((i % 2)) -eq 1 ] && echo "$i" >test.file || echo 0 >test.file
git add test.file
git commit -m "$i committed"
done
I lost patience kicking off two hundred thousand processes. Try this:
for i in {1..100000}
do
echo "commit HEAD"
echo "committer c <a@b> $i +0000"
echo "data <<EOF"
echo "$i committed"
echo "EOF"
echo
done | git fast-import
which runs much faster. This doesn't change any files in each commit,
but I don't think it's necessary for what you're showing (name-rev
wouldn't ever look at the trees).
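For anyone wanting to sanity-check the generated history first, here is a sketch of the same fast-import trick scaled down to 1000 commits; the /tmp/nr-demo path and the explicit refs/heads/master branch are illustrative choices, not anything from the thread:

```shell
# Sketch: same fast-import approach, scaled down to 1000 commits for a
# quick check.  /tmp/nr-demo and refs/heads/master are assumed names.
rm -rf /tmp/nr-demo
git init -q /tmp/nr-demo
cd /tmp/nr-demo
for i in $(seq 1 1000)
do
	echo "commit refs/heads/master"
	echo "committer c <a@b> $i +0000"
	echo "data <<EOF"
	echo "$i committed"
	echo "EOF"
	echo
done | git fast-import --quiet
git rev-list --count master    # should print 1000
```

Each commit in the stream carries only a committer line and a message, so no trees or blobs are written, which is exactly why this runs so much faster than 100k `git add`/`git commit` process pairs.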
Run git on it:
$ git name-rev a20f6989b75fa63ec6259a988e38714e1f5328a0
Anybody who runs your script will get a different sha1 because of the
change in timestamps. I guess this is HEAD, though. I also needed to
have an actual tag to find. So:
git tag old-tag HEAD~99999
git name-rev HEAD
segfaults for me.
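Putting the pieces above together, something like the following sketch reproduces the setup end to end; /tmp/nr-deep is an assumed scratch path, and whether name-rev actually crashes depends on your git version and stack limit, so the stack is shrunk in a subshell to keep any crash contained:

```shell
# Sketch of a self-contained reproduction.  /tmp/nr-deep is an assumed
# scratch path; the crash itself depends on git version and stack size.
rm -rf /tmp/nr-deep
git init -q /tmp/nr-deep
cd /tmp/nr-deep
for i in $(seq 1 100000)
do
	printf 'commit refs/heads/master\n'
	printf 'committer c <a@b> %d +0000\n' "$i"
	printf 'data <<EOF\n%d committed\nEOF\n\n' "$i"
done | git fast-import --quiet
git tag old-tag master~99999
# Lower the stack limit in a subshell so a segfault stays contained.
( ulimit -s 1024; git name-rev master~1 ); echo "name-rev exited with $?"
```

An exit status of 139 from the subshell would indicate death by SIGSEGV.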
Could you comment on it?
This is a known issue. The algorithm used by name-rev is recursive, and
you can run out of stack space in some deep cases. There's more
discussion in this thread:
https://public-inbox.org/git/6a4cbbee-ffc6-739b-d649-079ba01439ca@xxxxxxxxx/
including some patches that document the problem with an expected
failure in our test suite. Nobody has actually rewritten the C code yet,
though.
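The failure mode itself is easy to show in isolation: any unbounded recursion dies once the stack limit is hit. A toy sketch (this is not name-rev's code, just the same mechanism; the 512kB limit is an arbitrary small value):

```shell
# Toy illustration of stack exhaustion through recursion.  The trailing
# ':' keeps the call from being a tail call a shell might optimize away.
recurse() { recurse; :; }
( ulimit -s 512; recurse ) 2>/dev/null
status=$?
echo "recursion died with status $status"   # non-zero; often 139 (SIGSEGV)
```

Rewriting name-rev to use an explicit stack instead of C recursion would make its memory use proportional to the history depth on the heap rather than the (much smaller) process stack.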
-Peff