On Thursday 08 July 2010, Theodore Tso wrote:
> On Jul 7, 2010, at 1:45 PM, Jeff King wrote:
> > And of course it's just complex, and I tend to shy away from
> > complexity when I can. The question to me comes back to (1)
> > above. Is massive clock skew a breakage that should produce a few
> > incorrect results, or is it something we should always handle?
>
> Going back to the question that kicked off this thread, I wonder if
> there is some way that caching could be used to speed up all cases,
> or at least the edge cases, without imposing as much latency as
> tracking the max skew? i.e., something like gitk's gitk.cache file.
> For bonus points, it could be a cache file that is used by gitk as
> well as git tag --contains, git branch --contains, and git name-rev.
>
> Does that sound like a reasonable idea?

Here's a quick-and-dirty POC which builds a mapping from commits to
their children and stores it using git notes [1], and then uses that to
implement 'git tag --contains <commit>' by traversing _forwards_ from
<commit> and printing all tags we encounter along the way [2].

[1]: The attached "build_childnotes.py" script builds this mapping.
Invoke it as follows:

  git log --all --format="%H,%P" | ./build_childnotes.py | git fast-import

[2]: The attached "git_tag_contains.py" script traverses the notes,
printing out tags along the way. Invoke it as follows:

  git_tag_contains.py <commit>

The second script is way too slow, and really needs to use
"git cat-file --batch" to avoid forking a process for every commit in
history...

...Johan

-- 
Johan Herland, <johan@xxxxxxxxxxx>
www.herland.net
Attachment:
build_childnotes.py
Description: application/python
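For readers without the attachment, here is a minimal sketch of what a script like build_childnotes.py could look like. It is not the attached script: the function names, the refs/notes/children ref, and the committer identity are all assumptions. It reads the `git log --all --format="%H,%P"` lines from stdin, inverts the parent links into a child mapping, and emits a `git fast-import` stream that records each commit's children as a note.

```python
#!/usr/bin/env python
# Hypothetical sketch (not the attached script): invert "sha,parents"
# lines into a child map and emit a fast-import stream storing it as
# notes on the assumed ref refs/notes/children.
import sys
import time


def build_child_map(lines):
    """Map parent sha -> list of child shas, from '%H,%P' lines."""
    children = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        sha, _, parents = line.partition(",")
        for parent in parents.split():
            children.setdefault(parent, []).append(sha)
    return children


def emit_fast_import(children, out=sys.stdout):
    """Write one notes commit; each note body lists the commit's children."""
    now = int(time.time())
    out.write("commit refs/notes/children\n")
    out.write("committer build_childnotes <none> %d +0000\n" % now)
    msg = "Child mapping generated by build_childnotes.py"
    out.write("data %d\n%s\n" % (len(msg), msg))
    for parent, kids in children.items():
        note = "\n".join(kids) + "\n"
        # 'N inline <commit>' attaches an inline-data note to <commit>.
        out.write("N inline %s\n" % parent)
        out.write("data %d\n%s" % (len(note), note))


if __name__ == "__main__":
    emit_fast_import(build_child_map(sys.stdin))
```

Piping its output into `git fast-import`, as in the invocation above, would then materialize the child mapping under the notes ref in one pass.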
Attachment:
git_tag_contains.py
Description: application/python
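Likewise, a rough sketch of the traversal side, again an assumption rather than the attached git_tag_contains.py: it resolves all (peeled) tags up front, then walks forwards from the given commit via the assumed refs/notes/children mapping, printing any tag it reaches. It deliberately shows the naive one-fork-per-commit lookup that the email calls out as too slow; a faster version would stream note lookups through a single `git cat-file --batch` process instead.

```python
#!/usr/bin/env python
# Hypothetical sketch (not the attached script): forward traversal over
# the assumed refs/notes/children mapping, printing tags along the way.
import subprocess
import sys


def git(*args):
    """Run a git command and return its stdout as text."""
    return subprocess.run(["git"] + list(args),
                          capture_output=True, text=True).stdout


def parse_tag_refs(lines):
    """Map pointed-to commit sha -> tag name, preferring the peeled sha."""
    tags = {}
    for line in lines:
        sha, peeled, name = line.split(" ", 2)
        tags[peeled or sha] = name  # lightweight tags have no peeled sha
    return tags


def tag_map():
    fmt = "%(objectname) %(*objectname) %(refname:short)"
    return parse_tag_refs(
        git("for-each-ref", "--format=" + fmt, "refs/tags").splitlines())


def children_of(sha):
    # One fork per commit: this is the slow part the email mentions.
    return git("notes", "--ref=children", "show", sha).split()


def tags_containing(start):
    tags = tag_map()
    seen, todo = set(), [start]
    while todo:
        sha = todo.pop()
        if sha in seen:
            continue
        seen.add(sha)
        if sha in tags:
            print(tags[sha])
        todo.extend(children_of(sha))


if __name__ == "__main__" and len(sys.argv) > 1:
    tags_containing(git("rev-parse", sys.argv[1]).strip())
```

Since the child notes only cover history as of the last cache build, a traversal like this would report stale results until the notes ref is regenerated, which matches the cache-file trade-off discussed above.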