Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx> writes:

> On Wed, 3 Oct 2007, Jeff King wrote:
>>
>> Try profiling the code, and you will see that the creation of the
>> hashes is totally dwarfed by the comparisons.  So yes, you might be
>> able to speed up the creation code, but it's going to have a minimal
>> impact on the overall run time.
>
> Yeah.  Oprofile is your friend.

Well, and if -Oprofile has no strong opinion, I'd let wc -l pitch in.

When we are not actually going to use the hash tables as hash tables,
why create them as such?  If the first thing that actually looks at
the values of the hashes (except possibly for the optimization of not
storing the same hash twice in succession) is the sort, then there is
no code that can go out of whack when confronted with degenerate
data.

Maybe it's not much of an optimization, but it certainly should be a
cleanup.  Something along the lines of the sketch at the end of this
message.

--
David Kastrup, Kriemhildstr. 15, 44793 Bochum
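
(What follows is not git's xdiff code, just a self-contained sketch of
the idea under stated assumptions: line_hash() is a hypothetical
stand-in for whatever hash function the real code uses, and the input
is a toy array of lines.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch only: hash each line into a flat array of records and let
 * qsort() bring equal hashes together.  No buckets, no chains, so
 * degenerate input (many identical lines) has no per-bucket data
 * structure to blow up.
 */

struct rec {
	unsigned long hash;	/* hash of the line's text */
	size_t lineno;		/* original position of the line */
};

/* djb2-style string hash; a stand-in for the real hash function */
static unsigned long line_hash(const char *s, size_t len)
{
	unsigned long h = 5381;
	while (len--)
		h = h * 33 + (unsigned char)*s++;
	return h;
}

static int cmp_rec(const void *a, const void *b)
{
	const struct rec *ra = a, *rb = b;
	if (ra->hash != rb->hash)
		return ra->hash < rb->hash ? -1 : 1;
	return ra->lineno < rb->lineno ? -1 : ra->lineno > rb->lineno;
}

int main(void)
{
	const char *lines[] = { "foo", "bar", "foo", "baz", "foo" };
	size_t n = sizeof(lines) / sizeof(*lines), i;
	struct rec *recs = malloc(n * sizeof(*recs));

	if (!recs)
		return 1;
	for (i = 0; i < n; i++) {
		recs[i].hash = line_hash(lines[i], strlen(lines[i]));
		recs[i].lineno = i;
	}
	/* The sort is the first thing to look at the hash values. */
	qsort(recs, n, sizeof(*recs), cmp_rec);

	/* Equal hashes are now adjacent; duplicates show up as runs. */
	for (i = 0; i < n; i++)
		printf("hash %08lx  line %zu: %s\n",
		       recs[i].hash, recs[i].lineno,
		       lines[recs[i].lineno]);
	free(recs);
	return 0;
}

The point of the flat array is that degenerate data, say thousands of
identical lines, merely ends up as one long run of equal hashes after
the sort; there is no chain or bucket that can grow without bound.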