Nicolas Pitre <nico@xxxxxxx> writes:

> On Mon, 10 Dec 2007, Jon Smirl wrote:
>
>> Running oprofile during my gcc repack shows this loop as the hottest
>> place in the code by far.
>
> Well, that is kind of expected.
>
>> I added some debug printfs which show that I have a 100,000+ run of
>> identical hash entries.  Processing the 100,000 entries also causes
>> RAM consumption to explode.
>
> That is impossible.  If you look at the code where those hash entries
> are created in create_delta_index(), you'll notice a hard limit of
> HASH_LIMIT (currently 64) is imposed on the number of identical hash
> entries.

Well, "impossible" is a strong word to use with respect to code: bugs
are always possible.  However, we have the assertion

    assert(packed_entry - (struct index_entry *)mem == entries);

in create_delta_index(), and since the culling pass decrements
"entries" for every hash entry it throws away, that assertion gives a
pretty strong guarantee that the culling is effective: the number of
entries actually packed into the index has to match the post-culling
count.  So at least the overall _number_ of entries should be
consistent.  If there is a bug, it might be that the entries get
garbled or mislinked rather than left unculled.  It is also not clear
how this loop could make RAM consumption explode.
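
Just to illustrate the mechanism, here is a minimal, self-contained
sketch of the culling idea.  This is not the actual diff-delta.c code:
the struct layout, the cull_bucket() helper and the demo in main() are
made up for this example, and only the HASH_LIMIT cap of 64 corresponds
to the real code.  The point is that a per-bucket cap bounds even a
pathological 100,000-entry run, and that keeping the running entry
count in sync with the culling is exactly what the assertion above can
later cross-check.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

#define HASH_LIMIT 64

struct entry {
	struct entry *next;
	unsigned int ptr;	/* stand-in for a source-buffer offset */
};

/*
 * Thin one hash bucket down to at most HASH_LIMIT entries, keeping
 * the survivors roughly evenly spaced along the chain.  Returns the
 * number of entries discarded so the caller can adjust its running
 * total and keep it consistent with what is actually left.
 */
static unsigned int cull_bucket(struct entry **bucket, unsigned int count)
{
	struct entry **pp = bucket;
	unsigned int stride = count / HASH_LIMIT + 1;
	unsigned int kept = 0, removed = 0, i = 0;

	if (count <= HASH_LIMIT)
		return 0;

	while (*pp) {
		if (i++ % stride == 0 && kept < HASH_LIMIT) {
			kept++;
			pp = &(*pp)->next;	/* keep this entry */
		} else {
			struct entry *victim = *pp;
			*pp = victim->next;	/* unlink and discard */
			free(victim);
			removed++;
		}
	}
	return removed;
}

int main(void)
{
	struct entry *bucket = NULL;
	unsigned int entries = 0, i;

	/* Build a pathological bucket: 100,000 entries, one hash. */
	for (i = 0; i < 100000; i++) {
		struct entry *e = malloc(sizeof(*e));
		if (!e)
			return 1;
		e->ptr = i;
		e->next = bucket;
		bucket = e;
		entries++;
	}

	entries -= cull_bucket(&bucket, entries);

	/* Whatever survived, the count and the chain must agree. */
	assert(entries <= HASH_LIMIT);
	printf("entries after culling: %u\n", entries);
	return 0;
}

The real index creation does a similar uniform thinning (so that the
surviving entries still cover the whole reference buffer) and then
packs the entries into one memory block, which is where the assertion
fires if the count and the packed data ever disagree.

--
David Kastrup, Kriemhildstr. 15, 44793 Bochum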