On 24.10.2011 09:07, Junio C Hamano wrote:
> Thanks.
>
> This approach may be the most appropriate for the maintenance track, but
> for the purpose of going forward, I wonder if we really want to keep the
> "estimate and allocate a large pool, and carve out individual pieces".
>
> This bulk-allocate dates back to the days when we didn't have ondisk vs
> incore representation differences, IIRC, and as the result we deliberately
> leak cache entries whenever an entry in the index is replaced with a new
> one. Does the overhead to allocate individually really kill us that much
> for say a tree with 30k files in it?

Probably not; unpack_trees() does that already.  (It calls
create_ce_entry() via unpack_nondirectories() via unpack_callback() via
traverse_trees().)

René