Thanks.

This approach may be the most appropriate for the maintenance track, but going forward, I wonder if we really want to keep the "estimate and allocate a large pool, and carve out individual pieces" approach. This bulk-allocate dates back to the days when we did not have on-disk vs in-core representation differences, IIRC, and as a result we deliberately leak cache entries whenever an entry in the index is replaced with a new one.

Does the overhead of allocating individually really kill us that much for, say, a tree with 30k files in it?
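
For concreteness, here is a minimal sketch of the two strategies being compared. The struct and function names are made up for illustration and are not git's actual data structures or API:

    #include <stdio.h>
    #include <stdlib.h>

    struct entry {
    	unsigned int mode;
    	char name[64];	/* fixed-size only to keep the sketch short */
    };

    /*
     * Bulk strategy: estimate the total size up front, allocate one
     * big pool, and carve individual entries out of it.  Entries
     * cannot be freed one at a time; a replaced entry is simply
     * leaked until the whole pool is discarded.
     */
    struct pool {
    	char *base;
    	size_t used, size;
    };

    static struct entry *pool_alloc_entry(struct pool *p)
    {
    	struct entry *e;
    	if (p->used + sizeof(*e) > p->size)
    		return NULL;	/* the up-front estimate was too small */
    	e = (struct entry *)(p->base + p->used);
    	p->used += sizeof(*e);
    	return e;
    }

    /*
     * Individual strategy: one allocation per entry, so each entry
     * can be freed on its own when it is replaced, at the cost of
     * one allocator round-trip per entry.
     */
    static struct entry *alloc_entry(void)
    {
    	return calloc(1, sizeof(struct entry));
    }

    int main(void)
    {
    	size_t nr = 30000;	/* e.g. a tree with 30k files */
    	struct pool p;
    	size_t i;

    	/* bulk: one malloc() for everything, one free() at the end */
    	p.size = nr * sizeof(struct entry);
    	p.used = 0;
    	p.base = malloc(p.size);
    	for (i = 0; i < nr; i++) {
    		struct entry *e = pool_alloc_entry(&p);
    		snprintf(e->name, sizeof(e->name), "file-%zu", i);
    	}
    	free(p.base);

    	/* individual: nr allocations, each independently freeable */
    	for (i = 0; i < nr; i++) {
    		struct entry *e = alloc_entry();
    		snprintf(e->name, sizeof(e->name), "file-%zu", i);
    		free(e);	/* can be freed when replaced */
    	}
    	return 0;
    }

The bulk version trades 30k allocator calls for one, which is the overhead in question, but it is exactly what forces the deliberate leak on replacement.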