Johannes Sixt wrote:
> On 10/7/2010 22:28, Jonathan Nieder wrote:
>> | For a command (like filter-branch --subdirectory-filter) that wants
>> | to commit a lot of trees that already exist in the object db, writing
>> | undeltified objects as loose files only to repack them later can
>> | involve a significant amount[*] of overhead.
>
> 1. But when an object already exists in the db, it won't be written again,
> will it?

In David's application, the trees already exist, but the commits are
new.

> 2. Even though fast-import puts all (new) objects into a pack file, the
> pack is heavily sub-optimal, and you should repack -f anyway. So what's
> the point? Only to avoid a loose object?

To avoid thousands of loose objects.

> (I'm not saying that the patch is unwanted, but only that the
> justification is still not sufficiently complete.)

No problem - these questions are useful. If it turns out that something
else is responsible for the speedup David observed in his script,
learning that would not be a bad outcome after all.

I suppose supporting M 040000 <tree> "" and C <path> "" could still be
a good idea in that case anyway, for the convenience of front-end
authors (see the sketch below).

Jonathan
who still hasn't reviewed the patch (sorry)
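P.S. For concreteness, here is a rough sketch of the kind of stream a
front end could send once the empty path is accepted. The syntax is the
proposed one, not something fast-import understands today, and the tree
id shown is the well-known empty-tree object, standing in for whatever
existing tree the front end wants to commit:

    commit refs/heads/rewritten
    mark :1
    committer A U Thor <author@example.com> 1286485690 +0200
    data 17
    Rewritten commit
    M 040000 4b825dc642cb6eb9a060e54bf8d69288fbee4904 ""

One such commit per already-existing tree, with no blob or tree content
sent at all; the new commit objects land directly in the pack that
fast-import is writing, instead of as thousands of loose files.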