On Wed, 18 Apr 2007, Rogan Dawes wrote:

> Right. I would imagine that the script would have to take care of setting
> timestamps in the filesystem appropriately, as well as passing them back to
> git when queried.
>
> e.g. expanding test.odf/: (since we store it as a directory)
>
> git calls "odf.sh checkout test.odf/ <sha1> <perms> <stat>"
>
> odf checkout calls back into git to find out the details of the files under
> test.odf/, and creates a zip file containing the individual files, with
> appropriate timestamps.

Why would you need to store the document as multiple files in Git?

The only reasons I can see for external filters are:

1) Normalization, e.g. the LF->CRLF thing. Some might want to do keyword
   expansion, which would fall into this category as well.

2) Better archiving with Git's deltas. That means storing files
   uncompressed in Git, since Git will compress them anyway after the
   significant space reduction due to deltas, which cannot occur on
   already compressed data.

So if your .odf file is actually a zip containing multiple files, then all
you have to do is convert that zip archive into an uncompressed tar archive
on checkin, and perform the reverse transformation on checkout. The
uncompressed tar content will delta well, the Git repository will be small,
and no tricks with the index will be needed.

Or am I missing something?

Nicolas
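For illustration, here is a rough sketch in Python of the repacking step
described above (the function names are mine, not an existing tool; a real
setup would wire something like this into the checkin/checkout filters;
also note that a strictly valid .odf requires its "mimetype" entry to be
the first zip member, stored uncompressed, which this sketch ignores):

```python
import io
import tarfile
import zipfile

def zip_to_tar(zip_bytes):
    """Checkin direction: repack a zip archive (e.g. an .odf file) as an
    uncompressed tar.  Uncompressed content deltas well, and Git will
    compress the result on its own."""
    out = io.BytesIO()
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf, \
         tarfile.open(fileobj=out, mode="w") as tf:   # "w" = no compression
        for name in zf.namelist():
            data = zf.read(name)
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tf.addfile(info, io.BytesIO(data))
    return out.getvalue()

def tar_to_zip(tar_bytes):
    """Checkout direction: rebuild a (deflated) zip archive from the tar."""
    out = io.BytesIO()
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as tf, \
         zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        for member in tf.getmembers():
            if member.isfile():
                zf.writestr(member.name, tf.extractfile(member).read())
    return out.getvalue()
```

The point is simply that the transformation is a lossless repacking of the
same member files, so no index trickery is needed: Git only ever sees one
blob per document.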