Nicolas Pitre wrote:
On Wed, 18 Apr 2007, Rogan Dawes wrote:
Right. I would imagine that the script would have to take care of setting
timestamps in the filesystem appropriately, as well as passing them back to
git when queried.
e.g. expanding test.odf/ (since we store it as a directory):
git calls "odf.sh checkout test.odf/ <sha1> <perms> <stat>"
odf.sh checkout calls back into git to find out the details of the files
under test.odf/, and creates a zip file containing the individual files,
with appropriate timestamps.
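A rough sketch of what such a handler might look like, purely to
illustrate the idea: the "odf.sh checkout" calling convention above is
invented, the callbacks into git use plumbing that really exists
(ls-tree and cat-file), and details like filenames with spaces and
timestamp handling are glossed over:

    #!/bin/sh
    # odf.sh -- hypothetical handler: "checkout <path> <tree-sha1> ..."
    # rebuilds a .odf (zip) file from the files git stores under <path>
    cmd=$1 path=$2 tree=$3
    test "$cmd" = checkout || exit 1
    tmp=$(mktemp -d) || exit 1
    trap 'rm -rf "$tmp"' EXIT
    # ask git which blobs live under the stored directory
    git ls-tree -r "$tree" | while read mode type sha file
    do
        mkdir -p "$tmp/$(dirname "$file")"
        git cat-file blob "$sha" >"$tmp/$file"
    done
    # repack them into the zip the application expects
    (cd "$tmp" && zip -qr - .) >"$path"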
Why would you need to store the document as multiple files in Git?
The only reasons I can see for external filters are:
1) Normalization, e.g. the LF->CRLF thing.
Some might want to do keyword expansion, which would fall into this
category as well (a small attributes example follows the list).
2) Better archiving with Git's deltas.
That means storing files uncompressed in Git, since Git will
compress them anyway, after significant space reduction from
deltas, which cannot occur on already compressed data.
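For what it's worth, category 1) is exactly what the attributes
mechanism under discussion aims at. Hedging, since the syntax is still
being settled, it might look along these lines:

    # .gitattributes
    *.txt  crlf     # normalize line endings: stored as LF in the repo
    *.c    ident    # expand $Id$ to the blob sha1 on checkout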
So if your .odf file is actually a zip with multiple files, then all you
have to do is convert that zip archive into an uncompressed tar
archive on checkin, and apply the reverse transformation on checkout.
The uncompressed tar content will delta well, the Git archive will be
small, and no tricks with the index will be needed.
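For concreteness, a sketch of how that could be wired up, assuming the
attribute-driven clean/smudge filters being discussed end up roughly as
proposed; the driver name "odf" and the two script names are invented:

    # .gitattributes
    *.odf  filter=odf

    # repository config
    [filter "odf"]
        clean  = zip2tar
        smudge = tar2zip

The two converters could then be as small as:

    #!/bin/sh
    # zip2tar -- clean direction: zip archive on stdin,
    # uncompressed tar (which deltas well) on stdout
    tmp=$(mktemp -d) || exit 1
    trap 'rm -rf "$tmp"' EXIT
    cat >"$tmp/in.zip"
    mkdir "$tmp/x" && unzip -q "$tmp/in.zip" -d "$tmp/x"
    tar -C "$tmp/x" -cf - .

    #!/bin/sh
    # tar2zip -- smudge direction: uncompressed tar on stdin, zip on stdout
    tmp=$(mktemp -d) || exit 1
    trap 'rm -rf "$tmp"' EXIT
    mkdir "$tmp/x" && tar -C "$tmp/x" -xf -
    (cd "$tmp/x" && zip -qr - .)

A robust version would also normalize tar metadata (entry order,
timestamps) so repeated checkins of the same zip produce identical
bytes, and would honour the ODF rule that the mimetype member comes
first and uncompressed when rebuilding the zip.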
Or am I missing something?
Nicolas
Probably not! ;-)
I was just thinking that it would be easier to see diffs between
individual files, rather than between entries in a zip. But if we are
calling out to a specialized handler, the handler can do that just as
easily, and without the added complexity in the index, etc.
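For example, a per-path diff driver would cover this nicely, assuming
that part of the attributes work lands as well (the driver name and the
helper script are invented):

    # .gitattributes
    *.odf  diff=odf

    # repository config
    [diff "odf"]
        command = odf-diff   # invented helper: unzips both sides
                             # and diffs the individual members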
It also means that someone without the attributes and specialized
handler would not be able to use the file (if it is stored as a directory).
Clearly a bad idea! Just ignore me, I'm used to it! ;-)
Rogan