Re: Multiblobs

Many thanks for the clear and evidently very well-thought-out answer.
I wonder if I can take another minute of your time (and Avery's, and that of anybody else who is interested) to feed my curiosity a little more. I apologize in advance for possible mistakes in my understanding of git internals.

Jeff King wrote:

> And for both of those cases, the upside is a speed increase, but the
> downside is a breakage of the user-visible git model (i.e., blobs get
> different sha1's depending on how they've been split).
Is this different from what happens with clean/smudge filters? I wonder which hash a cleanable object gets: the hash of its cleaned version, or the hash of its original content? If it is the former, the hash already changes depending on whether the filter is used or is slightly modified, so I wonder whether an enhanced "clean" filter capable of splitting an object into a multiblob would really be different in this sense. If it is the latter, I again wonder whether such an enhanced "clean" filter could not do the same.
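
For what it is worth, one can check this empirically. A minimal sketch, assuming some *.odt clean/smudge filter is already configured and "report.odt" is just a placeholder name:

    git add report.odt
    git ls-files -s report.odt                      # blob id recorded in the index

    git hash-object --no-filters report.odt         # hash of the original bytes
    git hash-object --path=report.odt report.odt    # hash after the clean filter

The index entry matches the filtered hash, i.e. the stored blob is the cleaned version, not the original working-tree bytes.
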
> But being two
> years wiser than when I wrote the original message, I don't think that
> breakage is justified. Instead, you should retain the simple git object
> model, and consider on-the-fly content-specific splits. In other words,
> at rename (or delta) time notice that blob 123abc is a PDF, and that it
> can be intelligently split into several chunks, and then look for other
> files which share chunks with it. As a bonus, this sort of scheme is
> very easy to cache, just as textconv is. You cache the smart-split of
> the blob, which is immutable for some blob/split-scheme combination. And
> then you can even do rename detection on large blob 123abc without even
> retrieving it from storage.
Now I see why, for things like diffing, showing textual representations, or rename detection, caching can be much more practical. My initial list of "potential applications" was definitely too broad and vague.
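
Just to make the caching parallel concrete, this is roughly how the existing textconv cache can be enabled today (pdfinfo is only an example converter; any command that writes text to stdout would do):

    # .gitattributes
    *.pdf diff=pdf

    # configuration
    git config diff.pdf.textconv pdfinfo
    git config diff.pdf.cachetextconv true    # cache the conversion per blob

A cached smart-split scheme could presumably follow the same pattern: the expensive transformation is keyed on the immutable blob id and computed only once.
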
> Another benefit is that you still _store_ the original (you just don't
> look at it as often).
... but of course if you keep storing the original, I guess there is no advantage in storage efficiency.
> Which means there is no annoyance with perfectly
> reconstructing a file. I had originally envisioned straight splitting,
> with concatenation as the reverse operation. But I have seen things like
> zip and tar files mentioned in this thread. They are quite challenging,
> because it is difficult to reproduce them byte-for-byte.
I agree, but this is already being done. For instance, on odf and zip files, using clean filters that remove the compression can greatly improve the storage efficiency of the delta machinery included in git. And of course, re-creating the original file is potentially challenging, but most of the time it does not really matter. For instance, when I use this technique with odf files, I do not need to care whether the smudge filter recreates the original file byte-for-byte; the important thing is that it recreates a file that can then be cleaned back to the same thing (and this makes me think that cleanable objects get the sha1 of the cleaned blob, see above).
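
For the record, a minimal sketch of the kind of setup I mean; "repack-zip-store" and "repack-zip-deflate" are hypothetical helpers that read the archive on stdin and write it back on stdout, with stored (level 0) and deflated entries respectively:

    # .gitattributes
    *.odt filter=uncompress-odf

    # .git/config
    [filter "uncompress-odf"]
        clean  = repack-zip-store     # store the entries uncompressed, so delta compression works well
        smudge = repack-zip-deflate   # recompress the entries on checkout
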

In other words, we constantly emphasize that git is about tracking /content/. However, when you have a structured file and you want to track its /content/, most of the time you are not interested at all in the /envelope/ (e.g. the compression level of the odf/zip file): the content is what is inside (typically a tree-structured thing). Maybe scms could be made better at tracking structured files by providing an easy way to tell the scm how to discard the envelope.

In fact, having the hash of the structured file depend only on its real content (the inner tree or list of files/streams/whatever) seems to me to be completely respectful of the git model. This is why I originally thought that having enhanced filters, enabling the inner matter of a structured file to be stored as a multiblob, could make sense.
> The other application I saw in this thread is structured files where you
> actually _want_ to see all of the innards as individual files (e.g.,
> being able to do "git show HEAD:foo.zip/file.txt"). And for those, I
> don't think any sort of automated chunking is really desirable. If you
> want git to store and process those files individually, then you should
> provide them to git individually. In other words, there is no need for
> git to know or care at all that "foo.zip" exists, but you should simply
> feed it a directory containing the files. The right place to do that
> conversion is either totally outside of git, or at the edges of git
> (i.e., git-add and when git places the file in the repository).
Originally, I thought of creating wrappers for some git commands. However, things like "status" or "commit -a" seemed to me quite complicated to implement in a wrapper.
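
To make the quoted suggestion concrete, the "feed git the files individually" approach is already fully supported by plain git (names below are just placeholders):

    unzip foo.zip -d foo/
    git add foo/
    git commit -m "track the contents of foo.zip as a tree"

    git show HEAD:foo/file.txt     # per-file access, no chunking needed
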
> Our
> current hooks may not be sufficient, but that means those hooks should
> be improved, which to me is much more favorable than a scheme that
> alters the core of the git data model.
Having a sufficient number of hooks could help a lot. However, if I remember correctly, one of the reasons why the clean/smudge filters were introduced was precisely to avoid having to implement similar functionality with hooks.
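
Just to illustrate what doing the conversion "at the edges" with today's hooks could look like, a rough pre-commit sketch (naive about filenames with spaces, and the *.zip handling is only an example):

    #!/bin/sh
    # .git/hooks/pre-commit: whenever a *.zip is staged, also stage its
    # expanded contents next to it, so the inner files are tracked too.
    for zip in $(git diff --cached --name-only --diff-filter=AM -- '*.zip'); do
        dir="${zip%.zip}.d"
        rm -rf "$dir" && mkdir -p "$dir"
        unzip -q "$zip" -d "$dir"
        git add "$dir"
    done
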


Thanks in advance for the further explanations that might come!

Sergio
