Re: Git as a filesystem

Martin Langhoff wrote:
> On 9/22/07, Dmitry Potapov <dpotapov@xxxxxxxxx> wrote:
>   
>> used to create the original file. So, if you put any .deb file in such
>> a system, you will get back a different .deb file (with a different SHA1).
>> So, aside from high CPU and memory requirements, this system cannot work in
>> principle unless all users have exactly the same version of a compressor.
>>     
>
> Was thinking the same - compression machinery, ordering of the files,
> everything. It'd be a nightmare to ensure you get back the same .deb,
> without a single different bit.
>
> Debian packaging toolchain could be reworked to use a more GIT-like
> approach - off the top of my head, at least
>
>   - signing/validating the "tree" of the package rather than the
> completed package could allow the savings in distribution you mention,
> decouple the signing from the compression, and simplify things like
> debdiff
>
>   - git or git-like strategies for source packages
>   

Nightmare indeed.  I actually wrote a proof of concept for this idea for
gzip.

http://git.catalyst.net.nz/gw?p=git.git;a=shortlog;h=archive-blobs
(see also
http://planet.catalyst.net.nz/blog/2006/07/17/samv/xteddy_caught_consuming_rampant_amounts_of_disk_space)

I usually warn people that this undertaking is "slightly insane".

My implementation was designed to be called like "git-hash-object".
It looked at the input stream and quickly detected whether it appeared
to be a gzip stream.  If it was, it would decompress it and then try
recompressing the first few blocks with different compression libraries
and settings to work out which had originally been used.  If it could
find the right settings for the first meg or so, it would bank on the
rest being identical as well, record which compressor and settings were
used, and write the uncompressed object, together with the information
needed to reconstruct the gzip header, to a new type of object called
an "archive" object.  If the stream could not be reproduced, it would
save the raw stream instead.  For something like a Debian archive, it
is very likely that all compressed streams will be reproducible,
because they will almost all have been compressed with the same
implementation of gzip.
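
For illustration, a rough sketch of that probing step might look like
the C below.  It is not the original patch: it assumes zlib raw-deflate
output, the minimal 10-byte gzip header, and a made-up
probe_gzip_level() helper; a real version would parse the full RFC 1952
header and try other compressor implementations as well.

/*
 * Hypothetical sketch, not the original patch: probe which zlib
 * compression level reproduces the start of a gzip member.  Assumes
 * the minimal 10-byte gzip header with no extra fields; a real
 * implementation would parse the full header (RFC 1952).
 */
#include <string.h>
#include <zlib.h>

#define PROBE_RAW (1 << 20)		/* recompress ~1 MiB of raw data */

/*
 * Return the zlib level (1..9) whose raw-deflate output matches the
 * original compressed bytes, or -1 if none of them do.
 */
static int probe_gzip_level(const unsigned char *gz, size_t gz_len,
			    const unsigned char *raw, size_t raw_len)
{
	static unsigned char out[PROBE_RAW + 16384];
	size_t hdr = 10;			/* minimal gzip header */
	size_t probe = raw_len < PROBE_RAW ? raw_len : PROBE_RAW;
	int level;

	if (gz_len < hdr + 8 || gz[0] != 0x1f || gz[1] != 0x8b || gz[2] != 8)
		return -1;			/* not a deflate gzip stream */

	for (level = 1; level <= 9; level++) {
		z_stream z;
		size_t produced;

		memset(&z, 0, sizeof(z));
		/* windowBits = -15: raw deflate, as used inside gzip */
		if (deflateInit2(&z, level, Z_DEFLATED, -15, 8,
				 Z_DEFAULT_STRATEGY) != Z_OK)
			return -1;
		z.next_in = (unsigned char *)raw;
		z.avail_in = probe;
		z.next_out = out;
		z.avail_out = sizeof(out);
		deflate(&z, probe == raw_len ? Z_FINISH : Z_NO_FLUSH);
		produced = z.total_out;
		deflateEnd(&z);

		/* the recompressed prefix must match the original bytes */
		if (produced && produced <= gz_len - hdr &&
		    !memcmp(out, gz + hdr, produced))
			return level;
	}
	return -1;
}

If a level matches, the caller records it alongside the uncompressed
data; if none does, the original compressed bytes have to be stored
verbatim, as described above.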

For tar and .ar files, this can be slightly more deterministic, of
course.  The tool doesn't even need to be particularly savvy about what
all the fields mean - just locate the files in the .tar, write out a
tree, and then write a TOC that lists the tree entries and carries any
extra data (i.e. headers, etc); a sketch follows below.
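
Just to illustrate how little of the format needs to be understood, a
sketch along these lines (again hypothetical, assuming plain ustar
members, octal sizes, and no pax/GNU extended headers) already recovers
the member names and sizes such a TOC would list:

/*
 * Hypothetical sketch: walk the 512-byte header blocks of a ustar
 * archive and list each member's name and size -- the information a
 * "TOC plus tree entries" representation would need to record.
 * Ignores pax/GNU extended headers and base-256 sizes for brevity.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
	unsigned char hdr[512];
	FILE *f;

	if (argc < 2 || !(f = fopen(argv[1], "rb")))
		return 1;
	while (fread(hdr, 1, sizeof(hdr), f) == sizeof(hdr)) {
		char name[101] = { 0 };
		unsigned long size;

		if (hdr[0] == '\0')		/* end-of-archive block */
			break;
		memcpy(name, hdr, 100);		/* name: offset 0, 100 bytes */
		size = strtoul((char *)hdr + 124, NULL, 8); /* octal size field */
		printf("%10lu %s\n", size, name);
		/* member data is padded up to the next 512-byte boundary */
		fseek(f, (long)((size + 511) / 512 * 512), SEEK_CUR);
	}
	fclose(f);
	return 0;
}

Each member's data can then be hashed as an ordinary blob, with the
512-byte header block kept verbatim in the TOC so the original archive
can be reconstructed bit for bit.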

In hindsight, making a new object type was probably a mistake.  If I
were to undertake this again I would not go down that path, though I'd
certainly consider using tag objects for the extra data and throwing
them into the tree like submodules.  It would also be essential, in a
"real" solution, to bundle reference copies of the zlib and gzip
compressors (yes, their output streams differ for longer inputs, and
even for some short ones).

Sam.
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
