Re: [PATCH v2 4/4] bundle v3: the beginning

On Tue, Jun 07, 2016 at 12:23:40PM -0700, Junio C Hamano wrote:

> Christian Couder <christian.couder@xxxxxxxxx> writes:
> 
> > Git can store its objects only in the form of loose objects in
> > separate files or packed objects in a pack file.
> > To be able to better handle some kinds of objects, for example big
> > blobs, it would be nice if Git could store its objects in other object
> > databases (ODB).
> >
> > To do that, this patch series makes it possible to register commands,
> > using "odb.<odbname>.command" config variables, to access external
> > ODBs. Each specified command will then be called in the following ways:
> 
> Hopefully it is done via a cheap RPC instead of forking/execing the
> command for each and every object lookup.

This interface comes from my earlier patches, so I'll try to shed a
little light on the decisions I made there.
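
For concreteness, registration in the config would look something like
this (the odb name "magic" and the helper path are made up for the
example):

  [odb "magic"]
      command = /usr/local/bin/git-odb-magic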

Because this "external odb" essentially acts as a git alternate, we
would hit it only when we couldn't find an object through regular means.
Git would then make the object available in the usual on-disk format
(probably as a loose object).

So in most processes, we would not need to consult the odb command at
all. And when we do, the first thing would be to get its "have" list,
which would run at most once per process.
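
The flow I have in mind is roughly the Python sketch below. To be
clear, this is an illustration and not git's actual code; the helper
name is made up and error handling is omitted:

  # Illustration only, not git's implementation. "git-odb-magic" is a
  # made-up helper registered via odb.<odbname>.command.
  import hashlib, os, subprocess, zlib

  HELPER = ["git-odb-magic"]

  def have_list():
      # "<command> have": one "sha1 size type" triple per line; we only
      # need to run this once per process.
      out = subprocess.run(HELPER + ["have"], check=True,
                           capture_output=True, text=True).stdout
      return {sha1: (int(size), objtype)
              for sha1, size, objtype in (l.split() for l in out.splitlines())}

  def fault_in(sha1, objtype, gitdir=".git"):
      # "<command> get <sha1>": fetch the content and store it as an
      # ordinary loose object, so later lookups are purely local.
      data = subprocess.run(HELPER + ["get", sha1], check=True,
                            capture_output=True).stdout
      raw = b"%s %d\0" % (objtype.encode(), len(data)) + data
      assert hashlib.sha1(raw).hexdigest() == sha1  # sanity check; see below
      path = os.path.join(gitdir, "objects", sha1[:2], sha1[2:])
      os.makedirs(os.path.dirname(path), exist_ok=True)
      with open(path, "wb") as f:
          f.write(zlib.compress(raw))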

So the per-object cost is really calling "get", and my assumption there
was that the cost of actually retrieving the object over the network
would dwarf the fork/exec cost.

I also waffled on having git cache the output of "<command> have" in
some fast-lookup format to save even the single fork/exec. But I figured
that was something that could be added later if needed.

You'll note that this is sort of a "fault-in" model. Another model would
be to treat external odb updates similar to fetches. I.e., we touch the
network only during a special update operation, and then try to work
locally with whatever the external odb has. IMHO this policy could
actually be up to the external odb itself (i.e., its "have" command
could serve from a local cache if it likes).
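
For instance, a helper that wants fetch-like semantics might look
roughly like this toy sketch (the "update" verb, URL, and cache path
are all invented here, not part of the proposed interface; "get" and
"put" are omitted):

  # Toy helper whose "have" never touches the network; only an explicit
  # "update" run refreshes the cached manifest.
  import sys, urllib.request

  MANIFEST_URL = "https://example.com/odb/manifest"  # made up
  CACHE = "/var/cache/git-odb-magic/manifest"        # made up

  if sys.argv[1] == "update":
      urllib.request.urlretrieve(MANIFEST_URL, CACHE)
  elif sys.argv[1] == "have":
      with open(CACHE) as f:
          sys.stdout.write(f.read())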

> >   - "<command> have": the command should output the sha1, size and
> > type of all the objects the external ODB contains, one object per
> > line.
> 
> Why are size and type needed by the clients at this point?  That is
> more expensive to compute than just a bare list of object names.

Yes, but it lets git avoid doing a lot of "get" operations. For example,
in a regular diff without binary-diffs enabled, we can automatically
determine that a diff will be considered binary based purely on the size
of the objects (related to core.bigfilethreshold). So if we know the
sizes, we can run "git log -p" without faulting-in each of the objects
just to say "woah, that looks binary".
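
With the manifest sizes in hand, that decision is just a comparison
(a sketch; 512 MiB is the documented default of core.bigFileThreshold):

  # An object whose manifest size exceeds the threshold can be treated
  # as binary without ever running "<command> get".
  BIG_FILE_THRESHOLD = 512 * 1024 * 1024  # core.bigFileThreshold default

  def treat_as_binary(size_from_manifest):
      return size_from_manifest > BIG_FILE_THRESHOLD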

One can accomplish this with .gitattributes, too, of course, but the
size thing just works out of the box.

There are other places where it will come in handy, too. E.g., when
fscking a tree object you have, you want to make sure that the object
referred to with mode 100644 is actually a blob.

I also don't think the cost to compute size and type on the server is
all that important. Yes, if you're backing your external odb with a git
repository that runs "git cat-file" on the fly, it is more expensive.
But in practice, I'd expect the server side to create a static manifest
and serve it over HTTP (this also gives the benefit of things like
ETags).
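
The manifest itself could be just the "have" output, one object per
line, e.g. (hashes and sizes made up):

  0f1e2d3c4b5a69788796a5b4c3d2e1f00f1e2d3c 104857600 blob
  deadbeefdeadbeefdeadbeefdeadbeefdeadbeef 52428800 blob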

> >   - "<command> get <sha1>": the command should then read from the
> > external ODB the content of the object corresponding to <sha1> and
> > output it on stdout.
> 
> The type and size should be given at this point.

I don't think there's a reason not to; I didn't include them here
because they would be redundant with what Git already knows from the
"have" manifest it receives above.

> >   - "<command> put <sha1> <size> <type>": the command should then read
> > from stdin an object and store it in the external ODB.
> 
> Is the ODB required to sanity-check that <sha1> matches what the data
> hashes down to?

I think that would be up to the ODB, but it does seem like a good idea.

Likewise, I'm not sure if "get" should be allowed to return contents
that don't match the sha1. That would be fine for things like "diff",
but would probably make "fsck" unhappy.
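
Either way, the check is cheap: an object id is just the sha1 of a
small header plus the payload, so a helper (or git) can verify it with
a few lines:

  # Verify that data claimed to be <sha1> really hashes to it; no git
  # machinery needed.
  import hashlib

  def object_id(objtype, data):
      hdr = b"%s %d\0" % (objtype.encode(), len(data))
      return hashlib.sha1(hdr + data).hexdigest()

  # e.g. object_id("blob", b"hello\n")
  #   == "ce013625030ba8dba906f756967f9e9ca394464a"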

> If this thing is primarily to offload large blobs, you might also
> want not "get" but "checkout <sha1> <path>" to bypass Git entirely,
> but I haven't thought it through.

My mental model is that the external odb gets the object into the local
odb, and then you can use the regular streaming-checkout code path. And
the local odb serves as your cache.

That does mean you might have two copies of each object (one in the odb,
and one in the working tree), as opposed to a true cacheless system,
which can get away with one.

I think you could do that cacheless thing with the interface here,
though; the "get" operation can stream, and you can stream directly to
the working tree.

-Peff