Re: [PATCH v2 4/4] bundle v3: the beginning

Jeff King <peff@xxxxxxxx> writes:

> This interface comes from my earlier patches, so I'll try to shed a
> little light on the decisions I made there.
>
> Because this "external odb" essentially acts as a git alternate, we
> would hit it only when we couldn't find an object through regular means.
> Git would then make the object available in the usual on-disk format
> (probably as a loose object).
>
> So in most processes, we would not need to consult the odb command at
> all. And when we do, the first thing would be to get its "have" list,
> which would at most run once per process.
>
> So the per-object cost is really calling "get", and my assumption there
> was that the cost of actually retrieving the object over the network
> would dwarf the fork/exec cost.

OK, presented that way, the design makes sense (I do not know if
Christian's (revised) design and implementation does or not, though,
as I haven't seen it).

As "check for non-existence" is important and costly, grabbing
"have" once is a good strategy, just like we open the .idx files of
available packfiles.
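
To illustrate that cost model (purely a sketch; the helper name, the
"have" line format, and the local-lookup stub below are assumptions,
not part of any proposed interface), a process would run "<command>
have" at most once, cache the result, and fork "<command> get" only
on an actual miss:

    # Sketch only: per-process cache of the external odb's "have" list,
    # consulted only after the normal object lookup fails.
    import subprocess

    ODB_COMMAND = ["my-odb-helper"]     # hypothetical helper name
    _have_cache = None                  # sha1 -> (size, type)

    def odb_have():
        global _have_cache
        if _have_cache is None:         # run "have" at most once per process
            out = subprocess.run(ODB_COMMAND + ["have"],
                                 capture_output=True, text=True,
                                 check=True).stdout
            _have_cache = {}
            for line in out.splitlines():
                sha1, size, objtype = line.split()
                _have_cache[sha1] = (int(size), objtype)
        return _have_cache

    def odb_get(sha1):
        # fetch the raw contents; the real thing would then store them
        # in the usual on-disk format (probably as a loose object)
        return subprocess.run(ODB_COMMAND + ["get", sha1],
                              capture_output=True, check=True).stdout

    def lookup(sha1):
        obj = lookup_local(sha1)        # loose objects and packs first
        if obj is not None:
            return obj
        if sha1 in odb_have():          # cheap membership test, no fork
            return odb_get(sha1)        # fork/exec only on a real miss
        return None

    def lookup_local(sha1):
        return None                     # stand-in for git's normal lookup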

>> >   - "<command> have": the command should output the sha1, size and
>> > type of all the objects the external ODB contains, one object per
>> > line.
>> 
>> Why size and type at this point is needed by the clients?  That is
>> more expensive to compute than just a bare list of object names.
>
> Yes, but it lets us avoid doing a lot of "get" operations.

OK, so it is more like having the richer information in a pack-v4 index ;-)
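
(Continuing the sketch above: with size and type already in the cached
"have" list, a type or size query, the kind "git cat-file -t" or "-s"
would make, can be answered without forking "get" at all.  The function
names are again invented for the illustration:

    def odb_object_size(sha1):
        entry = odb_have().get(sha1)    # cached list, no network access
        return entry[0] if entry else None

    def odb_object_type(sha1):
        entry = odb_have().get(sha1)
        return entry[1] if entry else None
)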

>> >   - "<command> put <sha1> <size> <type>": the command should then read
>> > from stdin an object and store it in the external ODB.
>> 
>> Is ODB required to sanity check that <sha1> matches what the data
>> hashes down to?
>
> I think that would be up to the ODB, but it does seem like a good idea.
>
> Likewise, I'm not sure if "get" should be allowed to return contents
> that don't match the sha1.

Yes, this is what I was getting at.  It would be ideal to come up
with a way to do the large-blob offload without resorting to hacks
(like LFS and annex, where "the same object contents will always
result in the same object name" is deliberately broken), and "object
name must match what the data hashes down to" is a basic requirement
for that.
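
Such a sanity check is cheap on either side of the interface, since an
object name is just the SHA-1 of "<type> <size>\0" followed by the
contents.  A sketch (the function names here are made up):

    import hashlib

    def object_name(objtype, data):
        header = ("%s %d\0" % (objtype, len(data))).encode()
        return hashlib.sha1(header + data).hexdigest()

    def verify(sha1, objtype, data):
        # reject "get" output or "put" input whose contents do not
        # hash down to the claimed object name
        if object_name(objtype, data) != sha1:
            raise ValueError("object %s does not match its contents" % sha1)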

Thanks.


