Narrow clone implementation difficulty estimate

Hello,

We are considering using Git to manage a large set of mostly binary
files (large images, PDF files, OpenOffice documents, etc.). The
amount of data is such that it is infeasible to force every user to
download all of it, so a partial retrieval scheme is necessary.

In particular, we need to decide whether it is better to invest the
effort into implementing narrow clone, or into partitioning and
reorganizing the data set into submodules (the latter may prove to be
almost impossible for this data set). We will most likely develop a
new, very simplified GUI for non-technical users, so the details of
either approach will be hidden under the hood.


After some looking around, I think that narrow clone would probably involve:

1. Modifying the revision walk engine used by the pack generator to
allow filtering blobs using a set of path masks; a minimal sketch of
that filtering decision follows this list. (Handling the same tree
object appearing at different paths may be tricky.)

2. Modifying the fetch protocol to allow sending such filter
expressions to the server.

3. Adding the necessary configuration entries and command-line
parameters to expose the new functionality.

4. Resurrecting the sparse checkout series and merging it with the
new filtering logic. Narrow clone must imply a sparse checkout that
is a subset of the cloned paths.

5. Fixing all breakage that may be caused by missing blobs.
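
To illustrate point 1, here is a minimal, standalone sketch of the
kind of path-mask test the pack generator's walk would have to make
when deciding whether a blob is wanted. The prefix-matching rule and
all names here are made up for illustration; Git's real pathspec
machinery is more involved, and, as noted above, an object reachable
through several paths can only be dropped once every path it appears
under has been excluded.

    #include <stdio.h>
    #include <string.h>

    /*
     * Return 1 if "path" falls under one of the path masks,
     * interpreted here as simple directory prefixes.
     *
     * Note: a "skip" verdict for one path does not by itself allow
     * dropping the object, because the same tree or blob may be
     * reachable from other, wanted paths as well.
     */
    static int path_is_wanted(const char *path, const char **masks, int nr)
    {
            int i;
            for (i = 0; i < nr; i++) {
                    size_t len = strlen(masks[i]);
                    if (!strncmp(path, masks[i], len) &&
                        (path[len] == '/' || path[len] == '\0'))
                            return 1;
            }
            return 0;
    }

    int main(void)
    {
            const char *masks[] = { "docs", "images/large" };
            const char *paths[] = {
                    "docs/manual.pdf",
                    "images/large/photo0001.png",
                    "src/main.c",
            };
            int i;

            for (i = 0; i < 3; i++)
                    printf("%-30s %s\n", paths[i],
                           path_is_wanted(paths[i], masks, 2) ? "keep" : "skip");
            return 0;
    }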

I feel that the last point involves the most uncertainty, and may also
prove the most difficult one to implement. However, I cannot judge the
actual difficulty due to an incomplete understanding of Git internals.


I currently see the following additional problems with this approach:

1. Merge conflicts outside the filtered area cannot be handled.
However, in the case of this project they are estimated to be
extremely unlikely.

2. Changing the filter set is tricky. Extending the filtered area
requires connecting to the server and requesting the missing blobs,
which appears to be mostly identical to an initial clone with a more
complex filter. On the other hand, shrinking the area would leave
unnecessary data in the repository, which is difficult to reuse
safely if the area is extended back later. Finally, editing the set
without downloading the missing data essentially corrupts the
repository.

3. One of the goals of using Git is building a distributed mirroring
system, similar to the gittorrent or mirror-sync proposals. Narrow
clone significantly complicates this because of incomplete data sets.
A simple solution may be to restrict downloads to peers whose path
set is a superset of what is needed (see the sketch below), but that
may cause the system to degrade into a fully centralized one.
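
To illustrate the "superset" restriction from point 3: a peer would
be usable only if every mask we need is covered by some mask the peer
advertises. The sketch below is hypothetical (the prefix convention
and function names are made up, and no such protocol exists today);
it is only meant to show how simple the check is, and why it tends to
push all requests toward the few peers that hold wide areas.

    #include <stdio.h>
    #include <string.h>

    /* Does path mask "inner" fall under the directory prefix "outer"? */
    static int covered_by(const char *inner, const char *outer)
    {
            size_t len = strlen(outer);
            return !strncmp(inner, outer, len) &&
                   (inner[len] == '/' || inner[len] == '\0');
    }

    /*
     * A peer qualifies as a mirror only if each mask we need is
     * covered by at least one mask the peer advertises.
     */
    static int peer_has_superset(const char **wanted, int nr_wanted,
                                 const char **peer, int nr_peer)
    {
            int i, j, ok;
            for (i = 0; i < nr_wanted; i++) {
                    ok = 0;
                    for (j = 0; j < nr_peer; j++)
                            if (covered_by(wanted[i], peer[j]))
                                    ok = 1;
                    if (!ok)
                            return 0;
            }
            return 1;
    }

    int main(void)
    {
            const char *wanted[] = { "docs/manuals" };
            const char *peer_a[] = { "docs" };    /* covers docs/manuals */
            const char *peer_b[] = { "images" };  /* does not */

            printf("peer A usable: %d\n",
                   peer_has_superset(wanted, 1, peer_a, 1));
            printf("peer B usable: %d\n",
                   peer_has_superset(wanted, 1, peer_b, 1));
            return 0;
    }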


In relation to the last point, namely building a mirroring network,
I also had the idea that, in the current state of things, bundles may
be better suited to it: they can be directly reused by many peers,
and deciding what to put into a bundle is not much of a problem for
this particular project. I expect that implementing narrow bundle
support would not differ much from implementing narrow clone.
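
For reference, plain bundles can already be produced and reused by
any number of peers today; only the narrow filtering would be new
(there is no such option at the moment). The file and ref names below
are made up, but the commands themselves are standard bundle usage:

    # on the publishing side
    git bundle create project.bundle master

    # any peer can verify the file, clone from it, or fetch from it
    git bundle verify project.bundle
    git clone project.bundle project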


We are currently evaluating possible approaches to this problem, and
would like to know whether this analysis makes sense. We are willing
to contribute the results back to the Git community if and when we
implement this.

Alexander