Should sparse checkout be extended to clone, fetch, commit, and merge?

Hi!

Sparse checkout is a great start, but I wish that clone, fetch, commit,
and merge also had sparse functionality to skip operating on objects
associated with directories not listed in the sparse-checkout (or
similar) configuration file. To clarify, this is different from a
shallow clone or fetch, which skips old commits but still fetches the
objects associated with every file in the tree.
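
For reference, by "sparse-checkout configuration" I mean the existing
mechanism driven by core.sparseCheckout and the .git/info/sparse-checkout
pattern file. A rough setup that keeps only a (hypothetical) /B directory
in the work tree looks something like this:

    git config core.sparseCheckout true
    echo "/B/" > .git/info/sparse-checkout
    git read-tree -mu HEAD    # prune the work tree to the listed paths

Even with this, .git still contains every object in the repository.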

This functionality would be very useful when dealing with large
repositories. Sparse checkout helps, but it doesn't reduce the network
traffic or the disk space needed for the .git repository at all. git
submodule is an alternative, but ideally one wouldn't need to break up
a repository into submodules as it grows.

Are there any plans to implement this? And if not, why not? How much
effort would it take to implement?

I don't see a conceptual reason why this can't be done. In particular,
it is not hard to see that making a commit after a sparse clone/fetch
is still possible even though one does not know the content of every
file in the repository. For example, assume a repository has two
directories /A and /B at the root, and a user fetched only the objects
in /B in order to change a file in /B. git could compute the new root
tree hash for the commit by computing the new hash for /B and simply
reusing the old hash for /A, without needing to know the content of /A.
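
To sketch this with today's plumbing commands (made-up object ids, and
not something any sparse clone/fetch supports yet): the new root tree
can be written from /A's old tree id plus /B's freshly written tree id,
and a commit created on top of it, without /A's objects ever being
present locally:

    # Hypothetical ids: OLD_A is /A's tree id taken from the current root
    # tree, NEW_B is the tree id just written for the modified /B.
    OLD_A=1111111111111111111111111111111111111111
    NEW_B=2222222222222222222222222222222222222222

    # --missing lets mktree record /A's id without having its objects.
    NEW_ROOT=$(printf '040000 tree %s\tA\n040000 tree %s\tB\n' "$OLD_A" "$NEW_B" |
               git mktree --missing)

    # The commit only records the root tree id, so /A's contents are
    # never needed.
    echo "change a file in B" | git commit-tree "$NEW_ROOT" -p HEAD

The only thing required from /A is its tree id, and that is already
recorded in the root tree of the commit being built on.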

Cheers,
    Matthias


