Hello! Thanks for the links on general programmer requirements; I had already been reading similar material, though the hacking guide was new compared to what I had found so far. The reason I want to understand Git in detail is that I intend to use it for wide-range distribution of software code blocks. I would like to know whether the following scenario will "blow up" something, or whether it has other anti-Git patterns hidden inside.

1. Git repositories vs. "traditional" branch/merge

Within the current spec, we have identified VCS-supported branching/merging as the only feasible option for conflict management. With Git this seems to translate into forked/linked independent repositories, which looks like the best option: every independent software block would have a root repository of its own. Is there any reason to use branching at all in Git for distributing between-project data, even though in a traditional VCS the projects would live in the same repository?

2. Scalability, with tiny repositories chained down to the end developer

Essentially none of the "bottom" level will push changes back, so those repositories are effectively read-only, and they sync from their nearest upstream repository with normal pulls (which could likely be automated). I am not worried about "seconds to pull"; rather, is there anything in this architecture that creates a single bottleneck? Is there a scalability issue if there are, say, 10 levels from the root repository to the end-user bottom (with various feature-adding or tailoring repositories in between), and the total number of children at the bottom level grows to hundreds of thousands?

3. Distribution of catalogues; registering "Abstractions"

Registration of new "Abstractions" from any provider (whether a major organization or a single consultant) would be handled through a dedicated "distribution" repository chain. Pushing a "Registration Request" into the catalogue would be fulfilled by validating the data in the claimed repository.
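To make that validation step concrete, here is a rough sketch of what the distribution repository's side might do on receiving a request. The request format and the `validate_registration` helper are my own invention (the naive `sed` extraction stands in for a proper schema-aware XML parse); the point is only that `git ls-remote` can cheaply test whether the claimed repository actually exists and is reachable:

```shell
# validate_registration REQUEST_FILE
# Hypothetical server-side check for a pushed "Registration Request".
# Succeeds only if the repository claimed in the request is reachable.
validate_registration() {
    request="$1"
    # Naive extraction for the sketch only; a real hook would run the
    # request through a schema-aware XML parser.
    url=$(sed -n 's|.*<repository>\(.*\)</repository>.*|\1|p' "$request")
    [ -n "$url" ] || return 1
    # The actual validation: contact the claimed repository.
    git ls-remote "$url" >/dev/null 2>&1
}
```

A hook like this would run on the "distribution" repository before the registration is accepted into the catalogue.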
The format of the data is strictly schema-based XML all the way. The openly used "root" catalogue would be synced globally (daily or so, not immediately); the dedicated "private" or other independent catalogue providers would be in charge of their own policies. Is there any experience with this kind of use of Git?

4. Handling "private" catalogues; repository access filtered by client access control

Private catalogues would be handled so that their existence/registration is pushed up the chain in the standard way, but all of their data and content would be contained only within the repository itself. Hence, only clients that can connect to the repository could fetch the actual catalogue information, and they would cache their catalogues in the same manner as the root catalogue. Even if clients received improper private entries in their connection lists, they would only get real information out if their credentials were authorized on the target repository. Any blockers that come to mind with this?

Kalle
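P.S. For what it's worth, here is a minimal stand-in for the chained distribution in point 2: three levels instead of ten, with illustrative paths and names. Each level only ever interacts with its nearest upstream, and nothing is pushed back up from the bottom:

```shell
# A 3-level stand-in for the N-level chain: root -> mid -> leaf.
set -e
base=$(mktemp -d)

git init -q --bare -b main "$base/root.git"

# The provider publishes a software block at the root.
git clone -q "$base/root.git" "$base/work"
echo '<block version="1"/>' > "$base/work/block.xml"
git -C "$base/work" add block.xml
git -C "$base/work" -c user.name=demo -c user.email=demo@example.invalid \
    commit -qm 'block v1'
git -C "$base/work" push -q origin HEAD:refs/heads/main

# An intermediate "tailoring" level is simply a clone of its upstream...
git clone -q --bare "$base/root.git" "$base/mid.git"

# ...and the end developer clones from the *nearest* level, not the root.
git clone -q "$base/mid.git" "$base/leaf"

# A later change at the root reaches the bottom through ordinary,
# per-level fetches/pulls -- the part that could be automated.
echo '<block version="2"/>' > "$base/work/block.xml"
git -C "$base/work" -c user.name=demo -c user.email=demo@example.invalid \
    commit -qam 'block v2'
git -C "$base/work" push -q origin HEAD:refs/heads/main
git -C "$base/mid.git" fetch -q origin main:main
git -C "$base/leaf" -c pull.ff=only pull -q
```

The only scaling pressure I can see in this shape is read load on each upstream, and that is per level, not global.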