Re: [Foundation-l] Wikipedia meets git

Wow, I am impressed.

Let me remind you of one thing: most people work on very small
subsets of the data. Very few people will ever want all of it; that
would be like fetching every revision from every git repo in
existence.

My idea is for smaller chapters, towns, or regions that want to get
started easily to host their own branches of the data relevant to
them. Given a world full of such servers, the sum would be great, but
the individual branches needed at any one time would be small.
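
For illustration, a shallow, single-branch clone is roughly the shape
of what such a regional server would do (the URL and branch name here
are made up, and this assumes the data has already been split into
per-region branches):

  # fetch one regional branch with truncated history,
  # instead of the entire dataset
  git clone --depth 100 --branch region-x http://example.org/wiki.git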

mike

On Wed, Oct 21, 2009 at 9:49 PM, Bernie Innocenti <bernie@xxxxxxxxxxx> wrote:
> [cc+=git@xxxxxxxxxxxxxxx]
>
> On Wed, 21-10-2009 at 08:43 -0400, Samuel Klein wrote:
>> That sounds like a great idea.  I know a few other people who have
>> worked on git-based wikis and toyed with making them compatible with
>> mediawiki (copying bernie innocenti, one of the most eloquent :).
>
> Then I'll do my best to sound as eloquent as expected :)
>
> While I think git's internal structure is wonderfully simple and
> elegant, I'm a little worried about its scalability in the wiki use case.
>
> The scenario for which git's repository format was designed is
> "patch-oriented" revision control of a filesystem tree. The central
> object of a git repository is the "commit", which represents a set of
> changes to multiple files. I'll disregard all the juicy details of how
> the changes are actually packed together to save disk space, making
> git's repository format amazingly compact.
>
> Commits are linked to each other in order to represent the history. Git
> can efficiently represent a highly non-linear history with thousands of
> branches, each containing hundreds of thousands of revisions. Branching and
> merging huge trees is so fast that one is left wondering if anything has
> happened at all.
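>
> A quick way to convince yourself of this: a branch is nothing more
> than a 41-byte file containing the sha-1 of a commit, so creating one
> copies no data at all, no matter how large the tree is (the hash
> below is a placeholder):
>
>  $ git branch demo
>  $ cat .git/refs/heads/demo
>  9fa0b3f2e1d4...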
>
> So far, so good. This commit-oriented design is great if you want to
> track the history of *the whole tree* at once, applying related
> changes to multiple files atomically. In Git, as in most other version
> control systems, there's no such thing as a *file* revision! Git
> manages entire trees. Trees are assigned unique revision numbers (in
> fact, ugly sha-1 hashes), and can optionally be tagged or branched at
> will.
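>
> To make this concrete, you can inspect any commit object and see that
> it records one tree, its parent(s), and some metadata, with no
> per-file information at all (the hashes, name and message below are
> placeholders):
>
>  $ git cat-file -p HEAD
>  tree 4b825dc6...      <- the root tree of this revision
>  parent 1a410efb...    <- the previous commit
>  author A. Hacker <ah@example.org> 1256112000 +0200
>  committer A. Hacker <ah@example.org> 1256112000 +0200
>
>  Fix a typo in REPORTING-BUGS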
>
> And here's the catch: the history of individual files is not
> directly represented in a git repository. It is typically scattered
> across thousands of commit objects, with no direct links to help find
> them. If you want to retrieve the log of a file that was changed only 6
> times in the entire history of the Linux kernel, you'd have to dig
> through *all* of the 170K revisions in the "master" branch.
>
> And it takes some time, even though git is blazingly fast:
>
>  bernie@giskard:~/src/kernel/linux-2.6$ time git log  --pretty=oneline REPORTING-BUGS  | wc -l
>  6
>
>  real   0m1.668s
>  user   0m1.416s
>  sys    0m0.210s
>
> (My laptop has a low-power CPU; a fast server would be 8-10x faster.)
>
>
> Now, the English Wikipedia seems to have slightly more than 3M
> articles, with who knows how many revisions; tens of millions, for
> sure. Going through all of them *every time* one needs to consult the
> history of a page would be roughly 100x slower: tens of seconds. Not
> acceptable, huh?
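>
> The back-of-the-envelope math, from my numbers above: ~30M revisions
> is about 175x the kernel's 170K, so a full scan would take roughly
> 175 * 1.7s = ~300s on this laptop, or 30-40 seconds on a server 8-10x
> faster. (The 30M figure is, of course, a guess.)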
>
> It seems to me that the typical usage pattern of an encyclopedia is to
> change each article individually. Perhaps I'm underestimating the role
> of bots here. Anyway, there's no consistency *requirement* for mass
> changes to be applied atomically throughout the encyclopedia, right?
>
> In conclusion, the "tree at a time" design is going to be a
> performance bottleneck for a large wiki, with no compensating benefit.
> Unless, of course, the concept of changesets were exposed in the UI,
> which would be an interesting idea to explore.
>
> Mercurial (Hg) seems to have a better repository layout for the "one
> file at a time" access pattern. Unfortunately, it's also much slower
> than git for almost any other purpose, sometimes by an order of
> magnitude. I'm not even sure how well Hg would cope with a repository
> containing 3M files and some 30M revisions. The largest Hg tree I've
> dealt with is the "mozilla-central" repo, which is already unbearably
> slow to work with.
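>
> Concretely, Hg keeps one revlog (an .i index plus optional .d data
> file) per tracked file under .hg/store/data/, so a single file's log
> only touches that file's own revlog (the file name is just an
> example):
>
>  $ hg log README       # reads README's revlog, not the whole history
>  $ ls .hg/store/data/  # one revlog per tracked file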
>
> It would be interesting to compare notes with the other DSCM hackers,
> too.
>
> --
>   // Bernie Innocenti - http://codewiz.org/
>  \X/  Sugar Labs       - http://sugarlabs.org/
>
