On Wed, Sep 13, 2006 at 04:42:01PM -0700, Keith Packard wrote:
> However, this means that parsecvs must hold the entire tree state in
> memory, which turned out to be its downfall with large repositories.
> Worked great for all of X.org, not so good with Mozilla.

Does anyone know how big Mozilla (or other humongous repos, like KDE)
are, in terms of number of files?

A few numbers for repositories I had lying around:

  Linux kernel    -- ~21,000
  gcc             -- ~42,000
  NetBSD "src"    -- ~100,000
  uClinux distro  -- ~110,000

These don't seem very intimidating... even if it takes an entire
kilobyte per CVS revision to store the information we need about it to
decide how to move the frontier, that's only 110 megabytes for the
largest of these repos.

The frontier-sweeping algorithm only _needs_ to have available the
current frontier and the current frontier+1. Storing information on
every version of every file in memory might be worse; but since the
algorithm accesses this data in a linear way, it would be easy enough
to stick those records in a lookaside table on disk if really
necessary, like a bdb or sqlite file or something.

(Again, in practice, storing all the metadata for the entire 180k
revisions of the 100k files in the NetBSD repo was possible on a
desktop. Monotone's cvs_import does try somewhat to be frugal about
memory, though, interning strings and suchlike.)

-- Nathaniel

--
When the flush of a new-born sun fell first on Eden's green and gold,
Our father Adam sat under the Tree and scratched with a stick in the mould;
And the first rude sketch that the world had seen was joy to his mighty heart,
Till the Devil whispered behind the leaves, "It's pretty, but is it Art?"
  -- The Conundrum of the Workshops, Rudyard Kipling
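[Editor's sketch: to make the "lookaside table on disk" idea above concrete, here is a minimal Python illustration. The schema, table name, and functions are invented for this example and are not taken from parsecvs or monotone's cvs_import; it only shows how per-file-revision metadata could be written once to an sqlite file and streamed back in linear order, so the sweep keeps little more than the current frontier in memory.]

    import sqlite3

    def build_lookaside(path, revisions):
        """revisions: iterable of (filename, revnum, author, timestamp, changelog) tuples."""
        db = sqlite3.connect(path)
        db.execute("""CREATE TABLE IF NOT EXISTS filerev (
                          filename  TEXT,
                          revnum    TEXT,
                          author    TEXT,
                          timestamp INTEGER,
                          changelog TEXT)""")
        db.executemany("INSERT INTO filerev VALUES (?, ?, ?, ?, ?)", revisions)
        # Index so the sweep can read rows back in per-file, time order.
        db.execute("CREATE INDEX IF NOT EXISTS filerev_order "
                   "ON filerev (filename, timestamp)")
        db.commit()
        return db

    def sweep(db):
        """Walk the stored metadata linearly; only one row is materialized
        at a time, so memory use is bounded by the frontier, not by the
        total number of file revisions in the repository."""
        cur = db.execute(
            "SELECT filename, revnum, author, timestamp, changelog "
            "FROM filerev ORDER BY filename, timestamp")
        for row in cur:
            yield row   # hand each revision to the frontier-advancing logic

Whether the table lives in sqlite, bdb, or a flat sorted file matters little; the point is that linear access lets the importer trade RAM for cheap sequential disk reads.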