Re: Thoughts about memory requirements in traversals [Was: Re: [RFC] Submodules in GIT]

On Sun, 3 Dec 2006, Josef Weidendorfer wrote:
> 
> Thinking about this...
> You have to make very sure to always update the caching layer containing
> the backlinks on every addition of a further object. You can do this
> because you always reached this new object by some other object, which
> exactly is the backpointer.

You're missing the big issue.

The issue is that a cache like that would ABSOLUTELY SUCK.

You could speed up the non-common operations with it, but:

 - any changes would become a LOT more expensive to do, because they all 
   need to update every single object they add (ie a "commit" would now 
   have to add backpointers TO EVERY SINGLE BLOB).

   Imagine what this does to something like the kernel, where a commit 
   reaches 22,000 files!

   You can do it at a finer granularity (ie do just the direct backlinks 
   and only do the "tree->blob" and "tree->tree" things rather than the 
   full commit reachability), but it's still going to be MUCH more painful 
   than what we do now (see the sketch after this list).

 - the cache would be a lot bigger than the current pack-files, and it 
   would be fragile as hell to boot. Because it needs to get rewritten for 
   every operation, it gets corrupted much more easily, and that's 
   ignoring things like race conditions, so it would now need a ton of 
   locking that git simply doesn't do at all.

 - everything would basically slow down.

 - you couldn't do shared object databases AT ALL, because backpointers 
   wouldn't work. The whole _reason_ you can share object databases is the 
   same reason we can't have backpointers: objects are immutable and never 
   change depending on circumstances.
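
Purely to make that bookkeeping concrete, here's a sketch of what the 
write path for a single tree would turn into. The helper names are made 
up (nothing like them exists in git); it only shows the extra per-entry 
work a tree->blob/tree->tree backlink cache would force on every write:

/* hypothetical stand-ins: store an object / update the backlink cache */
static void write_object(const unsigned char *sha1)
{
	(void)sha1;		/* imagine: deflate and store under .git/objects/ */
}

static void backlink_db_add(const unsigned char *child,
			    const unsigned char *parent)
{
	(void)child;
	(void)parent;		/* imagine: a locked, on-disk cache update */
}

/* today: writing a tree touches exactly one new object */
static void write_tree_today(const unsigned char *tree_sha1)
{
	write_object(tree_sha1);
}

/* with backlinks: one extra cache update per entry the tree points at */
static void write_tree_with_backlinks(const unsigned char *tree_sha1,
				      const unsigned char (*entry_sha1)[20],
				      int nr_entries)
{
	int i;

	write_object(tree_sha1);
	for (i = 0; i < nr_entries; i++)
		backlink_db_add(entry_sha1[i], tree_sha1);
	/*
	 * With commit-level reachability instead of per-tree links, one
	 * kernel commit would fan out to ~22,000 of these updates.
	 */
}

The contrast is the whole point: today a new object costs O(1) writes, 
with backlinks it costs O(number of things it points at).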

The _only_ downside of the current situation is literally the 24 or 28 
bytes per object that we look at. For most operations we don't even look 
at that many objects, so it's really only a worst-case cost.
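
For reference, that per-object state is roughly the following (a 
simplified sketch, not the exact definition in object.h, but the size is 
in the same ballpark):

/* simplified sketch of the in-memory state a traversal keeps per object */
struct traversal_object {
	unsigned int flags;	/* SEEN etc. traversal marks, parsed/used bits */
	unsigned int type;	/* commit, tree, blob or tag */
	unsigned char sha1[20];	/* the object name */
};				/* 4 + 4 + 20 = 28 bytes */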

> In fact, this "cache" can be created with a usual object traversal
> (which has the original memory requirement), but as long as we do
> not add objects to the database, further traversals would only need
> a fraction of memory.

Right. If the project is totally read-only, the cache would work well.
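
For what it's worth, the read-only build really is just one pass. A 
sketch, with hypothetical types and helpers rather than anything from 
git:

/* hypothetical object record: name plus the objects it points at */
struct obj {
	unsigned char sha1[20];
	int nr_refs;
	struct obj **refs;
};

/* hypothetical backlink store: remember that 'parent' points at 'child' */
static void backlink_add(struct obj *child, struct obj *parent)
{
	(void)child;
	(void)parent;
}

static void build_backlink_cache(struct obj **all, int nr)
{
	int i, j;

	for (i = 0; i < nr; i++)		/* every object, once */
		for (j = 0; j < all[i]->nr_refs; j++)
			backlink_add(all[i]->refs[j], all[i]);
	/*
	 * Lookups are cheap from here on -- until the next object is
	 * written, at which point the cache is stale and has to be kept
	 * in sync or rebuilt, which is exactly the problem above.
	 */
}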

For real development, it would SUCK. It would make things like "git reset" 
very expensive indeed, for example (you'd have to unwind the whole cache: 
either regenerating it - which would take minutes - or being very careful 
indeed and being able to always remove objects properly and keeping track 
of them 100%).

IOW, it's nasty nasty nasty. And it doesn't really even help anything but 
a case that we actually already handle really well (I spent a lot of 
effort on making the memory footprint minimal).

But it does mean that you do NOT want to traverse a hundred different 
projects "as if" they were one. That's really the only thing it means.

And since you can do submodules as independent projects, and you SHOULD do 
them that way for tons of other reasons _anyway_, even that isn't a reason 
to screw up all the _wonderful_ properties of the git object database.

So what I'm trying to say is that the immutable non-backpointer nature of 
the git database is what makes it so WONDERFUL. It's efficient, it's 
dense, it's stable, and it allows us all the clever things we do. But it 
means that we do end up always spending 28 bytes per object, and we can 
never throw those 28 bytes away during a single "traversal" run.

		Linus