On Tue, 12 Feb 2008, Linus Torvalds wrote:
>
> (b) we compare each object to the "window-1" preceding objects, which is
> how I got the O(windowsize^2)

That's not really true, of course.

But my (broken and inexact) logic is that we get one cost multiplier from
the number of objects, and one from the size of the objects. So *if* we
don't limit the window size, we get a big slowdown from raising the window
in number of objects: not only do we get a slowdown from comparing more
objects, we spend relatively more time comparing the *large* ones to begin
with, and having more of them just makes it even more skewed - when we hit
a series of big blocks, the window will also contain more big blocks, so
it's kind of a double whammy.

But I don't think calling it O(windowsize^2) is really correct. It's still
O(windowsize), it's just that the purely "number-of-objects" measure
doesn't account for big objects being much more expensive to diff. So you
really want to make the *memory* limiter the big one, because that's the
one that actually approximates how much time you end up spending.

So ignore that O(n^2) blather. It's not correct. What _is_ correct is that
we want to aggressively limit memory size, because CPU cost goes up
linearly not just with the number of objects, but also super-linearly with
the size of the objects ("super-linear" due to bad cache behavior and, in
the worst case, due to paging).

		Linus
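
To make that cost model concrete, here is a small, self-contained C sketch.
It is not git's actual pack-objects code; the object sizes and the "one delta
attempt costs roughly the sum of the two buffer sizes" model are illustrative
assumptions. It shows why the work grows linearly in the window size but is
weighted by object size, so a stretch of big objects in the window hurts
twice: big targets and big candidates at the same time.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical object sizes, in bytes (made-up numbers, just to
 * illustrate the shape of the cost). */
static const size_t obj_size[] = {
	4096, 8192, 1048576, 2097152, 4194304, 16384, 4096
};
#define NR_OBJECTS (sizeof(obj_size) / sizeof(obj_size[0]))

int main(void)
{
	size_t window = 10;		/* like --window=10 */
	unsigned long long work = 0;
	size_t i, j, first;

	/*
	 * Each object is compared against up to window-1 preceding
	 * objects, and a single delta attempt is charged roughly the
	 * size of the two buffers involved.  The candidate count grows
	 * linearly with the window, but every term is weighted by
	 * object size.
	 */
	for (i = 0; i < NR_OBJECTS; i++) {
		first = i >= window - 1 ? i - (window - 1) : 0;
		for (j = first; j < i; j++)
			work += obj_size[i] + obj_size[j];
	}

	printf("approximate work: %llu byte-comparisons\n", work);
	return 0;
}

Under this model, doubling the window roughly doubles the candidates per
object (linear), while the painful part comes from the size weighting, which
is exactly what a memory limit on the window caps directly.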