On Tue, May 01, 2007 at 05:24:54PM -0700, Linus Torvalds wrote:
> The only thing that was gcc-specific about it is that gcc goes to some
> extreme lengths to turn the constant-sized division into a sequence of
> shifts/multiplies/subtracts, and can often turn a division into something
> like ten cheaper operations instead.
>
> But that optimization was also what made gcc take such a long time if the
> constant division is very common (ie a header file with an inline
> function, whether that function is actually _used_ or not apparently
> didn't matter to gcc)

There used to be a Dumb Algorithm(tm) in there - basically, gcc went
through all decompositions of the constant it was multiplying by, and it
did that in the dumbest way possible (i.e. without keeping track of things
like "oh, we'd already found the best way to multiply by 69, no need to
redo that again and again").  _That_ is what blew the build time to hell
for a while.

And yes, that particular idiocy got fixed (they added a moderately-sized
LRU of triplets <constant, price, best way to multiply by it>, which
killed the recalculation from scratch for each pointer subtraction and
got the complexity of dealing with an individual subtraction into a
tolerable range).

That's separate from runtime issues, of course.
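
For illustration, a toy of the strength reduction Linus mentions above -
this is my own example, not gcc's code; the function name and the
brute-force check are mine.  An unsigned division by a constant is
replaced by a multiply with a precomputed reciprocal plus a shift;
0xCCCCCCCD is the standard magic constant for 32-bit division by 10, and
gcc derives such constants itself for whatever constant divisor it sees.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t div10(uint32_t x)
{
        /* 0xCCCCCCCD == ceil(2^35 / 10); a 64-bit multiply followed by
         * a 35-bit shift gives exactly x / 10 for every 32-bit x. */
        return (uint32_t)(((uint64_t)x * 0xCCCCCCCDu) >> 35);
}

int main(void)
{
        uint32_t x;

        /* brute-force sanity check over a range of inputs */
        for (x = 0; x < 10000000; x++)
                assert(div10(x) == x / 10);
        printf("div10(12345) = %u\n", div10(12345));
        return 0;
}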
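
And a toy sketch of the fix described above - the struct, names, cost
model and direct-mapped cache are mine, not gcc's, and a real version
would cache the recipe as well, not just the price.  The point is the
cache: look up <constant, cost> before recursing, so the best way to
multiply by 69 is worked out once.  Drop the lookup and you are back to
the Dumb Algorithm(tm), re-walking the shared subproblems on every query.

#include <stdint.h>
#include <stdio.h>

#define CACHE_SLOTS 1024                /* "moderately-sized" */

struct slot {
        uint64_t constant;
        unsigned cost;
        int valid;
};

/* direct-mapped stand-in for the LRU of <constant, price> */
static struct slot cache[CACHE_SLOTS];

/* cost, in shift/add/sub steps, that this simple search assigns to
 * building x*c from x */
static unsigned mult_cost(uint64_t c)
{
        struct slot *s = &cache[c % CACHE_SLOTS];
        unsigned cost;

        if (c <= 1)
                return 0;               /* x*1 is just x */

        if (s->valid && s->constant == c)
                return s->cost;         /* already solved this constant */

        if ((c & 1) == 0) {
                /* a run of low zero bits is a single shift */
                uint64_t odd = c;
                while ((odd & 1) == 0)
                        odd >>= 1;
                cost = mult_cost(odd) + 1;
        } else {
                /* toy assumes c is nowhere near overflowing */
                unsigned via_add = mult_cost(c - 1) + 1; /* (c-1)*x + x */
                unsigned via_sub = mult_cost(c + 1) + 1; /* (c+1)*x - x */
                cost = via_add < via_sub ? via_add : via_sub;
        }

        s->constant = c;
        s->cost = cost;
        s->valid = 1;
        return cost;
}

int main(void)
{
        printf("multiply by 69: %u ops\n", mult_cost(69));
        printf("multiply by 625: %u ops\n", mult_cost(625));
        return 0;
}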