Hi Olaf, all,

* Olaf Lenz wrote on Wed, Feb 09, 2011 at 03:54:31PM CET:
> Frankly, I do not really understand the fuss about configure running
> for a few tens of seconds.  After all, you usually have to do it only
> once if you are a user.  If you are a developer, you have to run it
> only whenever a new file is added.  Is that so much of a pain?

Well, if you are a distribution, then you might have to run literally
thousands of configure scripts, at least one per autotooled package.
It might be possible to build packages in parallel, but that requires
more infrastructure, etc.

As another example, a full bootstrap of GCC with all languages enabled
includes more than 60 configure runs, while the 'make' and
'make -k check' stages of GCC parallelize fairly well.  So yes, there
are situations where it really matters, especially if you have a
48-way big honking build machine or a distcc build farm.  Just out of
efficiency considerations ("green computing" for buzzword fans) we
should try to minimize overhead.

> And about the size of the configure script of a few MB, is that
> really any problem in a time where we have GBs of memory and TBs of
> hard disk space?

I hope that it's becoming less and less of an actual problem; with the
move to shell functions we've already saved some space.  We can
probably go a bit further in rationalizing things, but configure
scripts won't ever be really small.

As to the choice of shell: it is possible to override the shell that
configure uses today, with

  CONFIG_SHELL=/bin/foosh /bin/foosh ./configure

I know it isn't pretty.

FWIW, if we were to parallelize in shell, then bash is probably the
best option yet, because all the other shells showed problems with the
parallel Autotest.  Except zsh, but that isn't particularly faster
either.

Cheers,
Ralf
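P.S.  To make the above a bit more concrete, here are a few untested
sketches.  First, one way a distribution could run the configure
scripts of many unpacked package trees in parallel; the pkg-*/
directory names are made up for illustration:

  # Run up to 8 configure scripts at once, one per package tree.
  ls -d pkg-*/ | xargs -P 8 -I {} sh -c 'cd "{}" && ./configure'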
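Second, how a parallel GCC bootstrap is typically driven from a
separate build directory; the configure runs inside it still happen
serially, which is exactly the bottleneck I mean (adjust the -j count
to your machine):

  # Top-level GCC build; 'make bootstrap' fans out across 48 jobs,
  # but each subdirectory's configure still runs on its own.
  ../gcc/configure --enable-languages=all
  make -j48 bootstrap
  make -j48 -k check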
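Third, the CONFIG_SHELL override above spelled out with bash;
exporting the variable should also let any sub-package configure
scripts pick up the same shell:

  # Make configure (and any configure scripts it recursively
  # invokes) run under bash instead of /bin/sh.
  CONFIG_SHELL=/bin/bash
  export CONFIG_SHELL
  $CONFIG_SHELL ./configure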
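And the parallel Autotest mode mentioned above, assuming a testsuite
generated by a recent enough Autoconf:

  # Run up to 8 test groups concurrently.
  ./testsuite -j8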