Dear John Spencer,

On Sun, 23 Mar 2014 01:24:54 +0100, John Spencer wrote:

> there are many configure scripts out there that still check for things
> that are standard since at least 10 years, and doing this extensively
> and over and over (people building software themselves usually build
> more than one package) consumes a lot of time (especially due to the
> non-parallel nature of configure scripts).

As one of the core developers of Buildroot (http://buildroot.org), a
tool that automates the cross-compilation of a potentially large number
of software components to generate embedded Linux systems, I am also
very concerned about the speed of configure scripts. On a full build,
the time spent in configure scripts is often around 25% of the overall
build time, if not more.

The suggestion of using a cache is indeed very interesting. We actually
tried it a couple of years ago, with a global cache shared by all the
packages we build. However, it turned out that several packages use the
same autoconf cache variable name to store the results of tests that
are not exactly the same, which caused a number of issues. The proposal
in this thread to use a pre-defined cache containing only a selected
set of known-safe entries looks like a good idea (a minimal sketch of
what such a pre-seeded cache could look like is appended below my
signature).

However, the most important problem, as I see it, is the non-parallel
nature of autoconf tests. With CPUs gaining more and more cores, the
build step of packages (which is usually highly parallel) will keep
getting faster, while the configure step will stay stuck at the same
slow speed due to the sequential nature of autoconf. Wouldn't it be
time to think about moving autoconf to a more parallel design, where N
independent tests could run concurrently in sub-processes (see the
second sketch below)?

Thanks,

Thomas

--
Thomas Petazzoni, CTO, Free Electrons
Embedded Linux, Kernel and Android engineering
http://free-electrons.com
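
For illustration, here is roughly what such a pre-seeded cache could
look like. The variable names below are real autoconf cache
identifiers, but the "yes" values are my assumptions for a typical
glibc-based toolchain; each entry would have to be audited per target
before being declared known-safe:

    # Pre-seed a shared cache with a few well-known autoconf results.
    # Values are assumptions for a typical glibc target -- audit them
    # for your toolchain before trusting them across packages.
    cat > global-config.cache <<'EOF'
    ac_cv_func_malloc_0_nonnull=${ac_cv_func_malloc_0_nonnull=yes}
    ac_cv_func_realloc_0_nonnull=${ac_cv_func_realloc_0_nonnull=yes}
    ac_cv_header_stdint_h=${ac_cv_header_stdint_h=yes}
    EOF
    # Point each package's configure script at the shared cache:
    ./configure --cache-file=global-config.cache

The ${var=value} form is what configure itself writes into a cache
file: it only assigns a value when the variable is not already set.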
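
And a rough plain-shell illustration of the parallel idea (this is not
real autoconf output, and the check_* helpers are hypothetical
stand-ins for the generated test snippets): each independent test runs
in its own sub-process and writes its result into a private cache
fragment, and the fragments are merged once every test has completed:

    # Hypothetical sketch: run N independent feature checks in
    # parallel sub-processes, one cache fragment per check.
    for t in header_stdint_h header_unistd_h func_fork; do
        (
            if check_$t; then res=yes; else res=no; fi
            echo "ac_cv_$t=$res" > "cache.$t"
        ) &
    done
    wait                          # join all sub-processes
    cat cache.* > config.cache    # merge the per-test fragments

The real difficulty, of course, is that some autoconf tests depend on
the results of earlier ones, so only the independent ones could be
dispatched in parallel like this.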