Hi all,

In my current continuous integration environment we run configure scripts on the order of 10^5+ times a day. Each build starts in a clean workspace that is essentially a fresh Linux install. We don't ship a config.site, and we build many packages in parallel.

I tried to use the "same" --cache-file to speed things up. I wrap my ./configure invocations with a small dance that can be summarized like this:

  TMP_CACHE=`mktemp`
  if test -f /tmp/config.cache; then
    # We already have something
    cp /tmp/config.cache $TMP_CACHE
  fi
  if ./configure --cache-file=$TMP_CACHE [..other flags..]; then
    # Success
    mv $TMP_CACHE /tmp/config.cache
  fi
  rm -f $TMP_CACHE

This way each run of configure operates on its own file to avoid race conditions, but still benefits from the cache variables populated by previous runs. Of course, in practice the dance is also written to be atomic; the code above is a simplification.

This works fine except for one thing: if one package decides to invoke configure with different CFLAGS/CXXFLAGS (or any other precious variable, for that matter), I get the annoying:

  configure: loading cache /tmp/configure.cache.ea93QN
  configure: error: `CFLAGS' has changed since the previous run:
  configure: former value: `-g'
  configure: current value: `-g -Wall -Werror'

In http://lists.gnu.org/archive/html/autoconf/2011-03/msg00042.html someone asked whether there is a programmatic way to handle this situation when it arises (without duplicating all the code in configure to catch this case, obviously). I'm not aware of any, but I'm looking for a workaround, other than parsing configure's output when it errors out and re-running after patching the config.cache file, which would be a very ugly kludge.

Any suggestions?

Thanks in advance.

--
Benoit "tsuna" Sigoure
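
P.S. To make the "ugly kludge" concrete, it would be roughly the following untested sketch. It assumes the cache records precious variables in ac_cv_env_* entries and that the error text is stable enough to grep for; configure.out is just a scratch file name:

  ./configure --cache-file=$TMP_CACHE [..other flags..] >configure.out 2>&1
  if test $? -ne 0 && grep -q 'has changed since the previous run' configure.out; then
    # A precious variable changed: drop the stale ac_cv_env_* entries from the
    # cache copy and run configure again so it simply records the new values.
    sed -i '/^ac_cv_env_/d' $TMP_CACHE
    ./configure --cache-file=$TMP_CACHE [..other flags..]
  fi

Scrubbing the ac_cv_env_* lines lets the retry record the new CFLAGS/CXXFLAGS instead of aborting, but it still costs a second full configure run, hence my hope for something less clunky.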