On Thu, 30 Mar 2023, Thomas Jahns wrote:
> speed up configure scripts is much narrower. Also I think having
> quicker turnaround in the usual development environment is still
> very valuable, provided that seldom-used deployment targets remain
> fully functional. If one compares what the original ksh does in
> terms of temporary files and forked processes to how a modern zsh or
> bash runs the very same script, one finds there is also historical
> precedent for the kind of optimization I'm proposing.
I know that modern ksh93 (https://github.com/att/ast) provides
built-in implementations of POSIX utilities and maps common paths to
the built-in implementations. Does bash do that as well?
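As far as I know, bash builds in a number of common utilities but does not do ksh93-style path binding; it is easy to check how a given bash resolves the commands configure scripts call most often. A small illustration (the utilities named are just examples):

```shell
# `type -t` reports how bash resolves a name; `enable -a` lists the
# builtins this particular bash was compiled with.
bash -c 'type -t test'     # prints: builtin
bash -c 'type -t printf'   # prints: builtin
bash -c 'type -t sed'      # prints: file (sed stays an external command)
bash -c 'enable -a' | wc -l  # count of available builtins
```

So `test` and `printf` never cost a fork in bash, while anything reported as `file` still does.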
Obviously, excessive forks, temporary files, use of a real disk, and
so on cause a slow-down.
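To illustrate the fork cost concretely: many small text operations that older scripts delegate to external commands can stay inside the shell as parameter expansions. A minimal example:

```shell
# Two ways to strip the directory part of a path.  The first forks a
# subshell plus an external basename process; the second is pure
# parameter expansion and never leaves the shell.
path=/usr/local/bin/gcc

base_forked=$(basename "$path")   # fork + exec
base_inline=${path##*/}           # no fork at all

echo "$base_forked $base_inline"  # prints: gcc gcc
```

Run a few thousand times, as configure scripts do, the difference between those two forms is where much of the wall-clock time goes.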
>> One would think that a "compiler" test should be cacheable given
>> the same compiler with similar options.
> That's certainly true for an unchanged configure script running
> from the same initial state (arguments, environment variables, etc.)
> and can probably be used to good effect in e.g. CI/CD setups. For
> tools like spack, where a rebuild almost always means that some
> aspect of the environment has changed too (OS update, changed
> flags, newer compiler version), I'm not so sure test results can
> safely be cached. At least not without keeping track of
> substantially more state than current autoconf caching provides.
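One way to track that extra state would be to fold it into the cache key itself, so that a toolchain or flag change automatically invalidates old results. A sketch (the variable names are illustrative, not any existing autoconf interface):

```shell
# Hypothetical cache key covering more state than config.cache tracks:
# the compiler path, the flags, and the compiler's own version banner
# all feed the hash, so changing any of them yields a different key.
CC=${CC:-cc}
cc_version=$("$CC" --version 2>/dev/null | head -n 1)
key_input="$CC|$CFLAGS|$cc_version"
cache_key=$(printf '%s' "$key_input" | cksum | awk '{print $1}')
echo "cache key: $cache_key"
```

Anything else that influences test outcomes (sysroot, linker, relevant environment variables) would have to feed the same hash.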
It would require some sort of high-speed database (perhaps residing
in a co-process, or a file-based database) to store known results,
and a fast hashing strategy to identify when the inputs match a known
result. It would only help if the cache lookups are less expensive
than the actual tests.
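A minimal sketch of such a file-based store, assuming results are keyed by a checksum of the test's inputs (the function and layout here are invented for illustration):

```shell
# Toy file-backed result cache: one file per key, keyed by a checksum
# of the test inputs.  A hit costs one hash plus one file read, which
# is the cost that must stay below the price of running the real test.
cachedir=${TMPDIR:-/tmp}/result-cache.$$
mkdir -p "$cachedir"

cached_check() {
    # $1 = description of the test inputs, $2 = result to store on miss
    key=$(printf '%s' "$1" | cksum | awk '{print $1}')
    if [ -f "$cachedir/$key" ]; then
        cat "$cachedir/$key"                   # hit: reuse stored result
    else
        printf '%s\n' "$2" > "$cachedir/$key"  # miss: run test, store
        printf '%s\n' "$2"
    fi
}

first=$(cached_check "cc -O2 | checks for stdint.h" "yes")
second=$(cached_check "cc -O2 | checks for stdint.h" "ignored")
echo "$first $second"   # prints: yes yes  (second call hits the cache)
rm -rf "$cachedir"
```

Here the second call never "runs" the test; whether that lookup beats a fork-plus-compile in practice is exactly the open question.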
Bob
--
Bob Friesenhahn
bfriesen@xxxxxxxxxxxxxxxxxxx, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Public Key, http://www.simplesystems.org/users/bfriesen/public-key.txt