Hi,

On Wed, Feb 16, 2005 at 12:57:20PM -0500, Dan Manthey wrote:
> Well, trivially, it's possible to run an arbitrary number of tests
> together and see if _none_ of them fail: `cc foo.c bar.c' and then only
> run them separately if there is a failure.  It might also be possible
> with some known compilers (e.g. gcc) to grep the error output for file
> names.  I.e., early on try `cc -c okay1.c okay2.c' to see if you can
> compile multiple files, then try `cc -c okay.c bad.c' and grep for the
> names `okay' and `bad' to see if you can tell where the error is.  Even
> if you can't figure out which file has an error, it could speed up the
> common case of no errors.

When I was thinking about the issue, I imagined several #include's in one
file, to detect the presence of headers.  We both came to the same
conclusion: we can speed up the common case when all headers are present,
but we cannot get several independent bits of information.

Even if we were able to parse the error messages, there would still be
other problems:

1) What if a broken header confuses the preprocessor, e.g. with an
   unmatched #if?  (Similarly for error recovery in the compiler.)
2) What about header conflicts?
3) What if the compiler stops after 10 errors?

I also think that the m4 code which would gather the things to test for
could become too complicated.

The only thing which seems to be doable is to prepare, by hand, a
collection of features which are usually available.  Then we can write an
aggregate test, which would run fast and prove that all these features are
available, and would then set all the cache variables.  Or we could have
several such collections.

That test would be run near the beginning of the script.  We could also
implement an algorithm to decide whether the test should be included
(e.g., only if at least five of the features in the collection are used in
the whole configure script).

I could help with the m4 part of the implementation, but I cannot design
the sets to be aggregated.
I'm no expert in ``systemology'' or ``portability''.  Paul, are you
willing to prepare such sets of common features?

On a related topic: I'm going to propose a new gnulib module, "mbsupport",
which would #define MBS_SUPPORT iff the host has usable multibyte support.
(The idea comes from Arnold Robbins; see also the GNU grep CVS.)

mbsupport.h contains:

    #if defined(HAVE_ISWCTYPE) \
        && defined(HAVE_LOCALE_H) \
        && defined(HAVE_MBRLEN) \
        && defined(HAVE_MBRTOWC) \
        && defined(HAVE_WCHAR_H) \
        && defined(HAVE_WCRTOMB) \
        && defined(HAVE_WCSCOLL) \
        && defined(HAVE_WCTYPE) \
        && defined(HAVE_WCTYPE_H) \
        && (defined(HAVE_STDLIB_H) && defined(MB_CUR_MAX))
    /* We can handle multibyte strings.  */
    # define MBS_SUPPORT 1
    #else
    # undef MBS_SUPPORT
    #endif

So configure.ac has to do this:

    AC_CHECK_HEADERS(locale.h wchar.h wctype.h)
    AC_CHECK_FUNCS(iswctype mbrlen wcrtomb wcscoll wctype)
    # We can use mbrtowc only if we have mbstate_t.
    AC_FUNC_MBRTOWC

Isn't this a good opportunity to create an aggregated test?  (The
situation might be a bit different here, though: we don't care about the
individual results, so we could create a specialized test.)

Have a nice day,
	Stepan

_______________________________________________
Autoconf mailing list
Autoconf@xxxxxxx
http://lists.gnu.org/mailman/listinfo/autoconf