On Sat, Mar 02, 2019 at 09:05:24PM +0100, Johannes Schindelin wrote:

> > That seems reasonable (regardless of whether it is in a script or in
> > the Makefile). Another option is to use -maxdepth, but that involves
> > guessing how deep people might actually put header files.
>
> It would also fail to work when somebody clones an unrelated repository
> that contains header files in the top-level directory into the Git
> worktree. I know somebody like that: me.

Good point.

By the way, "make hdr-check" already fails for me on master, as I do not
have libgcrypt installed, and it unconditionally checks sha256/gcrypt.h.
I wonder if this actually does point to hdr-check needing to be smarter
about checking only the headers that are actually used in compilation on
your platform. Or if that file should just be added to the set of
excluded headers.

> > We should be able to add back $(GENERATED_H) as appropriate. I see
> > you did it for the non-computed-dependencies case. Couldn't we do the
> > same for $(LOCALIZED_C) and $(CHK_HDRS)?
>
> As you figured out, CHK_HDRS *specifically* excludes the generated
> headers, and as I pointed out: LOCALIZED_C includes $(GENERATED_H)
> explicitly.
>
> So I think we're good on that front.

Yeah, agreed.

> > > Likewise, we no longer include not-yet-tracked header files in
> > > `LIB_H`.
> >
> > I think that's probably OK.
>
> It does potentially make developing new patches more challenging. I,
> for one, do not immediately stage every new file I add, especially not
> header files. But then, I rarely recompile after only editing such a
> new header file (I would more likely modify also the source file that
> includes that header).
>
> So while I think this issue could potentially cause problems only
> *very* rarely, I think that it would be a really terribly confusing
> thing if it happened.
>
> But I probably worry too much about it?

I think it's not ideal, but it's probably an acceptable tradeoff. The
LIB_H list is used for three things:

  - hdr-check, which I'd think would generally be run periodically on a
    full tree to catch any new header breakages. But I dunno, maybe
    people want to run it as soon as they've written new code.

  - the .po generation, which generally is a separate workflow from
    writing new header files.

  - the header-dependency fallback code. This is definitely the place
    where somebody just adding a new header file and running "make"
    might get bitten. But it only kicks in for ancient, crappy compilers
    that don't do dependency computation, so I think most developers
    would not be using it.

(This is your cue to explain to me how some workflow involving MSVC does
not compute dependencies, and I'm unknowingly dismissing a large portion
of developers ;) ).

-Peff
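
P.S. If we go the "add it to the excluded headers" route, I imagine it
would look roughly like the untested sketch below. EXCEPT_HDRS is just a
stand-in name for whatever exclusion list CHK_HDRS ends up being
filtered against, and GCRYPT_SHA256 is (if I remember the name right)
the knob we already use to select the libgcrypt SHA-256 backend.

  # untested sketch: skip the libgcrypt header unless we are actually
  # building against libgcrypt
  ifndef GCRYPT_SHA256
  EXCEPT_HDRS += sha256/gcrypt.h
  endif
  CHK_HDRS = $(filter-out $(EXCEPT_HDRS),$(LIB_H))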
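
And to be clear about the fallback in my last bullet: it is the blanket
"every object depends on every header" rule, roughly like this (I am
going from memory, so the exact name of the guard may be off):

  # untested sketch: when the compiler cannot compute header
  # dependencies itself, conservatively rebuild every object whenever
  # any header in LIB_H changes
  ifndef COMPUTE_HEADER_DEPENDENCIES
  $(OBJECTS): $(LIB_H)
  endif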