On Wed, 3 Feb 2021, Zack Weinberg wrote:
>> Therefore I like the idea of merely relying on the atomicity of
>> file creation / file rename operations.
>> These files should reside inside the autom4te.cache directory. I would
>> not like to change all my scripts and Makefiles that do
>>     rm -rf autom4te.cache
> Agreed. The approach I'm currently considering is: with the
> implementation of the new locking protocol, autom4te will create a
> subdirectory of autom4te.cache named after its own version number, and
> work only in that directory (thus preventing different versions of
> autom4te from tripping over each other). Each request will be somehow
> reduced to a strong hash and given a directory named after the hash
> value. The existence of this directory signals that an autom4te
> process is working on a request, and the presence of 'request',
> 'output', and 'traces' files in that directory signals that the cache
> for that request is valid. If the directory for a request exists but
> the output files don't, autom4te will busy-wait for up to some
> definite timeout before stealing the lock and starting to work on that
> request itself.
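The protocol described above can be sketched in shell, relying on the atomicity of mkdir(2). All names here (the cache layout, the do_request helper, the hash value, the 1-second poll) are illustrative stand-ins, not autom4te's actual code:

```shell
#!/bin/sh
# Sketch of the proposed lock protocol.  mkdir is atomic: exactly one
# process succeeds in creating the per-request directory and owns it.
version=X.Y                         # stand-in for the autom4te version
hash=deadbeef                       # stand-in for the request's strong hash
timeout=30                          # seconds to busy-wait before stealing

cachedir="autom4te.cache/$version"  # per-version subdirectory
lockdir="$cachedir/$hash"           # one directory per request hash

do_request () {                     # placeholder for the real work;
    : > "$1/request"                # the presence of all three files
    : > "$1/output"                 # marks the cache entry as valid
    : > "$1/traces"
}

mkdir -p "$cachedir"
if mkdir "$lockdir" 2>/dev/null; then
    # We created the directory, so we own this request.
    do_request "$lockdir"
else
    # Another process owns it: wait for its output, then steal.
    waited=0
    while [ ! -f "$lockdir/output" ] && [ "$waited" -lt "$timeout" ]; do
        sleep 1
        waited=$((waited + 1))
    done
    [ -f "$lockdir/output" ] || do_request "$lockdir"
fi
```

The key property is that no separate lock file is needed: the request directory itself is the lock, and a reader knows the entry is complete only when all three output files exist.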
This seems like a good approach to me.
> There is substantially less danger from independent reconfs (on the
> same or different hosts) than there is from parallel jobs in the
> current build deciding that something should be done and trying to do
> it at the same time.
GNU make does have a way to declare that a target (or multiple
targets) is not safe for parallel use: the special '.NOTPARALLEL'
target, as in a '.NOTPARALLEL: target' declaration.
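A makefile fragment along these lines illustrates the declaration (the target name is hypothetical; note that a bare '.NOTPARALLEL:' disables parallel execution for the whole makefile, and whether listing prerequisites restricts the effect to those targets depends on the GNU make version):

```make
# Hypothetical fragment: declare a target unsafe for parallel use.
.NOTPARALLEL: regen-configure

regen-configure:
	autoreconf -i
```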
Bob
--
Bob Friesenhahn
bfriesen@xxxxxxxxxxxxxxxxxxx, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Public Key, http://www.simplesystems.org/users/bfriesen/public-key.txt