It appears that almost all of the locking calls in the current code use hold_lock_file_for_update(), which translates
into a lock request with a zero timeout.
This effectively means that for certain classes of usage, you can't use git concurrently without either external locking
or retry logic. It would be nice to see a global option "--lock-timeout" that would request a specific non-zero default
timeout for many of those operations.
Even the option of a couple-second timeout would eliminate most typical concurrency issues and simplify some
automated use cases.
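Something along these lines is what I have in mind (hypothetical invocation, since the option doesn't exist today;
the exact spelling and units would obviously be up for discussion):

# wait up to ~2 seconds for index.lock instead of failing immediately
git --lock-timeout=2 add somefile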
Horrible/contrived example, but it demonstrates the issue:
for f in `seq 1 150`; do touch $f; (git add $f &); done
You'll get a whole bunch of:
fatal: Unable to create '/tmp/dummy/.git/index.lock': File exists.
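For reference, the kind of external-locking workaround I mean looks roughly like this (just a sketch, assuming
flock(1) from util-linux is available; the lock file path is arbitrary):

for f in `seq 1 150`; do
  touch $f
  # serialize each git invocation on a separate lock file so they
  # never race on .git/index.lock
  (flock /tmp/dummy/.gitadd.lock git add $f &)
done

It works, but it's exactly the sort of external bookkeeping a built-in timeout would make unnecessary.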
-- Nathan
------------------------------------------------------------
Nathan Neulinger nneul@xxxxxxxxxxxxx
Neulinger Consulting (573) 612-1412