Han-Wen Nienhuys <hanwen@xxxxxxxxxx> writes:

>> Is this because we have been assuming that in step 5. we can
>> "overwrite" (i.e. take over the name, implicitly unlinking the
>> existing one) the existing 0000001-00000001.ref with the newly
>> prepared one, which is not doable on Windows?
>
> No, the protocol for adding a table to the end of the stack is
> impervious to problems on Windows, as everything happens under lock,
> so there is no possibility of collisions.
>
>> We must prepare for two "randoms" colliding and retrying the
>> renaming step anyway, so would it make more sense to instead
>> use a non-random suffix (i.e. try "-0.ref" first, and when it
>> fails, readdir for 0000001-00000001-*.ref to find the latest
>> suffix and increment it)?
>
> This is a lot of complexity, and both transactions and compactions can
> always fail because they fail to get the lock, or because the data to
> be written is out of date.  So callers need to be prepared for a retry
> anyway.

Sorry, are we saying the same thing and reaching different
conclusions?

My question was, under the assumption that the callers need to be
prepared for a retry anyway,

 (1) would it be possible to use "seq" (or "take the max from the
     existing files and add one") as the "random number generator"
     for the ${random} part of your document, and

 (2) if the answer to the previous question is yes, would it result
     in a system whose behaviour is easier to understand for Git
     developers who observe what happens inside the .git directory,
     as they can immediately see that 1-1-47 is newer than 1-1-22,
     whereas 1-1-$random1 and 1-1-$random2 cannot be compared?