On Fri, Mar 13, 2020 at 12:18 PM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> "Elijah Newren via GitGitGadget" <gitgitgadget@xxxxxxxxx> writes:
>
> > From: Elijah Newren <newren@xxxxxxxxx>
> >
> > Several tests wanted to verify that files were actually modified by a
> > merge, which it would do by checking that the mtime was updated. In
> > order to avoid problems with the merge completing so fast that the mtime
> > at the beginning and end of the operation was the same, these tests
> > would first set the mtime of a file to something "old". This "old"
> > value was usually determined as current system clock minus one second,
> > truncated to the nearest integer. Unfortunately, it appears the system
> > clock and filesystem clock are different and comparing across the two
> > runs into race problems resulting in flaky tests.
>
> Good observation (and if we were doing networked filesystems, things
> would be worse).
>
> > So, instead of trying to compare across what are effectively two
> > different clocks, just avoid using the system clock. Any new updates to
> > files have to give an mtime at least as big as what is already in the
> > file, so define "old" as one second before the mtime found in the file
> > before the merge starts.
>
> Is there a reason why we prefer as small an offset as possible? I
> am not objecting to the choice of 1 second, but am curious if
> anything bad happens if we used a larger offset, say, 2 hours.

I was thinking about putting in some magic larger number, but was
wondering if I needed to explain it or if people might spend cycles
thinking about the significance of the random number selected.
However, a larger value might be useful in the face of leap seconds
and ntp time updates, so I should probably move in that direction.
Any preferences on whether I should pick something like 3600 (large
but easily recognizable), something more round like 10000, or
something else?
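
To make the two schemes concrete, here is a rough sketch of the shape
of the test code involved (illustrative only -- "file" and the branch
name are made up, I'm going from memory on test-tool chmtime's syntax,
and the actual patch may differ in its details):

	# Old, flaky scheme: "old" is derived from the system clock
	# (one second before "now"), which races with the filesystem
	# clock that stamps the merge's writes.
	test-tool chmtime =-1 file &&

	# New scheme: "old" is derived from the file's own mtime, so
	# only one clock is involved.  The open question is just how
	# far back to move it (1, 3600, 10000, ...).
	test-tool chmtime -1 file &&
	test-tool chmtime --get file >old-mtime &&

	git merge other-branch &&

	# The merge must have rewritten the file, so its mtime should
	# now be strictly newer than the "old" value we set.
	test-tool chmtime --get file >new-mtime &&
	test $(cat old-mtime) -lt $(cat new-mtime)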