On Tue, Sep 7, 2021 at 11:44 PM Junio C Hamano <gitster@xxxxxxxxx> wrote:
>
> Neeraj Singh <nksingh85@xxxxxxxxx> writes:
>
> > BTW, I updated the github PR to enable batch mode everywhere, and all
> > the tests passed, which is good news to me.
>
> I doubt that fsyncObjectFiles is something we can reliably test in
> CI, either with the new batched thing or with the original "when we
> close one, make sure the changes hit the disk platter" approach.  So
> I am not sure what conclusion we should draw from such an experiment,
> other than "ok, it compiles cleanly."  After all, unless we cause
> system crashes, what we thought we have written and close(2) would
> be seen by another process that we spawn after that, with or without
> sync, no?

The main failure mode I was worried about is that some test or other
part of Git relies on a loose object being immediately available after
it is added to the ODB. With batch mode, the loose objects aren't
actually available until the bulk checkin is unplugged.

I agree that it is not easy to test whether the data is actually going
to durable storage at the expected time. FWIW, I did take a disk IO
trace on Windows to verify that we are issuing disk writes and flushes
at the right time. But that's a one-time test that would be hard to
automate.
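
To make the visibility concern concrete, here is a minimal,
self-contained sketch (not Git's actual implementation) of the
plug/unplug pattern: any lookup that runs between the plug and the
unplug misses objects queued under batch mode. The plug/unplug names
mirror the bulk-checkin terminology; add_loose_object() and the
bookkeeping below are purely illustrative.

	#include <stdio.h>

	static int bulk_checkin_plugged;
	static int pending_objects;

	static void plug_bulk_checkin(void)
	{
		bulk_checkin_plugged = 1;
	}

	static void add_loose_object(const char *oid)
	{
		if (bulk_checkin_plugged) {
			/*
			 * Under batch mode the object is written without an
			 * immediate fsync and is not yet renamed into its
			 * final path, so a reader looking for it right now
			 * would not find it in the ODB.
			 */
			pending_objects++;
			printf("queued %s (not yet visible)\n", oid);
		} else {
			/* Legacy path: sync and expose the object at once. */
			printf("wrote and synced %s\n", oid);
		}
	}

	static void unplug_bulk_checkin(void)
	{
		/*
		 * One batched flush covers every queued object; only after
		 * this point do they all become visible to other readers.
		 */
		printf("flushed %d object(s); now visible\n", pending_objects);
		pending_objects = 0;
		bulk_checkin_plugged = 0;
	}

	int main(void)
	{
		plug_bulk_checkin();
		add_loose_object("abc123");  /* a lookup here would miss it */
		add_loose_object("def456");
		unplug_bulk_checkin();       /* objects become available here */
		return 0;
	}

A test that adds an object and immediately expects to read it back as a
loose object would only be correct if it runs after the unplug, which is
exactly the class of assumption the all-tests-pass run was meant to
smoke out.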