On Wed, Oct 26, 2016 at 09:23:21AM -0700, Junio C Hamano wrote:

> >> +		/* Might the failure be due to O_NOATIME? */
> >> +		if (errno != ENOENT && (sha1_file_open_flag & O_NOATIME)) {
> >> +			sha1_file_open_flag &= ~O_NOATIME;
> >> +			continue;
> >> +		}
> >
> > We drop O_NOATIME, and end up with an empty flag field.
> >
> > But we will never have tried just O_CLOEXEC, which might have worked.
>
> Yes, doing so would smudge atime, so one question is which one
> between noatime or cloexec is more important to be done at open(2)
> time.

Yes, but the missing case is one where we know that O_NOATIME does not
work (but O_CLOEXEC does), so we know we have to smudge the atime.

Of the two flags, I would say CLOEXEC is the more important one to
respect, because it may actually impact correctness (e.g., leaking
descriptors to sub-processes), whereas O_NOATIME is purely a
performance optimization.

I actually wonder if it is worth carrying around the O_NOATIME hack at
all. Linus added it on 2005-04-23 via 144bde78e9; the aim was to reduce
the cost of opening loose object files. Some things have changed since
then:

  1. In June 2005, git learned about packfiles, which means we would do
     a lot fewer atime updates (rather than one per object access, we'd
     generally get one per packfile).

  2. In late 2006, Linux learned about "relatime", which is generally
     the default on modern installs. So performance around atime
     updates is a non-issue there these days.

All the world isn't Linux, of course, but I can't help but feel that
atime performance hackery is something that belongs at the system
level, not in individual applications.

So I don't have hard numbers, but I'd be surprised if O_NOATIME is
really buying us anything these days.

-Peff
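
A minimal sketch of the fallback order being argued for above, assuming
hypothetical names (open_loose_object, extra_flags); the real git code
differs, this just illustrates dropping O_NOATIME before ever giving up
O_CLOEXEC, so the "just O_CLOEXEC" case does get tried:

	#include <fcntl.h>
	#include <errno.h>

	#ifndef O_NOATIME
	#define O_NOATIME 0
	#endif
	#ifndef O_CLOEXEC
	#define O_CLOEXEC 0
	#endif

	/*
	 * Illustrative sketch only. Like the original, the flag set is
	 * static so a flag found to be unsupported stays dropped for
	 * later calls.
	 */
	static int open_loose_object(const char *path)
	{
		static int extra_flags = O_NOATIME | O_CLOEXEC;

		for (;;) {
			int fd = open(path, O_RDONLY | extra_flags);
			if (fd >= 0 || errno == ENOENT)
				return fd;

			/*
			 * The failure may come from an unsupported flag;
			 * drop the purely-performance O_NOATIME first so
			 * the correctness-relevant O_CLOEXEC is kept as
			 * long as possible.
			 */
			if (extra_flags & O_NOATIME) {
				extra_flags &= ~O_NOATIME;
				continue;
			}
			if (extra_flags & O_CLOEXEC) {
				extra_flags &= ~O_CLOEXEC;
				continue;
			}
			return -1;	/* failed even with plain O_RDONLY */
		}
	}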