Ivan Baldo <ibaldo@xxxxxxxxx> writes:

> I know this is not standard usage of git, but I need a way to have
> more stable dates and times in the files in order to avoid rsync
> checksumming.

Would you care to elaborate a bit more about the use case?  From
what you wrote, I would assume:

 - The source of the rsync transfer is a git working tree.  It often
   has the checkout of the latest and greatest version, but during
   development, it may switch to an older commit (e.g. to find where
   a regression occurred) or a not-yet-ready commit (e.g. work in
   progress that is not given to upstream).  You check out the
   version you want to sync to the destination before initiating
   rsync.

 - The destination of the rsync transfer is meant to serve as a
   back-up of the latest and greatest, a periodical snapshot of a
   branch, etc., which is NOT controlled by git, and the transfer
   does not happen in the reverse direction [*1*].

Because the working tree of the source repository is used to check
out different versions between rsync sessions, files that did not
change between the commit you sync'ed to the destination the last
time and the commit you are about to sync may still have been
touched and have different timestamps, requiring rsync to check
their contents.

And as a workaround, you are willing to change the workflow to
"touch" the working tree files, immediately before you run the next
rsync, in a predictable way, so that a file whose contents did not
change since the last rsync session would have the same timestamp
it had back then.

This may break your build the next time you run "make" in the
source working tree (because object files that are excluded from
your rsync may have newer timestamps than the corresponding sources
even when they must be recompiled due to your "touch"ing), but you
are willing to pay the cost of, say, "make clean" after "touch"ing.

Is that the kind of use case you have around "rsync"?

To the question "what is the time this file was last modified?",
there is no simple and cheap answer that is easy to explain to
end-users, unless your development history is completely linear
[*2*].  The loop you showed would be the right one in a linear
history, and with the recent development to record which paths were
changed in each commit in the commit-graph data structure, the
script should run a lot faster than it would without that data.

[Footnote]

*1* Otherwise, you'd be just mirror-fetching from the source
    repository.  If that can be arranged, running "git pull
    --ff-only" on the destination side to update from the source
    side would be a lot more efficient than running rsync, I would
    imagine.

*2* In a history with merges, two or more branches can touch the
    file in parallel development at different times, and then the
    resulting parallel histories get merged into a single history.
    When two or more of these parallel histories gave the file in
    question an identical content at different times, and the merge
    result was recorded as the same content, you'd need to follow
    ALL the paths and compare the timestamps of these commits to
    pick one (which one? the oldest one? the newest one? does the
    order of parents in the merge matter?).
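
P.S.  To make the above concrete, here is a minimal sketch of the
kind of "touch" loop I had in mind (not necessarily the exact loop
you posted).  It assumes a linear history, a GNU "touch" whose "-d"
understands ISO 8601 dates, and pathnames without embedded newlines
or leading whitespace:

    git ls-files |
    while read -r path
    do
        # Committer date of the last commit that touched this path;
        # in a linear history this is the "last modified" time.
        ts=$(git log -1 --format=%cI -- "$path") &&
        touch -d "$ts" -- "$path"
    done

Running "git log -1 -- <path>" once per file is exactly the part
that the changed-path data in the commit-graph is meant to speed
up, which is why I'd expect the loop to become noticeably cheaper
with it.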