Hi ossi,

Thanks for the response.

> i would recommend taking a step back and considering whether you're
> actually trying to fix the right problem.
>
> why are you checking in an auto-generated file, esp. given that it can
> be generated very quickly as you report?
>
> usually, this should be done by the build system.

Thanks for asking about this and forcing me to think more on the point.
To add a bit more context and explanation behind the current design
decisions:

The system I have described is a pipeline that supports a documentation
site for https://cuelang.org/. The architecture I have described is that
of the preprocessor, a tool which helps to automate the testing of
examples in documentation. Content authors write documentation in format
X; the preprocessor validates (and runs, as required) that content, and
produces format Y. Format Y is the input to a Hugo (https://gohugo.io/)
static site; Hugo processes format Y to produce format Z, the HTML that
then renders the site.

The generation of hashes that I referred to before relates to the
contents of format X. If, when the preprocessor runs, it detects a cache
hit (according to the cache files committed with the content), then
there is no need to re-run an example in some documentation.

We commit those hashes for now to sidestep needing to create and
maintain a shared preprocessor cache (a cache shared between CI systems
and users). We might move to a system like that in the future; for now
this feels like a sufficient setup.

The cache right now is very dumb: well-known files are updated with hash
values, and this is what creates the git conflicts. One thing we could
do to eliminate the conflicts altogether is commit a content-addressed
cache. That would have the problem of growing over time... but I think
we could solve that problem a different way.

> if the used build tool really is too dumb to integrate it into the build
> system, you might have luck with a post-checkout hook.
>
> you can also obtain existing hashes directly from git, with ls-tree,
> though this would again require some kind of integration with the build
> or checkout process.
>
> if you can't get around checking in the hash, i can think of hacking it
> using rebase --exec. basically, before each pick you'd create a commit
> that reverts the hash change (by checking out that path from the parent
> of the last commit that touched it, found automatically with git log),
> and after the pick you'd squash away the revert (using `reset HEAD~2 &&
> commit -C @{1}` or something to that effect). very ugly, very fragile.

Thanks. I have a working setup now using a combination of git rebase -x
and a script that I run whenever git rebase fails because of a conflict.
This works, but is not ideal for a couple of reasons:

1. Each interactive rebase is "littered" with exec lines, which should
   be an implementation detail.
2. I need to re-run the script manually when conflicts are detected.

Point 1 would be nicely addressed by a git hook that fires pre-commit
during a rebase. Point 2 could be solved by a custom merge driver, but
that's seemingly not possible right now:
https://lore.kernel.org/git/ZHXFdRnrwzNCA227@ugly/T/#m14b204843fea1fe9ff1c7500244049a43ed610eb.
Alternatively, it could be solved by another hook that fires when rebase
detects a conflict, one that attempts to "recover" the situation before
rebase actually fails.

Thanks again for asking whether we are solving the right problem here.
Writing my response above prompted me to think again about different
solutions.

Best,
Paul
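P.S. For anyone curious, the hash-based cache check I described earlier
amounts to something like the following sketch. The file names and the
choice of sha256sum here are illustrative only; the real preprocessor's
cache layout differs.

```shell
# Illustrative sketch of a "commit the hashes" cache check.
# File names and hashing tool are invented for this demo.
set -eu
workdir="$(mktemp -d)"
cd "$workdir"
printf 'some example content\n' > example.txt

# Hash the current content, and read the previously committed hash (if any).
current="$(sha256sum example.txt | cut -d' ' -f1)"
cached="$(cat example.txt.hash 2>/dev/null || true)"

if [ "$current" = "$cached" ]; then
    # Cache hit: the example's output is still valid, skip re-running it.
    echo "cache hit: example not re-run"
else
    # Cache miss: re-run the example, then record the new hash in the
    # well-known file that gets committed alongside the content.
    echo "cache miss: example re-run"
    printf '%s\n' "$current" > example.txt.hash
fi
```

On a second run with unchanged content, the comparison succeeds and the
example is skipped; it's updates to files like example.txt.hash that
collide during rebases.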
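For completeness, the git rebase -x workaround I mentioned has roughly
the following shape, shown here as a self-contained demo. The inline
hash-regeneration command stands in for the real preprocessor script,
and the repository contents are invented.

```shell
# Self-contained demo of folding regenerated hash files into each picked
# commit via rebase --exec. The "regen" command is a stand-in for the
# real preprocessor.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

regen='sha256sum content.txt | cut -d" " -f1 > content.txt.hash'

printf 'one\n' > content.txt
sh -c "$regen"
git add -A && git commit -qm 'base'

printf 'two\n' > content.txt
sh -c "$regen"
git add -A && git commit -qm 'change'

# After every pick, regenerate the hash file and amend the just-picked
# commit, so each rewritten commit carries a consistent hash file.
git rebase -q -x "$regen && git add -A && git commit -q --amend --no-edit" HEAD~1
```

This keeps every commit self-consistent, but it is exactly the setup
that litters the todo list with exec lines, and it still needs manual
intervention when a pick conflicts.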