Hi folks,

A user today showed me a situation where `git diff` (and `git blame`) seemed to be doing the wrong thing: where two big blocks of text were removed from a file, leaving 4 lines untouched in the middle, the default diff was reporting all three regions as removed, with those 4 "untouched" lines shown as *added* in the same place. We compared with another diffing tool, p4merge, and that was showing "the right thing": two deleted regions with the untouched lines in between.

We realized that `--minimal` does "the right thing" in git, and that you can set the `diff.algorithm` config to use it by default in `git diff` (although `git blame` doesn't currently support that config... a small enhancement opportunity there :) ), but that raises two questions:

1. Is there any practical reason for any user *not* to set `diff.algorithm` to `minimal`? Has anyone analyzed the performance cost (or the "diff readability cost", if that is a thing) of "minimal" vs the default?

2. If "minimal" is just better, and its higher computational cost is effectively trivial, then why wouldn't we change the default? I suspect this comes down to situations where git computes big diffs behind the scenes...? But I don't know offhand.

Any feedback would be most appreciated!

Thanks,
Tao
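
P.S. In case it helps anyone reproduce the comparison, this is roughly what we did (the path is a placeholder for the affected file):

    # Compare the default (Myers) output against --minimal on the same file:
    git diff -- path/to/file
    git diff --minimal -- path/to/file

    # Opt into minimal as the default algorithm for git diff:
    git config --global diff.algorithm minimal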