Thanks.  Looking at some other cases after applying your patch, I
noticed one thing I really like that your version does over what
RCS merge does.

With RCS merge, a run of lines that is modified the same way in both
branches appears twice, like this:

    <<< orig
    alpha
    bravo
    charlie
    ...
    x-ray
    yankee
    zulu
    ===
    alpha
    bravo
    charlie
    ...
    x-ray
    yankee
    zebra
    >>> new

The common part at the beginning (or at the end, for that matter)
can be hoisted outside the conflict markers, to produce:

    alpha
    bravo
    charlie
    ...
    x-ray
    yankee
    <<< orig
    zulu
    ===
    zebra
    >>> new

and your version seems to get this right.

When I had to deal with this kind of conflict, I ended up splitting
the buffer in two and running M-x compare-windows to find the true
differences between the choices.  It was frustrating.  (I admit a
big reason is that I do not normally work in an X environment and do
not tend to use xdiff -U or Kompare.)

This is especially noticeable when recreating the diff-delta.c merge
conflict in commit b485db98.  It's fun to see this large hunk
reduced down to only two lines ;-).

<<<<<<< HEAD/diff-delta.c
	/*
	 * Determine a limit on the number of entries in the same hash
	 * bucket.  This guard us against patological data sets causing
	 * really bad hash distribution with most entries in the same hash
	 * bucket that would bring us to O(m*n) computing costs (m and n
	 * corresponding to reference and target buffer sizes).
	 *
	 * The more the target buffer is large, the more it is important to
	 * have small entry lists for each hash buckets.  With such a limit
	 * the cost is bounded to something more like O(m+n).
	 */
	hlimit = (1 << 26) / trg_bufsize;
	if (hlimit < 16)
		hlimit = 16;

	/*
	 * Now make sure none of the hash buckets has more entries than
	 * we're willing to test.  Otherwise we short-circuit the entry
	 * list uniformly to still preserve a good repartition across
	 * the reference buffer.
	 */
	for (i = 0; i < hsize; i++) {
		if (hash_count[i] < hlimit)
			continue;
		entry = hash[i];
		do {
			struct index *keep = entry;
			int skip = hash_count[i] / hlimit / 2;
			do {
				entry = entry->next;
			} while(--skip && entry);
			keep->next = entry;
		} while(entry);
	}
	free(hash_count);

	return hash;
=======
	/*
	 * Determine a limit on the number of entries in the same hash
	 * bucket.  This guard us against patological data sets causing
	 * really bad hash distribution with most entries in the same hash
	 * bucket that would bring us to O(m*n) computing costs (m and n
	 * corresponding to reference and target buffer sizes).
	 *
	 * The more the target buffer is large, the more it is important to
	 * have small entry lists for each hash buckets.  With such a limit
	 * the cost is bounded to something more like O(m+n).
	 */
	hlimit = (1 << 26) / trg_bufsize;
	if (hlimit < 16)
		hlimit = 16;

	/*
	 * Now make sure none of the hash buckets has more entries than
	 * we're willing to test.  Otherwise we short-circuit the entry
	 * list uniformly to still preserve a good repartition across
	 * the reference buffer.
	 */
	for (i = 0; i < hsize; i++) {
		if (hash_count[i] < hlimit)
			continue;
		entry = hash[i];
		do {
			struct index *keep = entry;
			int skip = hash_count[i] / hlimit / 2;
			do {
				entry = entry->next;
			} while(--skip && entry);
			keep->next = entry;
		} while(entry);
	}
	free(hash_count);

	return hash-1;
>>>>>>> 38fd0721d0a2a1a723bc28fc0817e3571987b1ef/diff-delta.c
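
To make the hoisting idea concrete, here is a minimal sketch in
plain C.  This is not git's actual xdl_merge code (which works on
xdiff records, not string arrays); the line arrays and the
"ours"/"theirs" labels are made up for illustration.  Given the two
sides of a conflicted hunk, it counts the lines that match at the
top and at the bottom, emits those outside the markers, and leaves
only the true difference inside:

    /*
     * Sketch of hoisting common leading/trailing lines out of a
     * conflicted hunk, so only the real difference stays between
     * the conflict markers.
     */
    #include <stdio.h>
    #include <string.h>

    static void emit(const char **lines, int from, int to)
    {
    	int i;
    	for (i = from; i < to; i++)
    		puts(lines[i]);
    }

    static void print_hoisted(const char **ours, int nours,
    			  const char **theirs, int ntheirs)
    {
    	int lead = 0, trail = 0;

    	/* count common leading lines */
    	while (lead < nours && lead < ntheirs &&
    	       !strcmp(ours[lead], theirs[lead]))
    		lead++;

    	/* count common trailing lines, without overlapping the lead */
    	while (trail < nours - lead && trail < ntheirs - lead &&
    	       !strcmp(ours[nours - 1 - trail],
    		       theirs[ntheirs - 1 - trail]))
    		trail++;

    	emit(ours, 0, lead);                 /* hoisted prefix */
    	puts("<<<<<<< ours");
    	emit(ours, lead, nours - trail);     /* what really differs */
    	puts("=======");
    	emit(theirs, lead, ntheirs - trail);
    	puts(">>>>>>> theirs");
    	emit(ours, nours - trail, nours);    /* hoisted suffix */
    }

    int main(void)
    {
    	const char *ours[]   = { "alpha", "bravo", "charlie",
    				 "yankee", "zulu" };
    	const char *theirs[] = { "alpha", "bravo", "charlie",
    				 "yankee", "zebra" };

    	print_hoisted(ours, 5, theirs, 5);
    	return 0;
    }

Running it on the alpha...zulu/zebra example above prints the
hoisted form shown earlier, with only the zulu/zebra pair left
between the markers.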