Hi Phillip,

On Fri, 19 Nov 2021, Phillip Wood wrote:

> On 18/11/2021 15:42, Jeff King wrote:
> > On Thu, Nov 18, 2021 at 04:35:48PM +0100, Johannes Schindelin wrote:
> >
> > > I think the really important thing to point out is that
> > > `xdl_classify_record()` ensures that the `ha` attribute is different
> > > for different text. AFAIR it even "linearizes" the `ha` values, i.e.
> > > they won't be all over the place but start at 0 (or 1).
> > >
> > > So no, I'm not worried about collisions. That would be a bug in
> > > `xdl_classify_record()` and I think we would have caught this bug by
> > > now.
> >
> > Ah, thanks for that explanation. That addresses my collision concern
> > from earlier in the thread completely.
>
> Yes, thanks for clarifying; I should have been clearer in my reply to
> Stolee. The reason I was waffling on about file sizes is that there can
> only be a collision if there are more than 2^32 unique lines. I think
> the minimum file size where that happens is just below 10GB, when one
> side of the diff has 2^31 lines, the other has 2^31 + 1 lines, and all
> the lines are unique.

Indeed, and as you pointed out, we already refuse to generate diffs for
such large amounts of data.

(For what it's worth, I totally agree with punting on such large data;
it would take an unreasonably long time to generate diffs for it
anyway.)

Ciao,
Dscho
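
P.S. Phillip's "just below 10GB" figure checks out: to get 2^31 distinct
lines, almost all of them need at least 4 bytes of content (there are
fewer than 17 million possible shorter lines), so with the trailing
newline each side weighs in at roughly 2^31 * (4 + 1) bytes, i.e. just
under 10 GiB.

P.P.S. For anybody following along who is unfamiliar with xdiff's record
classification, here is a toy sketch of the idea. Note that this is
illustrative C written for this mail, not the actual
`xdl_classify_record()` code; AFAIR the real thing (in xdiff/xprepare.c)
hashes each line first and compares full line contents only within a
hash bucket. The idea is that every distinct line is assigned the next
free sequential id, so two records compare equal if, and only if, their
ids are equal, and different text can never end up with the same `ha`:

-- snip --
#include <stdio.h>
#include <string.h>

struct classified {
	const char *line;
	size_t id;
};

/*
 * Toy table: fixed capacity, linear scan; the real code uses hash
 * buckets to find previously-seen lines quickly.
 */
static struct classified table[1024];
static size_t table_len;

/* Return the id of `line`, assigning the next free one if it is new. */
static size_t classify(const char *line)
{
	size_t i;

	for (i = 0; i < table_len; i++)
		if (!strcmp(table[i].line, line))
			return table[i].id;

	/* No bounds check; this is only a sketch. `line` must outlive the table. */
	table[table_len].line = line;
	table[table_len].id = table_len;
	return table_len++;
}

int main(void)
{
	const char *lines[] = { "foo", "bar", "foo", "baz" };
	size_t i;

	for (i = 0; i < 4; i++)
		printf("'%s' -> id %zu\n", lines[i], classify(lines[i]));
	return 0;
}
-- snap --

This prints the ids 0, 1, 0 and 2: the ids start at 0 and are dense,
which is what I meant by "linearized" above, and which allows later
stages to use them directly as array indices.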