Re: [PATCH v2 1/3] patch-id: make it stable against hunk reordering

Junio C Hamano <gitster@xxxxxxxxx> writes:

>> @@ -99,6 +116,18 @@ static int get_one_patchid(unsigned char *next_sha1, git_SHA_CTX *ctx, struct st
>>  			if (!memcmp(line, "@@ -", 4)) {
>>  				/* Parse next hunk, but ignore line numbers.  */
>>  				scan_hunk_header(line, &before, &after);
>> +				if (stable) {
>> +					if (hunks) {
>> +						flush_one_hunk(result, &ctx);
>> +						memcpy(&ctx, &header_ctx,
>> +						       sizeof ctx);
>> +					} else {
>> +						/* Save ctx for next hunk.  */
>> +						memcpy(&header_ctx, &ctx,
>> +						       sizeof ctx);
>> +					}
>> +				}
>> +				hunks++;
>>  				continue;
>>  			}
>>  
>> @@ -107,7 +136,10 @@ static int get_one_patchid(unsigned char *next_sha1, git_SHA_CTX *ctx, struct st
>>  				break;
>>  
>>  			/* Else we're parsing another header.  */
>> +			if (stable && hunks)
>> +				flush_one_hunk(result, &ctx);
>>  			before = after = -1;
>> +			hunks = 0;
>>  		}
>>  
>>  		/* If we get here, we're inside a hunk.  */
>> @@ -119,39 +151,46 @@ static int get_one_patchid(unsigned char *next_sha1, git_SHA_CTX *ctx, struct st
>>  		/* Compute the sha without whitespace */
>>  		len = remove_space(line);
>>  		patchlen += len;
>> -		git_SHA1_Update(ctx, line, len);
>> +		git_SHA1_Update(&ctx, line, len);
>>  	}
>>  
>>  	if (!found_next)
>>  		hashclr(next_sha1);
>>  
>> +	flush_one_hunk(result, &ctx);
>
> What I read from these changes is that you do not do anything
> special about the per-file header, so two non-overlapping patches,
> each with a single hunk touching the same path, concatenated
> together would not result in the same patch-id as a single patch
> that has the same two hunks.  Which would break your earlier 'Yes,
> reordering only the hunks will not make sense, but before each hunk
> you could insert the same "diff --git a/... b/..." to make them a
> concatenation of patches that touch the same file', I would think.
>
> Is that what we want to happen?  Or is my reading mistaken?

Heh, I was blind---copying the context into header_ctx and then
restoring it in preparation for the next hunk is exactly what
duplicates the per-file header sum into each and every hunk, so you
are already doing that.
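
For what it's worth, the idea can be sketched outside of C: hash each
hunk together with a fresh copy of the per-file header, then combine
the per-hunk digests with byte-wise addition with carry, which is
commutative, so reordering hunks (or concatenating single-hunk patches
that repeat the same header) yields the same id.  This is only an
illustrative sketch in Python, not the actual patch-id.c code; the
function name and inputs are made up for the example.

```python
import hashlib

def stable_patch_id(header, hunks):
    """Illustrative sketch (not git's code): order-independent patch-id.

    Each hunk is hashed with its own copy of the per-file header
    (mirroring the header_ctx save/restore in the patch), and the
    20-byte SHA-1 digests are summed byte-wise with carry, so the
    combination does not depend on hunk order.
    """
    result = [0] * 20
    for hunk in hunks:
        ctx = hashlib.sha1()
        ctx.update(header)   # per-file header duplicated into every hunk
        ctx.update(hunk)
        digest = ctx.digest()
        carry = 0
        for i in range(20):
            carry += result[i] + digest[i]
            result[i] = carry & 0xFF  # keep low byte
            carry >>= 8               # propagate carry to next byte
    return bytes(result).hex()
```

With this, swapping the two hunks of a patch, or splitting them into
two concatenated single-hunk patches that each repeat the "diff --git"
header, produces the same id.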




