On 10/12/06, Junio C Hamano <junkio@xxxxxxx> wrote:
> apodtele <apodtele@xxxxxxxxx> writes:
>
> > Instead of conditionally scaling the stat graph for large changes,
> > always scale it asymptotically: small changes shall appear without any
> > distortions.
>
> I am not sure if any non-linear scaling is worth pursuing.
> Suppose your change set has three files modified:
>
>     A adds 20 lines, deletes 10 lines
>     B adds 10 lines, deletes 20 lines
>     C adds 30 lines, deletes 30 lines
>
> For obvious reasons, the total length of A and B exceeds half of C,
> which looks quite misleading.
>
>     A | ++++++++++++--------
>     B | ++++++++------------
>     C | +++++++++++++++---------------
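To make the trade-off concrete, here is a small sketch of both behaviors. This is not the actual patch: the hyperbolic form f(n) = W*n / (W + n) and the asymptote W = 30 are inferred from the bar widths in the quoted example, and the linear variant with its illustrative width of 60 stands in for the current conditional scaling.

```python
# Hedged sketch, not the real patch: hyperbolic map with asymptote W,
# where W = 30 is inferred from the bar widths quoted above.
def asymptotic(n, asymptote=30.0):
    """Small counts map nearly 1:1; large counts compress toward W."""
    return int(round(asymptote * n / (asymptote + n)))

def linear(n, max_change, width=60):
    """Plain proportional scaling, keeping at least one character."""
    return max(1, n * width // max_change) if n else 0

# Reproduce the three bars from the quoted example:
for name, add, rem in [("A", 20, 10), ("B", 10, 20), ("C", 30, 30)]:
    print(f"{name} | {'+' * asymptotic(add)}{'-' * asymptotic(rem)}")
# A | ++++++++++++--------
# B | ++++++++------------
# C | +++++++++++++++---------------

# Under linear scaling, a single 300-line addition (the "man page")
# squashes 1-, 2-, and 5-line changes to one character each:
for n in (1, 2, 5, 300):
    print(f"{n:3} | {'+' * linear(n, 300)}")
```

Note how the asymptotic map keeps the 1-, 2-, and 5-liners distinguishable no matter how large the man page is, at the cost of compressing the large bars toward one another, which is exactly the trade-off argued below.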
Before my patch is completely forgotten, let me critique the current
approach.

Currently everything is great and beautiful unless one particular change
adds a couple of hundred lines, say, to a man page. With large changes
in play, small changes are squashed to a single character. Would you
argue that this scenario correctly represents the importance of man
pages? Would you say that it's not misleading that 1-, 2-, and 5-liners
all look the same as long as a man page is prominently shown? Moreover,
1-, 2-, and 5-liners may look different depending on the size of that
man page. The current approach is not invariant; it is, however,
normalized as needed. "Normalized" is good; "as needed" is bad.

With asymptotic scaling, 1-, 2-, and 5-liners are correctly represented
by the correct number of characters, regardless of the size of that man
page. 10- and 20-liners are only _slightly_ distorted. I cannot stress
this enough: the representation will not depend on the size of changes
in other files! You will still be able to tell where the truly large
changes happened, too! The price for this is that you won't be able to
precisely compare the sizes of added man pages. It is your choice...

-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html