On Wed, 16 Jan 2008, Kevin Ballard wrote:
>
> My understanding is that normalization is there to help the computer. That
> doesn't give it any semantic meaning, because all normal forms of a given
> string still represent the exact same string to the user.

THAT IS NOT TRUE!

How the hell does the computer know what the string means? Hint: it does
not. The fact is, the user may use a non-normalized string on purpose.
It's not your place to say that the user is wrong.

Your "understanding" is simply wrong. Two strings are *different* if they
are [un]normalized differently. Really. The exact same way the words
"Polish" and "polish" are different, just because they are capitalized
differently.

> The argument for case insensitivity is different than the argument for
> normalization. I certainly hope you understand why they are different
> arguments, or there's really no point in going further.

You do not understand. In *order* to do case-insensitivity, you generally
need to normalize (and do other things too - normalization is just *one*
of the things you need to do).

So if you are a case-insensitive filesystem, then normalization is sane.
But if you aren't, then there is no reason to normalize.

> You're right, sometimes the sequence matters. As in key sequences. But we're
> not talking about key sequences, we're talking about strings.

You define "string" to be something totally made-up. In your world,
"string" means "normalized". BUT IT'S NOT TRUE!

You define normalization to be a property of strings, without any actual
backing for why that would be. The fact is, *looks the same* is very very
different from *is the same*. But you seem to be too stupid to understand
the difference.

		Linus
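
To make the byte-level point concrete, here is a minimal sketch using
Python's standard unicodedata module (the accented character is just an
illustrative example, not taken from the mail). The NFC and NFD forms of
the "same" visible character render identically but are different byte
sequences, so anything that compares names byte-for-byte - as git does
with paths - sees two different strings:

    import unicodedata

    # "e" with acute accent: one precomposed code point (NFC) vs.
    # "e" followed by a combining accent (NFD). They look the same
    # on screen, but they are not the same string.
    nfc = unicodedata.normalize("NFC", "e\u0301")   # -> '\u00e9'
    nfd = unicodedata.normalize("NFD", "\u00e9")    # -> 'e\u0301'

    print(nfc == nfd)            # False
    print(nfc.encode("utf-8"))   # b'\xc3\xa9'
    print(nfd.encode("utf-8"))   # b'e\xcc\x81'

That is the "*looks the same* vs. *is the same*" distinction above: only a
layer that deliberately normalizes (e.g. a case- and form-insensitive
filesystem) will treat the two as equal.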