On Thu, 17 Jan 2008, Mark Junker wrote:
>
> Sorry, but you're using different characters that look the same. But Kevin's
> point was that it's a different thing whether you use two characters that
> look the same or the same character with different encodings.

But that's exactly the case he gave - 'ä' vs 'a¨' are exactly that: different strings (not even single characters: the second is actually a multi-character sequence) that just look the same.

You try to twist the argument by just claiming that they are the same "character". They aren't, unless you *define* character to be the same as "glyph". Of course, if you claim that, then you can always support your argument, but I claim that is a bogus and incorrect axiom to start with!

Too many people confuse "character" and "glyph". They are different.

See, for example

	http://en.wikipedia.org/wiki/Unicode

and notice the *many* places where they try to make that distinction between "character" and "glyph" clear (and also "code values", which are the actual bytes that encode a character).

See also

	http://en.wikipedia.org/wiki/Unicode_normalization

and realize that a Unicode sequence is a sequence of *characters* even if it is not normalized! Those things are still characters, even when they are the "simpler" non-combined characters.

You are trying to make a totally BOGUS argument, and you base it on the INCORRECT basis that the TWO characters 'a'+'¨' somehow aren't independent characters. They *are*. They are *different* characters from 'ä', even though they may be "Canonically equivalent" as a sequence.

The fact is that "equivalent" does not mean "same". Why can't people accept that?

		Linus
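A minimal sketch of that distinction, assuming Python 3 and its standard unicodedata module: the precomposed 'ä' and the two-character sequence 'a' + combining diaeresis compare unequal as strings, yet NFC normalization maps the decomposed form onto the precomposed one, which is what "canonically equivalent but not the same" means here.

	import unicodedata

	composed = "\u00e4"      # LATIN SMALL LETTER A WITH DIAERESIS: one character
	decomposed = "a\u0308"   # 'a' followed by COMBINING DIAERESIS: two characters

	print(composed == decomposed)            # False - different character sequences
	print(len(composed), len(decomposed))    # 1 2 - different lengths
	# Canonical equivalence: NFC composes the two-character sequence into one
	print(unicodedata.normalize("NFC", decomposed) == composed)   # True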