Re: Why the normative form of IETF Standards is ASCII

Masataka Ohta <mohta at necom830 dot hpcl dot titech dot ac dot jp> wrote:

>>> As many Japanese type the Yen sign when they actually want to input a backslash, the JIS character for the Yen sign is converted to the Unicode character for the Yen sign, which is not the backslash that was intended.

>> I think this means that the user's kludge, in typing a yen sign to get a backslash, is not matched by Unicode with an equal and opposite kludge of converting the yen sign back to a backslash. I guess in the 1960s one could consider this a fault.

> That is simply a reality though it does not match your opinion.

What opinion? That it's not a fault that Unicode assigns only one character to each code point?

> It should also be noted that, in the Japanese encoding JIS C 6226, the backslash and the Yen sign were already separated in 1978, which means Unicode adds nothing.

If that is the case, why do users continue to enter one character and expect it to be converted to another?

>> Why don't we ask one of the scores of software vendors that have deployed Unicode, at least as "fully" as this thread is about, just how "disastrous" their experience has been and how much better things would be if they had stuck with ISO 2022 instead?

> See above.

See WHAT above? I have quoted the entire text to which you responded. Neither you nor I wrote anything "above" about whether vendors have had a "disastrous" experience with Unicode that would have been better with ISO 2022. I appreciate your penchant for brevity, but this made no sense.

>>> Many Kanji characters in JIS are displayed with a Japanese font, while many other Kanji characters not in JIS are displayed with some Chinese font, because of the lack of information in Unicode, which has been obvious since long before I wrote RFC 1815.

>> Is it your opinion that inadequate font coverage is the fault of the character encoding?

> It's an explanation of the reality visible to us Japanese.

Inadequate font coverage is not the fault of the character encoding.

As a side note, if the complaint is that "Kanji characters not in JIS" are displayed in the wrong font, then how does it help to use ISO-2022-JP, where those characters cannot be represented at all, or full ISO 2022, in which the switch to another national character set would trigger a font change anyway?

>> I do find this difficult to understand.

> It merely means that you don't have enough expertise to discuss Japanese and Chinese characters.

> That's fine, as long as you don't discuss Japanese and Chinese characters.

Again, the argument is that if I disagree with you, it must be due to my ignorance.

But I will point out that I didn't start this discussion from the standpoint of Japanese vs. Chinese, and I have not attempted to rely on my own knowledge of Japanese or Chinese. Instead, I have relied on the experts within (and contributing to) UTC and WG2, who have said repeatedly that the basic identity of a Han/Kanji/Hanja character is independent of language, and that the end user's choice of fonts is paramount for correct styling.

> A charset description should be understandable by those who can use it, but not necessarily beyond.

You and I have a fundamental disagreement here which cannot be resolved. You are saying that I should not be able to see or use characters used only in languages that I do not understand. I claim that a universal character encoding is beneficial to all, and that it is my problem if I don't have adequate fonts or knowledge to read the text.

--
Doug Ewell  |  Thornton, Colorado, USA  |  http://www.ewellic.org
RFC 5645, 4645, UTN #14  |  ietf-languages @ http://is.gd/2kf0s
