> There, and for the specific case of Unicode, probably we
> disagree.  Keep in mind that a number of contemporary operating
> systems use UTF-16, or even UTF-32, with some byte ordering,
> internally.  They typically know they are doing that, if only to
> be able to do an orderly conversion to UTF-8 for putting over
> the wire or from UTF-8 or ASCII for incoming data.

Generally, the Web (and the need for MIME sniffing) has shattered any
hope I might have had there.  The problem really is that the FTP
server is disconnected from all the application knowledge that made
the file happen.  And that is very different from the situation…

> […] we do not allow
> text/plain charset="I don't have a clue what this is or how it
> was encoded but I think it is text".  And, if the originating
> system knows enough to specify that a body part is text/plain
> and to specify a charset, …

… where it is not the system, it is the application (mail client),
often with some help from the user.  E.g., we have much better
metadata on clipboard data than on files.  For files, the
application can use heuristics whose failures at least *can* be
corrected by someone sitting in front of the screen.  (But then,
even with that I still get tons of mojibake in e-mail.)

Grüße, Carsten
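
P.S.: To make "heuristics" a bit more concrete, here is a minimal
sketch (Python, purely illustrative; the function name and the
fallback choice are mine, not any particular mail client's) of what
an application can do when a file arrives with no charset metadata
at all:

    import locale

    def guess_text_encoding(data: bytes) -> str:
        """Guess a charset for an unlabeled file -- the kind of
        heuristic an application can afford, because a human can
        still override a wrong guess."""
        # 1. Byte-order marks are the only near-certain signal.
        #    (Check UTF-32 before UTF-16: FF FE 00 00 also starts
        #    with the UTF-16 LE BOM.)
        if data.startswith(b'\xef\xbb\xbf'):
            return 'utf-8-sig'
        if data.startswith((b'\xff\xfe\x00\x00', b'\x00\x00\xfe\xff')):
            return 'utf-32'
        if data.startswith((b'\xff\xfe', b'\xfe\xff')):
            return 'utf-16'
        # 2. Strict UTF-8 validation rarely passes by accident on
        #    text that was encoded some other way.
        try:
            data.decode('utf-8', errors='strict')
            return 'utf-8'
        except UnicodeDecodeError:
            pass
        # 3. Otherwise fall back to the locale's charset -- the step
        #    that produces mojibake when the guess is wrong.
        return locale.getpreferredencoding(False) or 'latin-1'

An FTP server could run the same checks, of course, but it has nobody
sitting in front of the screen to notice when step 3 guesses wrong.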