On Sat, Mar 29, 2008 at 10:43:40PM +0100, Robin Rosenberg wrote:

> First, that is an unlikely sequence even at random. For any "real"
> string it simply won't happen, even in this context. Try scanning
> everything you can think of and see if you find such a sequence that
> is not actually UTF-8.

That's the problem I was mentioning: "everything I can think of" is
basically just us-ascii with a few accented characters. I don't know
how, e.g., Japanese texts will fare with such a test.

> > But over all commonly used encodings, what is the probability in an
> > average text of that encoding that it contains valid UTF-8? For
> > example, I have no idea what patterns can be found in EUCJP.
>
> See here http://www.ifi.unizh.ch/mml/mduerst/papers/PDF/IUC11-UTF-8.pdf

Thanks, that is an interesting read. And he seems to indicate that you
can guess with a reasonable degree of success. But a few points on that
work:

  - He has a specific methodology for guessing, which is more elaborate
    than what you proposed. So to get his results, you would need to
    implement his method. Hopefully, if perl does have a "guess if this
    looks like utf8" method, it uses a similar scheme.

  - He does admit that the probabilities are difficult to assess for
    some encodings, and that they will vary from language to language.
    See page 22:

      If a specific language does not use all three letters (a single
      letter on the left and the corresponding two letters on the
      right), then this combination presents no danger. Further checks
      can then be made with a dictionary, although there is the problem
      that a dictionary never contains all possible words, and that of
      course resource names don't necessarily have to be words.

  - He mentions Latin, Cyrillic, and Hebrew encodings. I note the
    conspicuous absence of any Asian languages.

> Note that a random string is a randomly generated string. Not a random
> string from the set of actually existing strings.

Sure. But looking at random strings isn't terribly useful; there is a
non-uniform distribution over the set of strings, dependent on the
_actual_ encoding. So there are going to be "good" encodings that will
guess well, and there will be "bad" encodings that might not (and by
"will", I mean "there may be"; that is the very thing I am saying we
don't have good evidence for).

-Peff
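
P.S. For concreteness, here is a rough sketch of the well-formedness
scan we keep talking around. This is just my illustration, not git's
code and not the paper's method; the paper's scheme layers statistical
and dictionary checks on top of something like this. Passing only
means the bytes _could_ be UTF-8, which is exactly where the
probability question comes in.

  #include <stdio.h>
  #include <stddef.h>

  static int is_valid_utf8(const unsigned char *s, size_t len)
  {
          /* smallest code point that needs 1..3 continuation bytes */
          static const unsigned long min_cp[] = { 0, 0x80, 0x800, 0x10000 };
          size_t i = 0;

          while (i < len) {
                  unsigned char c = s[i];
                  size_t need, j;
                  unsigned long cp;

                  if (c < 0x80) {            /* plain ASCII */
                          i++;
                          continue;
                  } else if ((c & 0xe0) == 0xc0) {
                          need = 1; cp = c & 0x1f;
                  } else if ((c & 0xf0) == 0xe0) {
                          need = 2; cp = c & 0x0f;
                  } else if ((c & 0xf8) == 0xf0) {
                          need = 3; cp = c & 0x07;
                  } else {
                          return 0;  /* stray continuation or 0xfe/0xff */
                  }

                  if (i + need >= len)
                          return 0;  /* sequence runs off the end */

                  for (j = 1; j <= need; j++) {
                          if ((s[i + j] & 0xc0) != 0x80)
                                  return 0;  /* not a continuation byte */
                          cp = (cp << 6) | (s[i + j] & 0x3f);
                  }

                  if (cp < min_cp[need] ||              /* overlong form */
                      (cp >= 0xd800 && cp <= 0xdfff) || /* UTF-16 surrogate */
                      cp > 0x10ffff)                    /* beyond Unicode */
                          return 0;

                  i += need + 1;
          }
          return 1;
  }

  int main(void)
  {
          /* "café": 0xe9 is é in Latin-1; 0xc3 0xa9 is é in UTF-8 */
          const unsigned char latin1[] = { 'c', 'a', 'f', 0xe9 };
          const unsigned char utf8[]   = { 'c', 'a', 'f', 0xc3, 0xa9 };

          printf("latin-1 passes: %d\n", is_valid_utf8(latin1, sizeof(latin1)));
          printf("utf-8 passes:   %d\n", is_valid_utf8(utf8, sizeof(utf8)));
          return 0;
  }

A lone Latin-1 "é" fails outright, since 0xe9 demands two continuation
bytes that aren't there. But the open question above is how often the
byte pairs and triples of, say, EUCJP happen to satisfy these rules.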