On Mon, 14 Sep 2015 01:32:50 +0100 Ben Hutchings <ben@xxxxxxxxxxxxxxx> wrote:

> > I guess I have to ask, though: doesn't it seem that having the docs
> > produced according to the current locale is the Right Thing to do?
> > Users have their locale set as it is for a reason; it seems like the
> > production of textual documents should respect their choice.
> >
> > Am I missing something here?
>
> Yes - the locale's character encoding applies to plain text, but rich
> text formats can have a locale-independent encoding which the viewer
> will automatically convert to the current locale's encoding.
>
> For HTML, the document encoding can be explicit in the document header
> (and is, in this case).
>
> Manual pages were already consistently encoded in UTF-8, as this is the
> default behaviour of DocBook-XSL (and is what man-db prefers as input).
>
> PDF and Postscript documents have arbitrary and explicit mappings from
> character numbers (or names) to glyphs, and PDF documents normally have
> a mapping from glyphs back to Unicode code points to support searching
> and copying text.

OK, I guess you've talked me into it.  Can I ask you for one last favor,
though: please resubmit this patch with a couple of tweaks:

 - Based off current mainline, please (or docs-next, but that shouldn't
   be necessary).  The patch as sent doesn't apply.

 - Could you add a comment to the check-lc_ctype proglet so that
   somebody stumbling across it in the scripts directory knows why it's
   there?

Thanks,

jon
--
To unsubscribe from this list: send the line "unsubscribe linux-kbuild" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
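
For reference, a minimal sketch of what a commented check-lc_ctype program
could look like, assuming its only job is to verify that the current
LC_CTYPE locale resolves to a UTF-8 codeset (the calls below are standard
C library functions; the actual proglet in the patch may differ):

    /*
     * Hypothetical sketch of check-lc_ctype: exit with status 0 if the
     * character-type category of the current locale uses UTF-8, so the
     * documentation build can decide whether output needs re-encoding.
     */
    #include <langinfo.h>
    #include <locale.h>
    #include <string.h>

    int main(void)
    {
            /* Initialize LC_CTYPE from the environment; NULL means the
             * locale could not be set at all. */
            if (!setlocale(LC_CTYPE, ""))
                    return 1;

            /* nl_langinfo(CODESET) reports the locale's character
             * encoding name, e.g. "UTF-8" or "ANSI_X3.4-1968". */
            return strcmp(nl_langinfo(CODESET), "UTF-8") != 0;
    }

A Makefile rule could then branch on the program's exit status to choose
whether the generated documents need transcoding for the user's locale.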