Apologies for the slow reply.

> -----Original Message-----
> From: Måns Rullgård [mailto:mans@xxxxxxxxx]
> Sent: 05 February 2015 12:56
> To: Maciej W. Rozycki
> Cc: Toma Tabacu; Daniel Sanders; Ralf Baechle; Paul Burton; Paul Bolle;
> Steven J. Hill; Manuel Lauss; Jim Quinlan; linux-mips@xxxxxxxxxxxxxx;
> linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH 5/5] MIPS: LLVMLinux: Silence unicode warnings when
> preprocessing assembly.
>
> "Maciej W. Rozycki" <macro@xxxxxxxxxxxxxx> writes:
>
> > On Thu, 5 Feb 2015, Toma Tabacu wrote:
> >
> >> > 2. It considers these character pairs to be unicode escapes in the
> >> >    first place given that they do not follow the syntax required
> >> >    for such escapes, that is `\unnnn', where `n' are hex digits.
> >> >
> >>
> >> It doesn't actually treat them as unicode escapes, but it still warns
> >> the user, in case they were meant to be unicode escapes. Here's the
> >> warning message:
> >>
> >> arch/mips/include/asm/asmmacro.h:197:51: warning: \u used with no
> >> following hex digits; treating as '\' followed by identifier [-Wunicode]
> >>         .word 0x41000000 | (\rt << 16) | (\rd << 11) | (\u << 5) | (\sel)
> >>                                                         ^
> >> I'll add it to the summary in v2.
> >
> > Thanks, that makes things clearer. It always makes sense to include
> > the exact error message produced where applicable or otherwise people
> > do not necessarily know what the matter is.
> >
> >> > Of course it may be reasonable for us to work this bug around as
> >> > we've been doing for years with GCC, but has the issue been
> >> > reported back to clang maintainers? What was their response?
> >> >
> >>
> >> It hasn't been reported, but I don't think they would agree with
> >> removing unicode escape sequences from the assembler-with-cpp mode
> >> because it is currently being used for other languages as well, not
> >> just assembly.
> >
> > First, preprocessing rules surely have to be language specific. The C
> > language standard does not specify what the preprocessor is meant to
> > do (if anything) for other languages. GCC or clang -- that's no
> > different.
> >
> > The assembly language has a different syntax and `\u' has a different
> > meaning in the context of assembly macro expansion than it would have
> > in a name of a symbol, where such a Unicode escape sequence might
> > indeed be interpreted as such and the character encoding propagated
> > to the symbol produced. But that's up to the assembler -- GAS for
> > example does not AFAIK support Unicode escape sequences in symbol
> > names right now, but I suppose such a feature could be added if
> > desired.

Pre-processed assembly is somewhat unusual in that it has traditionally
been pre-processed with a pre-processor designed for the C language. It's
certainly possible to have assembly-specific tweaks (GCC has a couple),
but it is still a C pre-processor at heart. It doesn't know anything
about the assembly language; it just happens to be similar enough to be
usable.

From the pre-processor's point of view, '\u' is two pre-processor tokens:
'\' and the identifier 'u'. However, with following hex digits it would
have been an identifier starting with a universal character name. Clang's
warning is effectively saying that the former is more likely to be the
intention. That's probably not as true for pre-processed assembly as it
is for C/C++.

> > Which prompts another question of course: how does the clang C
> > compiler represent Unicode characters in identifiers in its assembly
> > output?

They're emitted as multi-byte characters.
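To make that concrete, here's a minimal sketch of my own (not from the
patch; the file name is made up):

    /* ucn.c - an identifier spelled with a universal character name */
    int caf\u00e9 = 1;

Running 'clang -S ucn.c' should show the symbol in the assembly output
spelled with the raw UTF-8 bytes for 'é' rather than the '\u00e9' escape.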
> > I have looked into the C language standard and it appears to me like
> > the translation phase to interpret universal character names at has
> > not been defined. This is probably why the standard does specify the
> > result of pasting preprocessor tokens together as undefined if a
> > universal character name is produced this way.
>
> That is my interpretation as well.

It's my understanding that they should be interpreted when pre-processing
tokens are formed. This is based on the fact that universal character
names are included in the grammar for identifiers and are not discussed
in a separate translation phase. I agree that it doesn't explicitly state
that, though.

> > Consequently I think an important question in this context is: does
> > clang's preprocessor actually convert these sequences anyhow before
> > passing them down to the compiler? How for example does C output from
> > a trivial example that contains such Unicode escape sequences look
> > like then?

Clang converts them to multi-byte characters during pre-processing.

> >> One such language is Haskell (ghc, to be more specific), for which
> >> the clang developers had to actually stop the preprocessor from
> >> enforcing the C universal character name restrictions in
> >> assembler-with-cpp mode, which suggests that ghc wants the
> >> preprocessor to check for unicode escape sequences.
> >>
> >> At the moment, we can either disable -Wunicode for asmmacro.h or
> >> refrain from using '\u' as an identifier.
> >
> > To be clear: it's `u' here that is the identifier, the leading `\' is
> > merely how assembly syntax has been specified for references to macro
> > arguments. And TBH I find banning any macro arguments starting with
> > `u' rather silly.
>
> Agreed.

That's the crux of the issue. Had it been followed by some hex digits, it
would be the identifier '\u1234' and not a '\' followed by the identifier
'u'. Clang currently thinks the former is more likely and warns. I do
agree that warning about all macro arguments beginning with 'u' is silly,
though. Perhaps for assembler-with-cpp mode the warning should be
suppressed when it's the first character of an identifier.

> > I'm leaning towards considering having -Wunicode disabled for all
> > assembly sources, or maybe even for the whole Linux compilation, the
> > right solution. It's not like we have a need for Unicode identifiers.
>
> It might be an idea to disable -Wunicode and have checkpatch warn about
> Unicode escapes instead if people are worried about this. Personally, I
> doubt there's much cause for concern here.
>
> --
> Måns Rullgård
> mans@xxxxxxxxx

I'm fine with disabling -Wunicode if that's our preferred solution.
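For what it's worth, a sketch of how that could look (my example, not
from the patch; the placement is hypothetical, and I'm assuming clang
accepts the negated spelling -Wno-unicode, guarded with kbuild's
cc-option helper so compilers that don't know the flag are unaffected):

    # arch/mips/Makefile (hypothetical placement)
    KBUILD_AFLAGS += $(call cc-option,-Wno-unicode)

Adding it to KBUILD_AFLAGS would limit the change to assembly sources;
adding it to KBUILD_CFLAGS as well would cover the whole Linux
compilation, as Maciej suggested.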