Re: t0028-working-tree-encoding.sh failing on musl based systems (Alpine Linux)

On Fri, Feb 08, 2019 at 03:12:22PM -0800, Junio C Hamano wrote:
> "brian m. carlson" <sandals@xxxxxxxxxxxxxxxxxxxx> writes:
> 
> > +test_lazy_prereq NO_BOM '
> > +	printf abc | iconv -f UTF-8 -t UTF-16 &&
> > +	test $(wc -c) = 6
> > +'
> 
> This must be "just for illustration of idea" patch?  The pipeline
> goes to the standard output, and nobody feeds "wc".

Well, as I said, I have no way to test this. The bug goes unnoticed on
glibc because the prerequisite is never exercised there.

But I do appreciate you pointing it out. I'll fix it and send another
test patch.
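
Roughly, what I meant was to feed wc from the pipeline, something like
this (still just a sketch, untested on musl; the output is 6 bytes only
when no BOM is written):

	test_lazy_prereq NO_BOM '
		test $(printf abc | iconv -f UTF-8 -t UTF-16 | wc -c) = 6
	'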

> But I think I got the idea.
> 
> In the real implementation, it probably is a good idea to allow
> NO_BOM16 and NO_BOM32 to be orthogonal.

Sure.
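
Concretely, NO_BOM above would become NO_BOM16, with a NO_BOM32
counterpart along the same lines (sketch only; "abc" in BOM-less UTF-32
should come out as 12 bytes):

	test_lazy_prereq NO_BOM32 '
		test $(printf abc | iconv -f UTF-8 -t UTF-32 | wc -c) = 12
	'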

> > +
> > +write_utf16 () {
> > +	test_have_prereq NO_BOM && printf '\xfe\xff'
> > +	iconv -f UTF-8 -t UTF-16
> 
> This assumes "iconv -t UTF-16" on the platform gives little endian
> (with or without BOM), which may not be a good assumption.
> 
> If you are forcing the world to be where UTF-16 (no other
> specification) means LE with BOM, then perhaps doing
> 
> 	printf '\xfe\xff'; iconv -f UTF-8 -t UTF-16LE
> 
> without any lazy prereq may be more explicit and in line with what
> you did in utf8.c::reencode_string_len() below.

No, I believe it assumes big-endian: \xfe\xff is the big-endian
encoding of the BOM, U+FEFF. The only justifiable endianness for
UTF-16 data that carries no BOM is big-endian, because that's the only
one supported by the RFC and the Unicode standard.

I can't actually write it without the prerequisite, because I only
intend to trigger the code I wrote below when it's required. The
endianness the code picks and the endianness the tests expect have to
agree: on glibc, the code will keep using the glibc implementation,
which is little-endian, so the tests need to be little-endian there as
well.

I will explain this thoroughly in the commit message, because it is
indeed quite subtle.
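
For anyone who wants to see what their platform does, a quick check
like this makes the difference visible (the output shown is what I'd
expect, not something I've verified on musl):

	printf a | iconv -f UTF-8 -t UTF-16 | od -An -tx1
	# glibc (x86): ff fe 61 00   -> BOM, little-endian
	# musl:        00 61         -> no BOM, big-endian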

> > diff --git a/utf8.c b/utf8.c
> > index 83824dc2f4..4aa69cd65b 100644
> > --- a/utf8.c
> > +++ b/utf8.c
> > @@ -568,6 +568,10 @@ char *reencode_string_len(const char *in, size_t insz,
> >  		bom_str = utf16_be_bom;
> >  		bom_len = sizeof(utf16_be_bom);
> >  		out_encoding = "UTF-16BE";
> > +	} else if (same_utf_encoding("UTF-16", out_encoding)) {
> > +		bom_str = utf16_le_bom;
> > +		bom_len = sizeof(utf16_le_bom);
> > +		out_encoding = "UTF-16LE";
> >  	}
> 
> I am not sure what is going on here.  When the caller asks for
> "UTF-16", we do not let the platform implementation of iconv() to
> pick one of the allowed ones (i.e. BE with BOM, LE with BOM, or BE
> without BOM) but instead force LE with BOM?

This is mostly a trial to see how it behaves on musl. My proposal is
that if this is sufficient to fix the problems there, we wrap it in a
#define (and Makefile) knob and let users whose iconv doesn't write a
BOM turn it on. Now that I think about it, it will probably need to be
big-endian for compatibility with the tests.

I plan to treat this as a platform-specific wart much like
FREAD_READS_DIRECTORIES. Inspecting the stream would be complicated and
not performant, and I'm not aware of any other iconv implementations
that have this behavior (because it causes unhappiness with Windows,
which is the primary consumer of UTF-16), so I think a compile-time
option is the way to go.
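
To make the shape concrete, the hunk above would end up guarded roughly
like this (switched to big-endian per the above; the knob name is just
a placeholder for illustration, and the Makefile would pass it as
-DICONV_OMITS_BOM or similar when the user sets it):

	#ifdef ICONV_OMITS_BOM
		} else if (same_utf_encoding("UTF-16", out_encoding)) {
			/*
			 * This iconv writes no BOM of its own, so add one
			 * ourselves and pin the byte order explicitly.
			 */
			bom_str = utf16_be_bom;
			bom_len = sizeof(utf16_be_bom);
			out_encoding = "UTF-16BE";
	#endif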

I'll try to reroll with a formal test patch this evening or tomorrow.
-- 
brian m. carlson: Houston, Texas, US
OpenPGP: https://keybase.io/bk2204


