Re: [PATCH] gitweb: Measure offsets against UTF-8 flagged string

Junio C Hamano <gitster@xxxxxxxxx> writes:
> Shin Kojima <shin@xxxxxxxxxx> writes:
>
>> Offset positions should not be counted by byte length, but by actual
>> character length.
>> ...
>>  # escape tabs (convert tabs to spaces)
>>  sub untabify {
>> -	my $line = shift;
>> +	my $line = to_utf8(shift);
>>  
>>  	while ((my $pos = index($line, "\t")) != -1) {
>>  		if (my $count = (8 - ($pos % 8))) {
>
> Some codepaths in the resulting codeflow look even hackier than they
> already are.  For example, format_rem_add_lines_pair() calls
> untabify() and then feeds its result to esc_html().  The first thing
> done in esc_html() is to call to_utf8().  I know that to_utf8()
> cheats and leaves the incoming string as-is if it is already UTF-8,
> so this may be a safe conversion, but ideally we should be able to
> say "function X takes non-UTF8 and works on it", "function Y takes
> UTF8 and works on it", and "function Z takes non-UTF8 and gives UTF8
> data back" for each functions clearly, not "function W can take
> either UTF8 or any other garbage and tries to return UTF8".

The problem with handling encodings in a sane way, that is, decoding
everything to UTF-8 on input and encoding on output (to plain text or
HTML), is the $fallback_encoding.

Gitweb assumes that everything uses the UTF-8 encoding.  If the source
is not in UTF-8, but for example uses the latin-1 encoding, then we can
stumble upon byte sequences which are not valid UTF-8.  If that happens,
gitweb tries to convert the data to UTF-8 using $fallback_encoding.  If
$fallback_encoding is a single-byte encoding like latin-1, where any
byte sequence is valid, then that is the end of it.  If there is an
error during the conversion to UTF-8, then the Unicode REPLACEMENT
CHARACTER, code point U+FFFD, is used instead.
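As a rough sketch of the fallback chain described above (in Python
rather than gitweb's Perl, and with illustrative names, so only an
approximation of the real to_utf8()):

```python
FALLBACK_ENCODING = 'latin-1'   # stands in for gitweb's $fallback_encoding

def to_utf8(raw: bytes) -> str:
    """Decode as UTF-8; on failure, retry with the fallback encoding."""
    try:
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        # errors='replace' yields U+FFFD wherever even the fallback
        # cannot decode (which cannot happen for single-byte latin-1).
        return raw.decode(FALLBACK_ENCODING, errors='replace')
```

With a single-byte fallback such as latin-1, the second decode always
succeeds, so U+FFFD only ever appears with a multi-byte fallback.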

But there are places where gitweb outputs plain text; the intention
there is to pass the source data through as-is, so that it looks the
same as it would in a console.  Some input paths are shared between
plain-text and HTML output; because of that, the data is not converted
to UTF-8 on input.


The to_utf8() function tries to be clever and does not convert
already-converted data (it leaves strings already flagged as UTF-8
alone).

> Also, does it even "fix" the problem to use to_utf8() here in the
> first place?  Untabify is about aligning the character after a HT to
> multiple of 8 display position, so we'd want measure display width,
> which is not the same as either byte count or char count.

I think the problem is not with aligning; otherwise we would simply get
bad alignment, not visible corruption.  The ACTUAL PROBLEM is most
probably the concatenation of strings marked as UTF-8 with strings not
marked as UTF-8.  Strange things happen then in Perl, unfortunately.
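A Python analogy of that corruption (Perl's semantics differ in detail,
but the visible effect is the same): when Perl upgrades an unflagged
byte string during concatenation, it interprets the bytes as latin-1
characters, so UTF-8 byte sequences turn into mojibake:

```python
decoded = 'ę'                        # a flagged (character) string
raw = 'ę'.encode('utf-8')            # unflagged bytes: b'\xc4\x99'
upgraded = raw.decode('latin-1')     # what Perl's implicit upgrade does
# Instead of 'ęę' we get the two UTF-8 bytes as latin-1 characters:
assert decoded + upgraded == 'ę\xc4\u0099'
```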


One solution would be to force conversion to UTF-8 on input via the
"open" pragma (e.g. "use open ':encoding(UTF-8)';").  But there is no
UTF-8-with-fallback encoding available - we would have to write one and
install it as a module (or fake it via Perl trickery).  This mechanism
is almost the same as what we currently use in gitweb.
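For illustration, such a fallback decoder is easy to sketch with
Python's codec machinery (the handler name here is made up; Perl would
need an analogous custom Encode module):

```python
import codecs

def _latin1_fallback(exc):
    """Decode just the offending bytes with the fallback encoding."""
    if isinstance(exc, UnicodeDecodeError):
        bad = exc.object[exc.start:exc.end]
        return (bad.decode('latin-1'), exc.end)
    raise exc

# Register the handler so it can be named in decode() calls.
codecs.register_error('latin1_fallback', _latin1_fallback)
```

Valid UTF-8 passes through untouched, and each invalid byte is decoded
individually via the fallback, e.g.
b'na\xefve'.decode('utf-8', errors='latin1_fallback') gives 'naïve'.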

Another solution would be to use the trick that Perl 6 uses when it
encounters a byte sequence that is invalid UTF-8: encode it into a
Unicode private use area, thus achieving a lossless conversion /
decoding.  But as far as I know this is not available from CPAN either,
so we would have to implement it ourselves.
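Python ships a comparable lossless scheme out of the box: the
'surrogateescape' error handler (PEP 383) maps each invalid byte to a
lone surrogate code point and restores the original bytes on encoding.
It is only an analogy - not the same mechanism Perl 6 uses - but it
shows the round-trip idea:

```python
raw = b'ok \xff\xfe bytes'   # \xff and \xfe are never valid in UTF-8
# Invalid bytes become lone surrogates U+DCFF, U+DCFE instead of errors.
text = raw.decode('utf-8', errors='surrogateescape')
# Encoding with the same handler reproduces the input byte-for-byte.
assert text.encode('utf-8', errors='surrogateescape') == raw
```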

Best,
-- 
Jakub Narębski



