On Tue, May 4, 2021 at 2:01 PM Andrew Oakley <andrew@xxxxxxxxxxxxx> wrote:
> The key thing that I'm trying to point out here is that the encoding is
> not necessarily consistent between different commits. The changes that
> you have proposed force you to pick one encoding that will be used for
> every commit. If it's wrong then data will be corrupted, and there is
> no option provided to avoid that. The only way I can see to avoid this
> issue is to not attempt to re-encode the data - just pass it directly
> to git.

No, my "fallbackEncoding" setting is just that... a fallback. My
proposal *always* tries to decode in UTF-8 first! Only if that throws
an exception does "fallbackEncoding" come into play, and it applies
only to the single changeset description that was invalid UTF-8. After
that, subsequent descriptions are again tried in UTF-8 first. The
design of the UTF-8 format makes it very unlikely that non-UTF-8 text
will pass undetected through a UTF-8 decoder, so by attempting to
decode in UTF-8 first, there is very little risk of a lossy conversion.

As for passing the data through "raw": that will *guarantee* bad
encoding for any description that is not UTF-8, because git will
interpret the data as UTF-8 once it has been put into the commit
(unless the encoding header is used, as you mentioned). If that header
is not used, and the description was not UTF-8 in Perforce, it has zero
chance of being correct in git unless it is decoded. At least
"fallbackEncoding" gives it SOME chance of being decoded correctly.

> I think another way to solve the issue you have is the encoding header
> on git commits. We can pass the bytes through git-p4 unmodified, but
> mark the commit message as being encoded using something that isn't
> UTF-8. That avoids any potentially lossy conversions when cloning the
> repository, but should allow the data to be displayed correctly in git.

Yes, that could be a solution. I will try that out.

> > In any event, if you look at my patch (v6 is the latest...
> > https://lore.kernel.org/git/20210429073905.837-1-tzadik.vanderhoof@xxxxxxxxx/
> > ), you will see I have written tests that pass under both Linux and
> > Windows. (If you want to run them yourself, you need to base my patch
> > off of "master", not "seen"). The tests make clear what the
> > different behavior is and also show that p4d is not set to Unicode
> > (since the tests do not change the default setting).
>
> I don't think the tests are doing anything interesting on Linux - you
> stick valid UTF-8 in, and valid UTF-8 data comes out.

Totally agree... I only did that to get them to pass the Gitlab CI. I
submitted an earlier patch that simply skipped the test file on Linux,
but I got pushback on that, so I made the tests pass on Linux even
though they aren't useful there.

> I suspect the tests will fail on Windows if the relevant code page is
> set to a value that you're not expecting.

It depends. If the code page is set to UTF-8 (65001), I think the tests
would still pass, because, as I said above, my code always tries to
decode with UTF-8 first, no matter what the "fallbackEncoding" setting
is. If the code page is set to something other than UTF-8 or the
default, then one of my tests would fail, because it uses a hard-coded
"fallbackEncoding" of "cp1252". But the code would still work for the
user! All they would need to do is set "fallbackEncoding" to the code
page they're actually using, instead of "cp1252". (More sophisticated
tests could be developed that explicitly set the code page and use the
corresponding "fallbackEncoding" setting.)
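For clarity, the decode behavior I'm describing boils down to roughly
this (a simplified sketch, not the literal patch code; in the real
patch the fallback encoding comes from the git-p4 configuration):

    def decode_description(raw, fallback_encoding):
        # Always try strict UTF-8 first.  The structure of UTF-8 makes
        # it very unlikely that text in another encoding decodes without
        # error, so a successful decode here is almost certainly correct.
        try:
            return raw.decode("utf-8")
        except UnicodeDecodeError:
            # Only this one description falls back; the next description
            # is tried as UTF-8 again from scratch.
            return raw.decode(fallback_encoding)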
> For the purposes of writing tests that work the same everywhere we can
> use `p4 submit -i`. The data written on stdin isn't reencoded, even on
> Windows.

I have already gone down the `p4 submit -i` road. It behaves exactly
the same as passing the description on the command line. (That was one
of several dead ends I went down that I haven't mentioned in my
emails.)
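In case anyone wants to reproduce that dead end, this is roughly how I
drove it (an illustrative sketch, not my actual test code; the client
and user names are placeholders):

    import subprocess

    # Minimal changelist spec with a non-ASCII description.  A real
    # spec would normally start from `p4 change -o` output.
    spec = (
        "Change:\tnew\n"
        "Client:\ttest-client\n"
        "User:\ttest-user\n"
        "Status:\tnew\n"
        "Description:\n"
        "\tUmlaut test: \u00e4\u00f6\u00fc\n"
        "Files:\n"
        "\t//depot/file.txt\t# edit\n"
    )

    # Feed the spec to `p4 submit -i` on stdin instead of passing the
    # description with `-d` on the command line.  In my testing the
    # stored description came back with the same (code-page-dependent)
    # bytes either way.
    subprocess.run(["p4", "submit", "-i"], input=spec.encode("utf-8"),
                   check=True)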