Re: git on MacOSX and files with decomposed utf-8 file names

On Jan 21, 2008, at 5:41 PM, Dmitry Potapov wrote:

On Mon, Jan 21, 2008 at 04:07:27PM -0500, Kevin Ballard wrote:

Again, I've specified many times that I'm talking about canonical
equivalence.

And yes, HFS+ does normalization, it just doesn't use NFD. It uses a
custom variant. I fail to see how this is a problem.

If you think that HFS+ does normalization, then you apparently have no
idea what the term "normalization" means. And if you don't know what
"normalization" is, then you cannot really know what canonical
equivalence means.

I would go look up specifics to back me up, but my DNS is screwing up right now so I can't access most of the internet. In any case, there are four standard normalization forms: NFC, NFD, NFKC, and NFKD. If there are others, they aren't notable enough to be listed in the resource I was reading. HFS+ uses a variant of NFD; it's a well-defined variant, and thus can safely be called its own normalization form. I fail to see how this means it's not "normalization".
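The difference between the precomposed (NFC) and decomposed (NFD) forms being argued over can be sketched in a few lines of Python using the standard `unicodedata` module:

```python
import unicodedata

# "é" has two canonically equivalent representations:
nfc = "\u00e9"        # NFC: one precomposed code point, U+00E9
nfd = "e\u0301"       # NFD: "e" followed by combining acute, U+0065 U+0301

# The code point sequences differ...
assert nfc != nfd
assert len(nfc) == 1 and len(nfd) == 2

# ...but normalizing either one to the same form makes them equal,
# which is exactly what canonical equivalence means.
assert unicodedata.normalize("NFD", nfc) == nfd
assert unicodedata.normalize("NFC", nfd) == nfc
```

(HFS+ itself uses its own NFD-like variant rather than strict NFD, so this is an illustration of the general forms, not of Apple's exact decomposition tables.)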

I don't say they do that without *any* reason; I suppose all the
Apple developers on the Copland project had reasons for what they
did, but the outcome was not very good...

Stupid engineers don't get to work on developing new filesystems.

Assigning someone to work on a new filesystem does not make him
suddenly smart. As for stupid engineers not getting such work,
that is like saying there are no stupid engineers at all. There is
plenty of evidence to the contrary. And when management is disastrous,
the idiots with big mouths and little capacity to produce anything
useful are the ones who get assigned to develop new features, while
those who can actually solve problems are assigned to fix the
next build, because the only thing such management worries
about is how to survive another year, or another month...

I'm not talking about assigning engineers. I'm saying that developing a new filesystem, especially one that has proven itself usable and extensible for the last decade, is something only smart engineers would be capable of doing.

And
Copland didn't fail because of stupid engineers anyway. If I had to
blame someone, I'd blame management.

But if the code was so good, then why was most of it thrown away
later when management changed? Still bad management?

Yes. Even the best of engineers will produce crap code when overworked and required to implement new features instead of fixing bugs and stabilizing the system. Copland is well-known to have suffered from featuritis, to the extent that it was practically impossible to test in any sane fashion. Bad management can kill any project regardless of how good the engineers are.

The only information you lose when doing canonical normalization is
what the original byte sequence was.

Not true. You lose the original sequence of *characters*.

Which is only a problem if you care about the byte sequence, which is
kinda the whole point of my argument.

Byte sequences are not an issue here. If the filesystem used UTF-16 to
store filenames, that would NOT cause this problem, because characters
would be the same even though bytes stored on the disk were different.
So, what you actually lose here is the original sequence of *characters*.

I've already talked about that, but you are apparently incapable of understanding.
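To put the distinction concretely: encoding is about how the same characters are stored as bytes, while normalization changes the character sequence itself. A minimal Python sketch:

```python
import unicodedata

s = "\u00e9"                      # precomposed "é": one character

# Different encodings store different bytes for the SAME characters:
utf8 = s.encode("utf-8")          # b'\xc3\xa9'
utf16 = s.encode("utf-16-be")     # b'\x00\xe9'
assert utf8 != utf16
assert utf8.decode("utf-8") == utf16.decode("utf-16-be") == s

# Normalization, by contrast, changes the characters themselves:
decomposed = unicodedata.normalize("NFD", s)   # "e" + combining acute
assert decomposed != s            # a different sequence of code points
```

So a filesystem that decomposes filenames loses the original character sequence regardless of whether it stores them as UTF-8 or UTF-16, which is the point in dispute here.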

-Kevin Ballard

--
Kevin Ballard
http://kevin.sb.org
kevin@xxxxxx
http://www.tildesoft.com



