Re: Switching from CVS to GIT

Hi,

[by explicit request culling make-w32 from the Cc list]

On Tue, 16 Oct 2007, Eli Zaretskii wrote:

> > Date: Tue, 16 Oct 2007 07:14:56 +0200
> > From: Andreas Ericsson <ae@xxxxxx>
> > CC: Daniel Barkalow <barkalow@xxxxxxxxxxxx>,  raa.lkml@xxxxxxxxx, 
> >  Johannes.Schindelin@xxxxxx,  tsuna@xxxxxxxxxxxxx,  git@xxxxxxxxxxxxxxx, 
> >  make-w32@xxxxxxx
> > 
> > > Sorry I'm asking potentially stupid questions out of ignorance: why
> > > would you want readdir to return `README' when you have `readme'?
> > > 
> > 
> > Because it might have been checked in as README, and since git is case
> > sensitive that is what it'll think should be there when it reads the
> > directories. If it's not, users get to see
> > 
> > 	removed: README
> > 	untracked: readme
> 
> This is a non-issue, then: Windows filesystems are case-preserving, so 
> if `README' became `readme', someone deliberately renamed it, in which 
> case it's okay for git to react as above.

No, it is not.  On FAT filesystems, for example, I have seen Windows 
happily report a file as "head" that was created under the name "HEAD".

This is the single reason why I cannot have non-bare repositories on a USB 
stick.
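For the curious, it is easy to probe whether a given mount behaves this way.  A quick sketch (the function and probe filenames are my own invention, nothing from git):

```python
import os
import tempfile

def probe_case_insensitive(directory):
    """Create 'CaseProbe.tmp', then check whether the other-case
    name resolves to the same file -- True on FAT/NTFS/HFS+,
    False on ext3 and friends."""
    upper = os.path.join(directory, "CaseProbe.tmp")
    lower = os.path.join(directory, "caseprobe.tmp")
    with open(upper, "w"):
        pass
    try:
        return os.path.exists(lower)
    finally:
        os.remove(upper)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(probe_case_insensitive(d))
```

On a USB stick formatted FAT this prints True, which is exactly the problem.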

> > could be an intentional rename, but we don't know for sure.
> 
> It _must_ have been an intentional rename.

No.  It can also be the output of a program which deletes the file first 
and then (since the filesystem is so "conveniently" case-insensitive) 
creates it again, with a lowercase filename.

And don't you tell me that there are no such programs.  I have to use 
them, and they are closed source.
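The failure mode is easy to model: the tool unlinks the file and recreates it under a different case, which is a no-op as far as the case-insensitive filesystem is concerned, but git's case-sensitive index comparison then sees both a deletion and an untracked file.  A toy sketch (function names are mine, not git's):

```python
def rewrite_file(path_in_index, new_case_name, files_on_disk):
    """Model a closed-source tool: delete, then recreate with a
    lowercased name.  Same file to the filesystem; the stored
    name changes anyway."""
    files_on_disk.discard(path_in_index)
    files_on_disk.add(new_case_name)

def status(index, on_disk):
    """git compares names case-sensitively."""
    return (sorted("removed:   " + f for f in index - on_disk) +
            sorted("untracked: " + f for f in on_disk - index))

on_disk = {"README"}
rewrite_file("README", "readme", on_disk)
print(status({"README"}, on_disk))
# -> ['removed:   README', 'untracked: readme']
```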

Sigh.

> > To be honest though, there are so many places which do the 
> > readdir+stat that I don't think it'd be worth factoring it out
> 
> Something for Windows users to decide, I guess.  It's not hard to 
> refactor this, it just needs a motivated volunteer.

You?

> > I *think* (correct me if I'm wrong) that git is still faster
> > than a whole bunch of other scm's on windows, but to one who's used to
> > its performance on Linux that waiting several seconds to scan 10k files
> > just feels wrong.
> 
> Unless that 10K is a typo and you really meant 100K, I don't think 10K
> files should take several seconds to scan on Windows.  I just tried
> "find -print" on a directory with 32K files in 4K subdirectories, and
> it took 8 sec elapsed with a hot cache.  So 10K files should take at
> most 2 seconds, even without optimizing file traversal code.  Doing
> the same with native Windows system calls ("dir /s") brings that down
> to 4 seconds for 32K files.

On Linux, I would have hit Control-C already.  Such an operation typically 
takes less than 0.1 seconds.
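Anyone who wants numbers for their own tree can time it trivially (assuming a hot cache; nothing here is git code):

```python
import os
import time

def time_scan(root):
    """Count regular files under root, report elapsed wall time."""
    start = time.perf_counter()
    count = 0
    for _dirpath, _dirnames, filenames in os.walk(root):
        count += len(filenames)
    return count, time.perf_counter() - start
```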

> On the other hand, what packages have 100K files?

Mozilla, KDE, OpenOffice.org, X.org, ....

Ciao,
Dscho

