Re: Windows support

"Nguyen Thai Ngoc Duy" <pclouds@xxxxxxxxx> writes:

> On 7/26/07, Johannes Schindelin <Johannes.Schindelin@xxxxxx> wrote:
>> Hi,
>>
>> On Thu, 26 Jul 2007, Nguyen Thai Ngoc Duy wrote:
>>
>> > I make MinGW busybox part of git for some reasons:
>> >
>> > - Making a full MinGW busybox would take lots of time. I don't need
>> > busybox for Windows. What I need is a shell and enough POSIX utilities
>> > to run git shell scripts without any dependencies. Windows users
>> > (including myself when I have to use Windows) hate dependencies.
>>
>> I think that if you succeed to compile ash on MinGW, the rest is easy.
>
> No, it's not. With a couple of ifdefs you can compile it fine. Then
> you run into fork(), fcntl(F_DUPFD), /dev/*, job/signal handling...
> Fortunately Git does not use many of those features. It only needs
> /dev/null (and /dev/zero for tests) and SIGEXIT, and makes no use of
> job control. That cuts down the effort of porting ash.

And here I was tempted to multithread builtin-update-index.c: it is
actually quite natural to let one process scan directories
non-recursively, stat the files, sort them at per-directory
granularity, and feed a sorted pseudo-index into a pipeline (recursing
into scanning whenever it hits a directory), then let another
process/thread do a merge pass of pseudo-index and real index,
immediately writing the output to a new index-to-be.  When this is
finished and another process has already invalidated the old index,
reuse the index-to-be as the pseudo-index and merge it with the new
index which got in ahead of me.

It would be a fun exercise, in particular when merely using
(block-buffered!) pipes, and could presumably make a difference on
multiprocessor-capable machines.
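A minimal Python sketch of the two passes described above (hypothetical helper names; it ignores stat data, the index file format, and the pipe transport, and is not git's actual code):

```python
import heapq
import os

def scan_sorted(top):
    """Reader pass: yield file paths under `top` in sorted order,
    holding only one directory level in memory at a time (plus the
    not-yet-emitted tails of ancestor directories)."""
    for name in sorted(os.listdir(top)):
        path = os.path.join(top, name)
        if os.path.isdir(path):
            # Recurse into scanning whenever we hit a directory.
            yield from scan_sorted(path)
        else:
            yield path

def merge_pass(pseudo_index, real_index):
    """Merge pass: combine two already-sorted entry streams into one,
    emitting output immediately instead of buffering everything."""
    yield from heapq.merge(pseudo_index, real_index)
```

In the real thing the reader would also stat each entry and the merge would have to reconcile entries against the existing index; the point of the sketch is only that neither pass ever needs the whole tree in memory.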

Anyway, just something that had been spinning in my head.  The
"streaming merge" idea has the advantage of keeping memory usage low
pretty much independently of project size: memory is pretty much
determined by the reader pass, since it has to read in a complete
directory level before it can sort it and output the next element, and
it has to retain the not-yet-output elements of its ancestor
directories.

And it is nice to have some potential for parallel processing.  But if
pipes are a lethal stumbling block for Windows...  It is conceivable
to do the same job with in-memory buffers instead of pipes and files,
switching manually between the directory-scanning and merging phases.
But it would be less fun.

-- 
David Kastrup, Kriemhildstr. 15, 44793 Bochum