Re: What's cooking in git.git (Apr 2017, #04; Wed, 19)

On Mon, Apr 24, 2017 at 4:19 PM, Johannes Schindelin
<Johannes.Schindelin@xxxxxx> wrote:
> Hi Junio,
>
> On Sun, 23 Apr 2017, Junio C Hamano wrote:
>
>> Johannes Schindelin <Johannes.Schindelin@xxxxxx> writes:
>>
>> > Part of the reason is that you push out all of the branches in one go,
>> > typically at the very end of your work day. The idea of Continuous
>> > Integration is a little orthogonal to that style, as it suggests building
>> > & testing whenever new changes come into the integration branch.
>> >
>> > As a consequence, my original setup was a little overloaded: the VM
>> > sat idle most of the time, and when you pushed, it was overloaded.
>>
>> I do not see how pushing all of them out in one go makes the problem
>> worse for you, though.
>
> Oh no, you don't see that? Then let me spell it out a little more
> clearly: when you push out four branches at the same time, the same
> Virtual Machine that hosts all of the build agents has to build each and
> every one of them, then run the entire test suite.
>
> As I have pointed out on several occasions (but I was probably complaining
> too much about it, so you probably ignored it), the test suite uses shell
> scripting a lot, and as a consequence it is really, really slow on
> Windows. Even on a high-end VM, it typically takes 1.5 hours to run the
> test suite. That's without the SVN tests.
>
> So now we have up to four build agents banging at the same CPU and RAM,
> competing for resources. Now it takes more like 2-3 hours to run the
> entire build & test.
>
> The situation usually gets a little worse, even: you sometimes push out
> several iterations of `pu` in relatively rapid succession, "rapid" being
> relative to the time taken by the builds.
>
> That means that there are sometimes four jobs still hogging the VM when
> the next request to build & test `pu` arrives, and sometimes there is
> another one queued before the first job finishes.
>
> Naturally, the last two jobs will barely have started before Travis
> decides that it has waited long enough (3 hours) and calls it quits.
>
> To answer your implied question: the situation would be much, much better
> if the branches were pushed with more time in-between.
>
> But as I said, I understand that it would be asking you way too much to
> change your process that seems to work well for you.

Is getting the results of these builds time-critical? If not, perhaps
an acceptable solution would be to use a source repo that's
time-delayed, e.g. 24hrs behind on average from Junio's git.git, and
where commits are pushed in at some configurable trickle.
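To sketch what such a delay could look like (this script, its repo layout,
and the 24-hour cutoff are all hypothetical, not an existing setup): a cron
job could fetch the upstream refs into a private namespace on a mirror, and
only move the public branches to commits whose committer date is at least a
day old. A self-contained demo, assuming only stock git:

```shell
# Hypothetical time-delayed mirror: public branches only advance to
# commits whose committer date is at least 24 hours old.  Demo repos
# live in a temp directory; requires git >= 2.28 (for `init -b`).
set -e
tmp=$(mktemp -d)
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com

# Stand-in for Junio's git.git: one long-stable commit, one fresh one.
git init -q -b master "$tmp/upstream"
GIT_COMMITTER_DATE="2020-01-01T00:00:00" \
    git -C "$tmp/upstream" commit -q --allow-empty -m "old work"
git -C "$tmp/upstream" commit -q --allow-empty -m "fresh work"

# The delayed mirror fetches everything into a private ref namespace...
git init -q --bare -b master "$tmp/mirror"
git -C "$tmp/mirror" fetch -q "$tmp/upstream" '+refs/heads/*:refs/upstream/*'

# ...but each public branch is moved only to the newest commit older
# than the cutoff, so CI sees changes trickle in a day late.
for branch in $(git -C "$tmp/mirror" for-each-ref \
        --format='%(refname:lstrip=2)' refs/upstream/); do
    tip=$(git -C "$tmp/mirror" rev-list -1 --before="24 hours ago" \
        "refs/upstream/$branch")
    test -n "$tip" && git -C "$tmp/mirror" update-ref "refs/heads/$branch" "$tip"
done

git -C "$tmp/mirror" log -1 --format=%s refs/heads/master   # → old work
```

Junio's push habits would stay unchanged; only the CI host would point at
the mirror, and running the fetch/update-ref part from cron every hour
would turn the bursty end-of-day pushes into the configurable trickle
suggested above.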

>> As of this writing, master..pu counts 60+ first-parent merges.
>> Instead of pushing out the final one at the end of the day, I could
>> push out after every merge.  Behind the scenes, because some topics
>> are extended or tweaked while I read the list discussion, the number
>> of merges I am doing during a day is about twice or more than that
>> before I reach the final version for the day.
>>
>> Many issues can be noticed locally even before the patches hit a
>> topic, before the topic gets merged to 'pu', or before the tentative
>> 'pu' is pushed out, and breakage at each of these points can be
>> locally corrected without bothering external test setups.  I've been
>> assuming that pushing out all in one go at the end will help
>> reduce the load at external test setups.
>
> Pushing out only four updates at the end of the day is probably better
> than pushing after every merge, for sure.
>
> Ciao,
> Dscho
