Re: failed submodule update re-run results in no checked out files?

On Wed, Feb 17, 2016 at 3:54 PM, Jacob Keller <jacob.keller@xxxxxxxxx> wrote:
> Hi,
>
> I am having an issue currently when using Git with a remote server
> which has a limited number of ssh connections.
>
> The ssh server sometimes closes connections due to too many concurrent
> connections. I will get the following output from git in this case
> when performing a submodule update of a submodule which is not yet
> currently cloned/checked out.
>
> stdout: Cloning into 'src/SHARED'...

Which version of Git are you using?
(Does it include origin/sb/submodule-parallel-update, which rewrites
much of the relevant code? That series also introduces parallelism,
which the server admin may not have anticipated.)
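
If you built Git from the git.git sources, a rough way to check is
something like the sketch below (the /path/to/git.git and the topic
ref are placeholders and assume you have that branch fetched):

    # Report the running version, then test whether the topic branch
    # is an ancestor of the commit the build was made from.
    git --version
    if git -C /path/to/git.git merge-base --is-ancestor \
            origin/sb/submodule-parallel-update HEAD
    then
        echo "this build includes sb/submodule-parallel-update"
    fi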

>
> stderr: Total 10288 (delta 7577), reused 10190 (delta 7577)

Since you are seeing output on both stdout and stderr, I assume you're
on master or even behind that, which doesn't include the rewrite.

> Received disconnect from 10.96.8.71: 7: Too many concurrent connections
> fatal: Could not read from remote repository.
>
> Please make sure you have the correct access rights
> and the repository exists.
> Unable to fetch in submodule path 'src/SHARED'
>
> The submodule is not cloned successfully, and this occurs somewhere in
> the middle of the process.

I wonder if the client should retry each submodule at least once, to
cope with transient errors such as this ssh server limiting concurrent
connections.
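
As a client-side workaround in the meantime (just a sketch, not
something Git does for you today), a script could retry a few times to
ride out the connection limit, e.g.:

    # Hypothetical retry wrapper; retry count and delay are arbitrary.
    for attempt in 1 2 3
    do
        git submodule update --remote src/SHARED && break
        echo "submodule update failed, retrying..." >&2
        sleep 5
    done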

>
> If I run the command a 2nd time,
>
> git submodule update --remote src/SHARED,
>
> I get a successful run, but the files are not actually checked out.

The submodules are cloned via "git clone --no-checkout", because you
may have a custom update strategy configured (or a preset such as
"none", which clones but does nothing further).
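
For example (a sketch; "src/SHARED" is the path from your report, and
the submodule name is assumed to match the path), you can see whether
a custom strategy is configured with:

    # May print nothing if no strategy is set, in which case the
    # default checkout behaviour applies.
    git config submodule.src/SHARED.update
    git config -f .gitmodules submodule.src/SHARED.update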

> I
> believe this is because the clone that failed did succeed in getting
> the repository into a state where all the files are "removed" so a
> further submodule update will do nothing since it's "already" checked
> out at the correct commit.

You can pass --force to "git submodule update", which passes that flag
along to the checkout.
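
For example, with the path from your report:

    git submodule update --force --remote src/SHARED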

>
> Am I right in my understanding? Is this a bug? I believe I can fix
> this using --force.
>
> Note that i don't yet currently have a reliable reproduction of this
> for various reasons, not least of which is that simulating network
> error is difficult.

I am currently debugging a similar problem with the new code (~1000
submodules, any of which may fail randomly; this is why I wonder about
either automated retries or ignoring the errors).

>
> Any thoughts on this? Should I just have my script that runs my
> continuous integration builds add a check to ensure files are checked
> out? Is "--force" enough to get the submodule to be re-checked out
> even if it's already checked out at the location?

I believe so.
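
If you do add such a check, a rough sketch could look like this
(assuming the bad state shows up as tracked files missing from the
submodule worktree; adjust to whatever you actually observe):

    # If the submodule reports tracked files as deleted from the
    # worktree, force a re-checkout.
    if git -C src/SHARED status --porcelain | grep -q '^ D'
    then
        git submodule update --force --remote src/SHARED
    fi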

Stefan

>
> Thanks,
> Jake