On 23 August 2010 22:16, Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> wrote:
> On Mon, Aug 23, 2010 at 19:58, demerphq <demerphq@xxxxxxxxx> wrote:
>> On 23 August 2010 21:43, Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> wrote:
>>> On Mon, Aug 23, 2010 at 19:33, demerphq <demerphq@xxxxxxxxx> wrote:
>>>> On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason <avarab@xxxxxxxxx> wrote:
>>>>> On Sat, Aug 21, 2010 at 11:54, demerphq <demerphq@xxxxxxxxx> wrote:
>>>>>> Today I was trying to pull some updates over my wlan connection at the
>>>>>> hotel I'm in right now.
>>>>>>
>>>>>> For some reason it repeatedly hung. I tried using the git protocol,
>>>>>> and using ssh; each time it hung at the same point (object transfer -
>>>>>> and after the same number of objects).
>>>>>>
>>>>>> Eventually I opened a tunnel, with control master enabled, to camel
>>>>>> (obviously not everybody can do this), and then tried to pull using
>>>>>> the established tunnel. At which point it pulled just fine - and damn
>>>>>> fast.
>>>>>>
>>>>>> Anybody else experienced strangeness like this? Could we have a glitch
>>>>>> somewhere?
>>>>>
>>>>> It would help to clarify what the strangeness is, but obviously you
>>>>> can't debug it *now*.
>>>>>
>>>>> If you have issues like this, one useful thing is to try to use the
>>>>> plumbing tools to see if you can reproduce the issue. E.g. use
>>>>> git-fetch, and tools like git-receive-pack / git-send-pack if you can.
>>>>
>>>> I actually did use git-fetch. Same thing. It was weird. I had about
>>>> 1200 objects to transfer; after, I think, 345 objects it just hung.
>>>> For minutes, after which I killed it. I tried again, and it hung
>>>> again, etc., and like I said, until I had opened a tunnel to camel and
>>>> switched to ssh it hung every time, with ssh as the protocol and with
>>>> git as the protocol.
>>>>
>>>> I actually still have the repo in unpulled form, so I'll try again.
>>>> What exactly should I do to obtain better diagnostics?
>>>
>>> To start with, add the Git mailing list to the CC list, which I've
>>> just done.
>>>
>>> I don't know what you should do exactly, but...:
>>>
>>>  * If you rsync the perl.git repository from camel to somewhere else
>>>    and use ssh+git to *there*, does it still hang? Maybe you can make
>>>    both copies of perl.git available online for others to try?
>>>
>>>  * How does it hang? Run it with GIT_TRACE=1 <your commands>. What
>>>    process hangs exactly? Is it using lots of CPU or memory in top?
>>>    How about if you strace it - is it hanging on something there?
>>>
>>>  * Does this all go away if you upgrade git (e.g. build from
>>>    master git.git) on either the client or the server?
>>>
>>>  * If not, maybe run it under gdb with tracing and see where it hangs?
>>>
>>> ...would seem like good places to start.
>>
>> I'll try some of the above and follow up... Well, as soon as I find the
>> USB stick with the unpulled repo copy. :-)
>
> Sweet, thanks.
>
>>>>>> Also, I noticed that gitweb, or perhaps our config of it, has a
>>>>>> glitch when using pickaxe. It seems to die in mid-processing
>>>>>> (probably a timeout) and thus returns broken XML/HTML to the browser,
>>>>>> which in turn inconveniently means that Firefox shows an XML error and
>>>>>> doesn't show the results that it /has/ found. I'm wondering if there is
>>>>>> anything we should do about this?
>>>>>
>>>>> What were you looking at when you got the XML error? There was a
>>>>> recent report about this to the git list, and it's been solved upstream
>>>>> IIRC. It was a simple matter of a missing escape_binary_crap()
>>>>> somewhere.
>>>>
>>>> I was doing a pickaxe search for PERL_STRING_ROUNDUP (however it is
>>>> actually spelled); after about 5 minutes the connection terminated and
>>>> resulted in broken output...
>>>
>>> What's the gitweb link for that? I'm not familiar with how to make it
>>> do a blame search.
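[Editor's note: the GIT_TRACE suggestion above can be made concrete. A minimal sketch follows; the remote name `origin` and the log filename are placeholders, not from the thread.]

```shell
# Re-run the hanging fetch with tracing enabled. git writes "trace:"
# lines to stderr for each subprocess it spawns (ssh, git-upload-pack,
# index-pack, ...), which shows how far the transfer got.
GIT_TRACE=1 git fetch origin 2>fetch-trace.log

# If it hangs again, the last "trace:" line in fetch-trace.log names the
# stuck process; from another terminal, attach strace to it to see which
# syscall it is blocked in:
#   strace -f -p <pid-of-hung-process>
```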
>>
>> Select "pickaxe" in the drop-down on the perl5 gitweb, and then search
>> for PERL_STRLEN_ROUNDUP.
>>
>> The URL generated is:
>>
>> http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
>>
>> Currently it's running for me, and obviously we'd prefer that we don't
>> have N-gazillion people doing the search at once...
>>
>> Ah, it just finished... Same problem. I get the error:
>>
>> XML Parsing Error: no element found
>> Location: http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
>> Line Number 81, Column 1:
>>
>> And the last couple of lines of the HTML are:
>>
>> </td>
>> <td class="link"><a
>> href="/perl.git/commit/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">commit</a>
>> | <a href="/perl.git/tree/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">tree</a></td>
>> </tr>
>> <tr class="light">
>>
>> Seems to me like it timed out while searching...
>>
>> Makes me think the search logic would work better as an incremental
>> asynchronous fetch...
>
> Ah, sounds like it's running a really expensive operation and then
> running into the CGI execution time limit on the webserver (or maybe
> in gitweb), so when the connection closes the browser ends up with
> invalid XHTML.

Yeah, exactly, that's what I meant by "timeout".

> An async fetch would only make sense in that case if your gitweb and
> webserver timeouts made sense, i.e. the gitweb timeout was say 1-2 sec
> less than the webserver timeout.

Well, I was thinking it could search for a single item, then stop, then
search again from there, etc. So each search would be lighter weight...

> Anyway, it has nothing to do with the escaping bug I cited above.

Nod, I suspected as much.

Yves
--
perl -Mre=debug -e "/just|another|perl|hacker/"
--
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html