Brady Catherman <brady@xxxxxxxxxxxxx> wrote:
> I have a git repo that fails to clone or fetch over smart-http, but
> works great over dav. I am wondering if somebody can help me debug the
> issue since I am at a loss why this is happening.

Yea, I'm at a loss too.  :-(

> The interesting parts of a strace of git-http-backend following a git
> clone follow:
...
> 12037 close(1) = 0

Why did the CGI process just close stdout?

I'm guessing this is part of the exec of the upload-pack child in the
background.  Oh, right, we closed it because we passed the descriptor
to the child and now the parent CGI doesn't want it anymore.

> 12037 write(1, "Status: 500 Internal Server Error\r\n", 35) = -1 EBADF
> (Bad file descriptor)

This smells like the backend upload-pack process got into trouble and
exited early, so now the CGI is trying to change the status to 500
because the backend exited with a non-zero status.  Only it's too late,
as the file descriptor was already closed after the successful fork().

We're stuck in a loop because we're failing inside the die routine.
Because the file descriptor is closed, safe_write(), which is what
originated that write(1, ...) above, tries to call die().  But that
die() call invokes die_webcgi(), which in turn tries to write that 500
error message to descriptor 1 again.  So this goes on for a while...

> 12037 --- SIGSEGV (Segmentation fault) @ 0 (0) ---
> 12037 +++ killed by SIGSEGV +++

...and then we run out of stack space from the deep recursion, and the
process is killed by SIGSEGV.

> Anybody have any thoughts why this would happen or what can be done to
> fix it?

A gdb trace or something of the upload-pack process would help.  That
process appears to have also died, and we don't know why.  Its death is
what triggered the CGI crash above.

I'll try to send a patch for this recursive crashing problem in the CGI.

-- 
Shawn.
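
For illustration, here is a minimal, self-contained C sketch of the kind
of re-entrancy guard that would break this loop.  The names and helper
structure (safe_write_or_die, report path, the "dead" flag) are
assumptions for the sketch, not necessarily the exact code in
http-backend.c:

#include <errno.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void die(const char *fmt, ...);

/* Write helper that reports failure through die(), like safe_write(). */
static void safe_write_or_die(int fd, const void *buf, size_t len)
{
	ssize_t n = write(fd, buf, len);
	if (n < 0 || (size_t)n != len)
		die("write error: %s", strerror(errno)); /* re-enters the die path */
}

/* CGI die handler: tries to emit a 500 status before exiting. */
static void die_webcgi(const char *fmt, va_list params)
{
	static int dead;

	if (!dead) {
		/* Guard: never report an error about reporting an error. */
		dead = 1;
		const char *status = "Status: 500 Internal Server Error\r\n\r\n";
		safe_write_or_die(1, status, strlen(status));
		vfprintf(stderr, fmt, params);
		fputc('\n', stderr);
	}
	exit(1);
}

static void die(const char *fmt, ...)
{
	va_list params;
	va_start(params, fmt);
	die_webcgi(fmt, params);
	va_end(params); /* not reached */
}

Without the "dead" flag, a failed write of the status line calls die(),
which calls die_webcgi(), which tries the same write again, and the
recursion only stops when the stack is exhausted and the kernel sends
SIGSEGV; with the flag set on first entry, the second pass skips the
write and simply exits.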