Re: Git pull stuck when Trace2 target set to Unix Stream Socket


On 4/13/2020 1:18 PM, Jeff King wrote:
On Mon, Apr 13, 2020 at 02:05:00PM +0200, Son Luong Ngoc wrote:

I am trying to write a simple git trace2 event collector, and I noticed
that when git does a git pull with trace events being sent to a unix
stream socket, the entire operation halts.

Reproduce as follows:
```
cd git/git
git config trace2.eventTarget af_unix:stream:/tmp/git_trace.sock
git config trace2.eventBrief false
rm -f /tmp/git_trace.sock && nc -lkU /tmp/git_trace.sock

# In a different terminal
git pull # pull gets stuck and never completes
```

I think the issue is the use of netcat as the server side.

Your git-pull involves multiple simultaneously-running Git processes.
But "nc -k" will only accept() a new client once the old one has
disconnected. So we'd deadlock any time we have this situation:

   - process A opens a stream to the socket, and keeps it open

   - process A spawns process B and waits for it to finish

   - process B tries to open a stream to the socket, which will block
     waiting for netcat to accept()

Now A cannot make forward progress until B finishes, but B will not make
forward progress until A closes the socket.

I was able to reproduce the issue locally, and process "A" was git-pull
and process "B" was git-merge.

Thanks for the great explanation.  Yes, each Git command will open
its own connection to the socket, so you need your server to be
able to process multiple incoming connections, such as with the usual
accept() loop.
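
Such a server can be sketched in Python (a minimal illustration, not part
of Git; the socket path and the `events` list are assumptions of this
sketch — trace2's event target emits newline-delimited JSON records):

```
import json
import os
import socket
import threading

events = []                     # collected trace2 event records
events_lock = threading.Lock()

def handle(conn):
    # Each Git process streams newline-delimited JSON events
    # over its own connection until it exits.
    with conn, conn.makefile() as f:
        for line in f:
            with events_lock:
                events.append(json.loads(line))

def serve(path="/tmp/git_trace.sock"):
    # Remove a stale socket from a previous run, then listen.
    if os.path.exists(path):
        os.unlink(path)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen()
    while True:
        conn, _ = srv.accept()
        # One thread per connection: a child Git process can connect
        # while its parent still holds its own stream open, which is
        # exactly the case that deadlocks with "nc -k".
        threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

Unlike the netcat server, this keeps accepting new clients while earlier
connections are still open.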

There is a "trace2.destinationDebug" config setting, aka the
GIT_TRACE2_DST_DEBUG environment variable, which when enabled will
print warning messages if Git fails to open the trace2 target file
or socket.  This might help you track down issues.  (It is off by
default.)
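
For example (the git-pull invocation here is just an illustration):

```
# Warn, instead of silently disabling tracing, when the
# trace2 target cannot be opened:
git config trace2.destinationDebug true

# or per invocation, via the environment:
GIT_TRACE2_DST_DEBUG=1 git pull
```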

You might find it easier to set the trace2 path to that of an existing
directory.  Then each Git command will create a file, so you don't have
to worry about interleaved output or having your server be alive at all
times.  You could just let the files accumulate in that directory and
have a cron job process and delete them periodically.
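
A sketch of that setup (the /tmp/git-trace2 path is illustrative; any
existing directory works):

```
# Point the event target at an existing directory instead of a socket:
mkdir -p /tmp/git-trace2
git config trace2.eventTarget /tmp/git-trace2

# Each Git process now writes its own event file into the directory,
# so there is no server to keep alive and no interleaving:
git pull
ls /tmp/git-trace2
```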

Jeff


