Pino, thank you for the review. Could you please take a look at patch
#42 in this series? It's the one in which I had to add some explicit
unlock calls, so it'd be good for someone who knows the code to review
that part.

On Tue, Apr 14, 2020 at 8:15 AM Pino Toscano <ptoscano@xxxxxxxxxx> wrote:
>
> On Friday, 10 April 2020 15:54:32 CEST Rafael Fonseca wrote:
> > @@ -346,11 +342,12 @@ esxStreamClose(virStreamPtr stream, bool finish)
> >  {
> >      int result = 0;
> >      esxStreamPrivate *priv = stream->privateData;
> > +    g_autoptr(GMutexLocker) locker = NULL;
> >
> >      if (!priv)
> >          return 0;
> >
> > -    virMutexLock(&priv->curl->lock);
> > +    locker = g_mutex_locker_new(&priv->curl->lock);
> >
> >      if (finish && priv->backlog_used > 0) {
> >          virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
> > @@ -360,8 +357,6 @@ esxStreamClose(virStreamPtr stream, bool finish)
> >
> >      stream->privateData = NULL;
> >
> > -    virMutexUnlock(&priv->curl->lock);
> > -
> >      esxFreeStreamPrivate(&priv);
>
> Careful here, this is a problematic situation:
> - the lock is indirectly part of the @priv structure
> - esxFreeStreamPrivate calls esxVI_CURL_Free(priv->curl)
> - esxVI_CURL_Free calls virMutexDestroy(&item->lock)
> - the lock is still held, so this will deadlock (or crash, or do
>   something else not good anyway)
>
> You must unlock the mutex before esxFreeStreamPrivate is called.

Oops, nice catch.

> I did not check the other patches of this long series for similar
> potential issues, please do check them.

OK, will do.

> > diff --git a/src/esx/esx_vi.c b/src/esx/esx_vi.c
> > index 16690edfbe..ed6c6c28cd 100644
> > --- a/src/esx/esx_vi.c
> > +++ b/src/esx/esx_vi.c
> > @@ -433,14 +429,13 @@ int
> >  esxVI_CURL_Upload(esxVI_CURL *curl, const char *url, const char *content)
> >  {
> >      int responseCode = 0;
> > +    g_autoptr(GMutexLocker) locker = g_mutex_locker_new(&curl->lock);
> >
> >      if (!content) {
> >          virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid argument"));
> >          return -1;
> >      }
> >
> > -    virMutexLock(&curl->lock);
> > -
>
> Careful #2 here about locking earlier: while usually this is not an
> issue, it can be if the code that previously ran without the lock
> held can also be reached from code paths that already hold the lock.
>
> Again, this must be thoroughly checked in the whole patch series.

I'll recheck, but I tried to do it only in places where it was obvious
the lock was not held because the preceding code was just an
early-return check.

Att.

--
Rafael Fonseca
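
P.S. For the archives, here is a minimal, self-contained sketch of the
deadlock in point #1 and of the explicit-unlock fix. The Owner type and
the function names are made up for illustration; only
g_autoptr(GMutexLocker), g_mutex_locker_new(), g_mutex_locker_free() and
g_clear_pointer() are real GLib API (GMutexLocker needs GLib >= 2.44).

    #include <glib.h>

    /* Hypothetical stand-in for a structure that, like esxStreamPrivate,
     * owns the very mutex we are holding. */
    typedef struct {
        GMutex lock;
    } Owner;

    static void
    owner_free(Owner *owner)
    {
        /* Clearing a mutex that is still locked is undefined behaviour;
         * this is the trap in the quoted esxStreamClose() hunk. */
        g_mutex_clear(&owner->lock);
        g_free(owner);
    }

    static void
    close_owner(Owner *owner)
    {
        g_autoptr(GMutexLocker) locker = g_mutex_locker_new(&owner->lock);

        /* ... work that needs the lock ... */

        /* Unlock explicitly before the mutex's memory goes away; the
         * automatic cleanup at scope exit then sees NULL and does
         * nothing. */
        g_clear_pointer(&locker, g_mutex_locker_free);

        owner_free(owner);
    }

    int
    main(void)
    {
        Owner *owner = g_new0(Owner, 1);

        g_mutex_init(&owner->lock);
        close_owner(owner);
        return 0;
    }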
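
P.P.S. And a similar standalone sketch of keeping the original lock scope
for point #2: declare the locker unset and assign it only after the
early-return checks, exactly where virMutexLock() used to sit.
validate_and_work() is again a made-up name, not libvirt code.

    #include <glib.h>

    /* A statically allocated GMutex is usable without g_mutex_init(). */
    static GMutex lock;

    static int
    validate_and_work(const char *content)
    {
        g_autoptr(GMutexLocker) locker = NULL;

        /* The argument check runs unlocked, matching the old
         * virMutexLock() placement in esxVI_CURL_Upload(). */
        if (!content)
            return -1;

        locker = g_mutex_locker_new(&lock);

        /* ... work under the lock; it is released automatically when
         * locker goes out of scope ... */
        return 0;
    }

    int
    main(void)
    {
        validate_and_work(NULL);      /* early return, lock never taken */
        validate_and_work("payload"); /* lock taken after the check */
        return 0;
    }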