Re: [PATCH spice-server v2 3/3] Fix crash attempting to connect duplicate channels

On Thu, 2017-09-07 at 03:50 -0400, Frediano Ziglio wrote:
> > 
> > On Wed, 2017-08-30 at 10:36 +0100, Frediano Ziglio wrote:
> > > You could easily trigger this issue using multiple monitors and
> > > a modified spice-gtk client with this patch:
> > > 
> > > --- a/src/channel-main.c
> > > +++ b/src/channel-main.c
> > > @@ -1699,6 +1699,7 @@ static gboolean _channel_new(channel_new_t *c)
> > >  {
> > >      g_return_val_if_fail(c != NULL, FALSE);
> > > 
> > > +    if (c->type == SPICE_CHANNEL_DISPLAY) c->id = 0;
> > >      spice_channel_new(c->session, c->type, c->id);
> > > 
> > >      g_object_unref(c->session);
> > > 
> > > 
> > > which causes a crash like
> > > 
> > > (process:28742): Spice-WARNING **: Failed to create channel client:
> > > Client 0x40246f5d0: duplicate channel type 2 id 0
> > > 2017-08-24 09:36:57.451+0000: shutting down, reason=crashed
> > 
> > It took me quite a while to figure out how this crash is related to
> > the patch below. Perhaps it needs additional detail in the commit
> > log? As far as I can tell, it's like this:
> > 
> > RedChannelClient is a GInitable type, which means that the object is
> > constructed and then the _init() function is called, which can fail.
> > If _init() fails, the newly-created object is destroyed. As part of
> > _init(), we add a new watch for the stream using the core interface
> > that is associated with the channel. After adding the watch, our rcc
> > creation fails (due to the duplicate ID), and the rcc object is
> > unreffed. This results in a call to reds_stream_free() (since the
> > rcc now owns the stream). But in reds_stream_free(), we were trying
> > to remove the watch from the core interface associated with the
> > RedsState. For most channels these two core interfaces are
> > equivalent, but for the Display and Cursor channels it is the
> > GLib-based core interface associated with the RedWorker.
> > 
> 
> You already know where this paragraph will end, right?
> Yes!

fine with me ;)
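
To make that failure path concrete for anyone else reading the thread,
it boils down to the usual GInitable construct-then-init pattern. The
fragment below is only a sketch (the property names are made up; it is
not the real red-channel-client constructor):

    GError *err = NULL;
    RedChannelClient *rcc =
        g_initable_new(RED_TYPE_CHANNEL_CLIENT, NULL, &err,
                       "channel", channel, /* carries the per-channel core interface */
                       "stream", stream,   /* rcc takes ownership of the stream */
                       NULL);
    if (rcc == NULL) {
        /* _init() failed (e.g. duplicate channel type/id): the half-built
         * object is unreffed, finalize runs and reds_stream_free() removes
         * the watch, which must happen on the same core interface that
         * added it, otherwise we get the crash above. */
        spice_warning("Failed to create channel client: %s", err->message);
        g_clear_error(&err);
    }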

> 
> > Do I understand it correctly?
> > 
> > > 
> > > The watch in RedsStream is by default bound to the QEMU-provided
> > > SpiceCoreInterface, while RedChannelClient bound it to the GLib
> > > one, causing a crash when the watch is deleted from RedsStream.
> > > Change the bound interface.
> > > 
> > > Signed-off-by: Frediano Ziglio <fziglio@xxxxxxxxxx>
> > > ---
> > >  server/red-channel-client.c | 14 +++++++++++++-
> > >  1 file changed, 13 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/server/red-channel-client.c b/server/red-channel-client.c
> > > index 145ba43f..acc2b4eb 100644
> > > --- a/server/red-channel-client.c
> > > +++ b/server/red-channel-client.c
> > > @@ -339,6 +339,16 @@ red_channel_client_finalize(GObject *object)
> > >  {
> > >      RedChannelClient *self = RED_CHANNEL_CLIENT(object);
> > >  
> > > +    SpiceCoreInterfaceInternal *core = red_channel_get_core_interface(self->priv->channel);
> > > +    if (self->priv->latency_monitor.timer) {
> > > +        core->timer_remove(core, self->priv->latency_monitor.timer);
> > > +        self->priv->latency_monitor.timer = NULL;
> > > +    }
> > > +    if (self->priv->connectivity_monitor.timer) {
> > > +        core->timer_remove(core, self->priv->connectivity_monitor.timer);
> > > +        self->priv->connectivity_monitor.timer = NULL;
> > > +    }
> > > +
> > 
> > I'm a little bit confused here. Is this part related at all to the
> > crash in the commit message? It doesn't seem to be.
> > 
> 
> Yes, in case monitoring is enabled (currently only for DCC) the
> monitoring timer is registered, causing a use-after-free too.
> Strictly speaking not both timer removals are needed, but I think a
> finalize is safer and more robust if it handles every possible state.
> 
> > >      reds_stream_free(self->priv->stream);
> > >      self->priv->stream = NULL;
> > >  
> > > @@ -921,12 +931,14 @@ static gboolean red_channel_client_initable_init(GInitable *initable,
> > >      }
> > >  
> > >      core = red_channel_get_core_interface(self->priv->channel);
> > > -    if (self->priv->stream)
> > > +    if (self->priv->stream) {
> > > +        reds_stream_set_core_interface(self->priv->stream, core);
> > >          self->priv->stream->watch =
> > >              core->watch_add(core, self->priv->stream->socket,
> > >                              SPICE_WATCH_EVENT_READ,
> > >                              red_channel_client_event,
> > >                              self);
> > > +    }
> > 
> > It seems a little bit weird that the RedChannelClient is poking into
> > the stream structure to set stream->watch...
> > 
> 
> Not a regression of this patch. I found it safer to make sure that
> RedsStream can release the watch safely, as the code already does
> this automatically. The previous patch instead added a "watch"
> in RCC, but it was more intrusive.

Yes, this comment was not about your patch, just a general comment
about the code. I wasn't trying to imply that you should change it in
this patch.

> 
> > What about moving this whole section down below the call to
> > red_client_add_channel() (which is the call that can fail) in order
> > to avoid creating the watch at all when our rcc creation fails?
> > 
> 
> I can do that, but I prefer more robust code over having the finalize
> implementation depend on the red_channel_client_initable_init
> implementation.

Oh, I did not mean that this section of code could be moved *instead
of* the other fixes in your patch. I meant *in addition to*. But you're
right that there could be some unforeseen consequences, so maybe it's
not worth it.
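
Just so it's clear what I meant, the reordering would look roughly like
the sketch below. It is only an illustration; I haven't checked the
exact signature of red_client_add_channel() or where it sits in
red_channel_client_initable_init(), so take the details with a grain of
salt:

    /* sketch: register with the client first and add the watch only on
     * success, so a failed init never leaves a stray watch behind */
    if (!red_client_add_channel(self->priv->client, self, error)) {
        return FALSE; /* no watch was created, nothing to clean up */
    }

    if (self->priv->stream) {
        reds_stream_set_core_interface(self->priv->stream, core);
        self->priv->stream->watch =
            core->watch_add(core, self->priv->stream->socket,
                            SPICE_WATCH_EVENT_READ,
                            red_channel_client_event,
                            self);
    }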

> I won't be surprised if moving these watch/timers leads to different
> regressions. Registering with RedClient (which runs in another
> thread) can lead to the RCC being used for a short while. What would
> happen, for instance, if a migration started at the same time?
> We would probably retain the RCC pointer a bit longer.
> This is, for instance, the reason why the RCC is registered first in
> RedChannel and then in RedClient. Doing it the other way around could
> lead to an RCC that is disconnected yet connected (disconnected
> because red_channel_client_is_connected relies on the
> RedChannel -> RCC link, connected because it has an open TCP
> connection).
> 
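
That makes sense. Just to restate the "disconnected but connected"
situation for the archives, it amounts to something like the following
(purely illustrative, channel_lists_client() is a made-up helper, not
the real implementation):

    /* "connected" is answered by the RedChannel -> RCC link,
     * not by the state of the socket itself */
    gboolean red_channel_client_is_connected(RedChannelClient *rcc)
    {
        /* if the channel no longer lists this client it is considered
         * disconnected, even though the TCP connection is still open */
        return channel_lists_client(rcc->priv->channel, rcc);
    }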
> > >  
> > >      if (self->priv->monitor_latency
> > >          && reds_stream_get_family(self->priv->stream) != AF_UNIX) {
> 
> Frediano
_______________________________________________
Spice-devel mailing list
Spice-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.freedesktop.org/mailman/listinfo/spice-devel



