Re: [RFC PATCH] qcow2: Fix race in cache invalidation

On 25.09.2014 at 10:41, Alexey Kardashevskiy wrote:
> On 09/24/2014 07:48 PM, Kevin Wolf wrote:
> > On 23.09.2014 at 10:47, Alexey Kardashevskiy wrote:
> >> On 09/19/2014 06:47 PM, Kevin Wolf wrote:
> >>> On 16.09.2014 at 14:59, Paolo Bonzini wrote:
> >>>> On 16/09/2014 14:52, Kevin Wolf wrote:
> >>>>> Yes, that's true. We can't fix this problem in qcow2, though, because
> >>>>> it's a more general one.  I think we must make sure that
> >>>>> bdrv_invalidate_cache() doesn't yield.
> >>>>>
> >>>>> Either by forbidding to run bdrv_invalidate_cache() in a coroutine and
> >>>>> moving the problem to the caller (where and why is it even called from a
> >>>>> coroutine?), or possibly by creating a new coroutine for the driver
> >>>>> callback and running that in a nested event loop that only handles
> >>>>> bdrv_invalidate_cache() callbacks, so that the NBD server doesn't get a
> >>>>> chance to process new requests in this thread.
> >>>>
> >>>> Incoming migration runs in a coroutine (the coroutine entry point is
> >>>> process_incoming_migration_co).  But everything after qemu_fclose() can
> >>>> probably be moved into a separate bottom half, so that it gets out of
> >>>> coroutine context.
> >>>
> >>> Alexey, you should probably rather try this (and add a bdrv_drain_all()
> >>> in bdrv_invalidate_cache) than messing around with qcow2 locks. This
> >>> isn't a problem that can be completely fixed in qcow2.
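
A minimal sketch of the bdrv_drain_all() part, just to show the intended
placement (the rest of the function is elided; this is not the actual
code):

void bdrv_invalidate_cache(BlockDriverState *bs, Error **errp)
{
    /* quiesce all in-flight requests first, so nothing is still
     * running while the driver drops and re-reads its metadata */
    bdrv_drain_all();

    /* ... existing invalidation code, elided ... */
}
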
> >>
> >>
> >> Ok. Tried :) Not very successful though. The patch is below.
> >>
> >> Is that the correct bottom half? When I did it, I started getting crashes
> >> in various spots on accesses to s->l1_cache, which is NULL after qcow2_close.
> >> Normally the code would check s->l1_size and then use it, but they are out of sync.
> > 
> > No, that's not the place we were talking about.
> > 
> > What Paolo meant is that in process_incoming_migration_co(), you can
> > split out the final part that calls bdrv_invalidate_cache_all() into a
> > BH (you need to move everything until the end of the function into the
> > BH then). His suggestion was to move everything below the qemu_fclose().
> 
> Ufff. I took it very literally. Ok. Let it be
> process_incoming_migration_co(). But there is something I am missing about
> BHs. Here is a patch:
> 
> 
> diff --git a/migration.c b/migration.c
> index 6db04a6..101043e 100644
> --- a/migration.c
> +++ b/migration.c
> @@ -88,6 +88,9 @@ void qemu_start_incoming_migration(const char *uri, Error **errp)
>      }
>  }
> 
> +static QEMUBH *migration_complete_bh;
> +static void process_incoming_migration_complete(void *opaque);
> +
>  static void process_incoming_migration_co(void *opaque)
>  {
>      QEMUFile *f = opaque;
> @@ -117,6 +120,16 @@ static void process_incoming_migration_co(void *opaque)
>      } else {
>          runstate_set(RUN_STATE_PAUSED);
>      }
> +
> +    migration_complete_bh = aio_bh_new(qemu_get_aio_context(),
> +                                       process_incoming_migration_complete,
> +                                       NULL);
> +}
> +
> +static void process_incoming_migration_complete(void *opaque)
> +{
> +    qemu_bh_delete(migration_complete_bh);
> +    migration_complete_bh = NULL;
>  }
> 
>  void process_incoming_migration(QEMUFile *f)
> 
> 
> 
> Then I ran it under gdb and set a breakpoint in
> process_incoming_migration_complete - and it never hits. Why is that? Thanks.

You need to call qemu_bh_schedule().
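
Roughly like this (just a sketch; as suggested earlier in the thread,
everything that currently comes after qemu_fclose() in
process_incoming_migration_co() would move into the callback, too):

    migration_complete_bh = aio_bh_new(qemu_get_aio_context(),
                                       process_incoming_migration_complete,
                                       NULL);
    /* a BH only ever runs once it has been scheduled */
    qemu_bh_schedule(migration_complete_bh);

static void process_incoming_migration_complete(void *opaque)
{
    /* bdrv_invalidate_cache_all(), the runstate handling etc. would
     * go here, outside of coroutine context */
    qemu_bh_delete(migration_complete_bh);
    migration_complete_bh = NULL;
}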

Kevin
