Re: [PATCH v2 8/8] migration: do not flush_compressed_data at the end of each iteration

On Mon, Jul 23, 2018 at 04:53:11PM +0800, Xiao Guangrong wrote:
> 
> 
> On 07/23/2018 04:35 PM, Peter Xu wrote:
> > On Mon, Jul 23, 2018 at 04:05:21PM +0800, Xiao Guangrong wrote:
> > > 
> > > 
> > > On 07/23/2018 01:49 PM, Peter Xu wrote:
> > > > On Thu, Jul 19, 2018 at 08:15:20PM +0800, guangrong.xiao@xxxxxxxxx wrote:
> > > > > From: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxx>
> > > > > 
> > > > > flush_compressed_data() needs to wait for all compression threads
> > > > > to finish their work; after that, all threads are idle until the
> > > > > migration feeds new requests to them. Reducing the number of calls
> > > > > to it can improve throughput and use CPU resources more effectively.
> > > > > 
> > > > > We do not need to flush all threads at the end of each iteration;
> > > > > the data can be kept locally until the memory block changes or the
> > > > > memory migration starts over, in which case we will meet a dirtied
> > > > > page that may still exist in the compression threads' ring.
> > > > > 
> > > > > Signed-off-by: Xiao Guangrong <xiaoguangrong@xxxxxxxxxxx>
> > > > > ---
> > > > >    migration/ram.c | 15 ++++++++++++++-
> > > > >    1 file changed, 14 insertions(+), 1 deletion(-)
> > > > > 
> > > > > diff --git a/migration/ram.c b/migration/ram.c
> > > > > index 89305c7af5..fdab13821d 100644
> > > > > --- a/migration/ram.c
> > > > > +++ b/migration/ram.c
> > > > > @@ -315,6 +315,8 @@ struct RAMState {
> > > > >        uint64_t iterations;
> > > > >        /* number of dirty bits in the bitmap */
> > > > >        uint64_t migration_dirty_pages;
> > > > > +    /* last dirty_sync_count we have seen */
> > > > > +    uint64_t dirty_sync_count;
> > > > 
> > > > Better to suffix it with "_prev" as well?  That way we can quickly
> > > > see that it's only a cache and may differ from the one in
> > > > ram_counters.
> > > 
> > > Indeed, will update it.
> > > 
> > > > 
> > > > >        /* protects modification of the bitmap */
> > > > >        QemuMutex bitmap_mutex;
> > > > >        /* The RAMBlock used in the last src_page_requests */
> > > > > @@ -2532,6 +2534,7 @@ static void ram_save_cleanup(void *opaque)
> > > > >        }
> > > > >        xbzrle_cleanup();
> > > > > +    flush_compressed_data(*rsp);
> > > > 
> > > > Could I ask why do we need this considering that we have
> > > > compress_threads_save_cleanup() right down there?
> > > 
> > > Dave asked about it too. :(
> > > 
> > > "This is for the error condition, if any error occurred during live migration,
> > > there is no chance to call ram_save_complete. After using the lockless
> > > multithreads model, we assert all requests have been handled before destroy
> > > the work threads."
> > > 
> > > That makes sure there is nothing left in the threads before doing
> > > compress_threads_save_cleanup(), matching the current behavior.  For
> > > the lockless multithread model, we check that all requests are free
> > > before destroying them.
> > 
> > But why do we need to explicitly flush it here?  Now in
> > compress_threads_save_cleanup() we have qemu_fclose() on the buffers,
> > which logically will flush the data and clean up everything too.
> > Would that suffice?
> > 
> 
> Yes, it's sufficient for the current thread model, so I will drop it for
> now and add it back when the lockless multithread model is applied. :)

Ah, I think I see your point.  Even so, I would think it better to do
any extra cleanup directly in compress_threads_save_cleanup() if
possible.

Regards,

-- 
Peter Xu
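
For readers following the thread, here is a minimal sketch of the approach
described in the commit message: only drain the compression threads when a
new round of dirty-bitmap syncing begins, rather than at the end of every
iteration (the call the subject line refers to).  flush_compressed_data(),
RAMState and ram_counters are existing identifiers in migration/ram.c; the
helper name and its call site below are illustrative assumptions, not the
actual hunks of this patch.

/*
 * Sketch only -- not the actual hunks of this patch.
 * flush_compressed_data(), RAMState and ram_counters exist in
 * migration/ram.c; this helper and where it would be called from are
 * hypothetical.
 */
static void flush_compressed_data_if_new_round(RAMState *rs)
{
    /*
     * ram_counters.dirty_sync_count is bumped each time the dirty bitmap
     * is re-synced, i.e. when a new round of RAM migration starts.  Only
     * then can a page still queued in a compression thread be dirtied
     * again, so only then do the threads really need to be drained.
     */
    if (rs->dirty_sync_count != ram_counters.dirty_sync_count) {
        flush_compressed_data(rs);
        rs->dirty_sync_count = ram_counters.dirty_sync_count;
    }
}

With a check like this on the page-saving path, the unconditional
flush_compressed_data() at the end of each iteration can be dropped, which
is where the throughput improvement described in the commit message comes
from.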


