Re: [RFC Design Doc] Speed up live migration by skipping free pages

On Wed, Mar 23, 2016 at 02:35:42PM +0000, Li, Liang Z wrote:
>> >No special purpose. Maybe it's caused by the email client. I didn't
>> >find the character in the original doc.
>> >
>> 
>> https://lists.gnu.org/archive/html/qemu-devel/2016-03/msg00715.html
>> 
>> You could take a look at this link; there is a '>' before "From".
>
>Yes, there is. 
>
>> >> >
>> >> >6. Handling page cache in the guest
>> >> >The memory used for the page cache in the guest will change depending on
>> >> >the workload; if the guest runs a block-I/O-intensive workload, there
>> >> >will
>> >>
>> >> Would this improvement still benefit much when the guest has only a few free pages?
>> >
>> >Yes, the improvement is very obvious.
>> >
>> 
>> Good to know this.
>> 
>> >> In your performance data, I think Case 2 mimics this kind of case,
>> >> although the memory-consuming task is stopped before migration. If it
>> >> continued, would we still perform better than before?
>> >
>> >Actually, my RFC patch didn't consider the page cache; Roman raised this
>> >issue, so I added this part to this doc.
>> >
>> >Case 2 didn't mimic this kind of scenario. The workload is a
>> >memory-consuming workload, not a block-I/O-intensive workload, so there
>> >is not much page cache in this case.
>> >
>> >If the workload in Case 2 keeps running, as long as it does not write all
>> >the memory it allocates, we can still get benefits.
>> >
>> 
>> It sounds like I have little knowledge of the page cache and its
>> relationship to free pages and I/O-intensive work.
>> 
>> Here is my personal understanding; I would appreciate it if you could
>> correct me.
>> 
>>                 +---------+
>>                 |PageCache|
>>                 +---------+
>>       +---------+---------+---------+---------+
>>       |Page     |Page     |Free Page|Page     |
>>       +---------+---------+---------+---------+
>> 
>> A Free Page is a page on the free_list, and PageCache is a page cached in
>> the CPU's cache lines?
>
>No, the page cache is quite different from a CPU cache line.
>" In computing, a page cache, sometimes also called disk cache,[2] is a transparent cache
> for the pages originating from a secondary storage device such as a hard disk drive (HDD).
> The operating system keeps a page cache in otherwise unused portions of the main
> memory (RAM), resulting in quicker access to the contents of cached pages and 
>overall performance improvements "
>you can refer to https://en.wikipedia.org/wiki/Page_cache
>for more details.
>

My poor knowledge~ I should have googled it before guessing at the meaning
of the terminology.

If my understanding is correct, the page cache is counted as free pages,
while actually we should migrate those pages instead of filtering them out.
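
To check that, here is a tiny sketch (my own illustration, nothing from your
patch) that just prints the two counters a Linux guest exposes in
/proc/meminfo, so I can see how much memory is actually free versus how much
is page cache under a given workload:

/* Illustration only: read MemFree and Cached from /proc/meminfo inside the
 * guest; these are the two numbers I want to compare under a block-I/O
 * intensive workload. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    unsigned long mem_free_kb = 0, cached_kb = 0;

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f)) {
        /* sscanf only updates a value when its literal prefix matches */
        sscanf(line, "MemFree: %lu kB", &mem_free_kb);
        sscanf(line, "Cached: %lu kB", &cached_kb);
    }
    fclose(f);

    printf("MemFree: %lu kB\n", mem_free_kb);
    printf("Cached:  %lu kB (page cache)\n", cached_kb);
    return 0;
}

If Cached dwarfs MemFree under a block-I/O-intensive workload, that is
exactly the situation I was asking about.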

>
>> When a memory-consuming task runs, it leaves few free pages in the whole
>> system. What is the consequence when an I/O-intensive workload runs? I
>> guess it still leaves few free pages. And will there be some problem
>> keeping the page cache in sync?
>> 
>> >>
>> >> I am wondering whether it is possible to have a threshold, perhaps a
>> >> configurable one, for deciding when to use the free page bitmap
>> >> optimization.
>> >>
>> >
>> >Could you elaborate on your idea? How would it work?
>> >
>> 
>> Let's go back to Case 2. We run a memory-consuming task which leaves few
>> free pages in the whole system, which means that from QEMU's perspective
>> little of the dirty memory is filtered out by the free page list. My
>> original question was whether your solution still benefits in this
>> scenario. As you mentioned, it works fine, so maybe this threshold is not
>> necessary.
>> 
>I didn't quite understand your question before.
>The benefit we get depends on the count of free pages we can filter out.
>This is always true.
>
>> My original idea is that in QEMU we can calculate the percentage of free
>> pages in the whole system. If it finds that only a small percentage of
>> pages are free, then we don't need to bother using this method.
>> 
>
>I got you. The threshold can be used for optimization, but the effect is very limited.
>If there are only a few free pages, the process of constructing the free page
>bitmap is very quick.
>But we could stop doing the subsequent steps, e.g. sending the free page bitmap and
>doing the bitmap operation. Theoretically, that may help save some time, maybe
>several ms.
>

Ha, you got what I mean.
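
Just to make sure I also picture the filtering step correctly: the bitmap
operation you mention would, as I understand it, look roughly like the sketch
below on the QEMU side (all names are made up for illustration, not the
actual RFC code):

#include <stddef.h>

#define BITS_PER_LONG (8 * sizeof(unsigned long))

/* Mental model only: pages the guest reports as free are cleared from the
 * migration dirty bitmap, so they are never sent. */
static void filter_out_guest_free_pages(unsigned long *migration_bitmap,
                                        const unsigned long *free_page_bitmap,
                                        size_t nr_pages)
{
    size_t i, nr_longs = (nr_pages + BITS_PER_LONG - 1) / BITS_PER_LONG;

    for (i = 0; i < nr_longs; i++) {
        /* dirty &= ~free: a page reported free does not need migrating */
        migration_bitmap[i] &= ~free_page_bitmap[i];
    }
}

If that is roughly right, the extra cost is just constructing and
transferring the free page bitmap plus this one pass over it.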

>I think it is very rare for a VM to have no free pages at all; in the worst case, there are
>still several MB of free pages. The proper threshold should be determined by comparing
>the extra time spent on processing the free page bitmap with the time spent on sending
>those several MB of free pages through the network. If the former is longer, we can stop
>using this method. But that means taking the network bandwidth into consideration,
>which is too complicated and not worth doing.
>

Yes, after some thought, it may not be that easy, and may not be worth doing
this optimization.
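
Writing the comparison down for myself (the numbers and names below are only
an example, not measurements): even several MB of raw free pages take only a
few ms to send on a ~10 Gbps link, which is in the same ballpark as the
bitmap-processing time you estimate, so the threshold decision would be
marginal anyway.

#include <stdbool.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Back-of-the-envelope check, illustration only: is skipping the free pages
 * expected to save more time than the bitmap processing costs? */
static bool free_page_filter_worthwhile(uint64_t nr_free_pages,
                                        uint64_t bandwidth_bytes_per_ms,
                                        uint64_t bitmap_overhead_ms)
{
    /* Time (ms) we would otherwise spend sending the free pages as RAM. */
    uint64_t send_time_ms = nr_free_pages * PAGE_SIZE / bandwidth_bytes_per_ms;

    return send_time_ms > bitmap_overhead_ms;
}

For example, free_page_filter_worthwhile(1024, 1250 * 1000, 5) comes out
false: 4 MB of free pages cost about 3 ms to send at 10 Gbps, less than an
assumed 5 ms of bitmap overhead, which is consistent with your point that a
threshold could only ever save a few ms.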

>Thanks
>
>Liang
>> Have a nice day~
>> 
>> >Liang
>> >
>> >>
>> >> --
>> >> Richard Yang
>> >> Help you, Help me
>> 
>> --
>> Richard Yang
>> Help you, Help me
-- 
Richard Yang
Help you, Help me


