Re: [PATCH v2] kvm tools: Add QCOW level2 caching support

On Thu, Jun 2, 2011 at 9:36 AM, Prasad Joshi <prasadjoshi124@xxxxxxxxx> wrote:
> On Thu, Jun 2, 2011 at 8:28 AM, Ingo Molnar <mingo@xxxxxxx> wrote:
>>
>> * Prasad Joshi <prasadjoshi124@xxxxxxxxx> wrote:
>>
>>> Summary of performance numbers
>>> ==============================
>>> There is not much difference when sequential character operations are
>>> performed; the code with caching performed better by a small margin. The
>>> caching code's performance rose by 12% with sequential block output and
>>> dropped by 0.5% with sequential block input. The caching code also
>>> suffered with random seeks, performing worse by 12%. The performance
>>> numbers improved drastically with sequential creates (62%) and delete
>>> operations (30%).
>>
>> Looking at the numbers i think it's pretty clear that from this point
>> on the quality of IO tests should be improved: Bonnie is too noisy
>> and does not cut it anymore for finer enhancements.
>>
>> To make measurements easier you could also do a simple trick: put
>> *all* of the disk image into /dev/shm and add a command-line debug
>> option that add a fixed-amount udelay(1000) call every time the code
>> reads from the disk image.
>>
>> This introduces a ~1msec delay and thus simulates IO, but the delays
>> are *constant* [make sure you use a high-res timers kernel], so they
>> do not result in nearly as much measurement noise as real block IO
>> does.
>>
>> The IO delays will still be there, so any caching advantages (and CPU
>> overhead reductions) will be measurable very clearly.
>>
>> This way you are basically 'emulating' a real disk drive but you will
>> emulate uniform latencies, which makes measurements a lot more
>> reliable - while still relevant to the end result.
>>
>> So if under such a measurement model you can prove an improvement
>> with a patch, that improvement will be there with real disks as well
>> - just harder to prove.
>>
>> Wanna try this? I really think you are hitting the limits of your
>> current measurement methodology and you will be wasting time running
>> more measurements instead of saving time doing more intelligent
>> measurements ;-)
>>
>
> Thanks Ingo, will try this method.
>
> I am not sure how to induce the delay you mentioned. I vaguely
> remember you suggesting something similar a few days back. Let me see
> if I can find the correct arguments to use this delay; otherwise I
> will post a query.
>

I repeated the test after copying the image file to /dev/shm, as Ingo
suggested.

BEFORE CACHING
=================
$ bonnie++ -d tmp/ -c 2 -s 1024
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   2     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
prasad-virtual-m 1G   996  99 433356  59 172755  42  5331 100 339838  49 15534 749
Latency             17704us   49654us   61299us    6152us    1838us     106ms
Version  1.96       ------Sequential Create------ --------Random Create--------
prasad-virtual-mach -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               475us     315us     358us     565us      56us      91us
1.96,1.96,prasad-virtual-machine,2,1307032968,1G,,996,99,433356,59,172755,42,5331,100,339838,49,15534,749,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17704us,49654us,61299us,6152us,1838us,106ms,475us,315us,358us,565us,56us,91us


AFTER CACHING
===============
$ bonnie++ -d tmp/ -c 2 -s 1024
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   2     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
prasad-virtual-m 1G  1054  99 558323  74 226297  55  5436  99 504042  70 +++++ +++
Latency             12776us   23582us   39912us    6778us   20050us   30421us
Version  1.96       ------Sequential Create------ --------Random Create--------
prasad-virtual-mach -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency               334us     391us     427us     472us      82us      79us
1.96,1.96,prasad-virtual-machine,2,1307032815,1G,,1054,99,558323,74,226297,55,5436,99,504042,70,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12776us,23582us,39912us,6778us,20050us,30421us,334us,391us,427us,472us,82us,79us

During both tests the machine was started with 512MB of memory, and the
test was performed without the additional IO delay.
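For what it's worth, the percent change in the K/sec figures between the two runs works out as follows (the numbers are copied from the reports above; the labels are mine):

```python
# Percent change in the bonnie++ K/sec columns, before vs. after caching.
before = {"seq block out": 433356, "rewrite": 172755, "seq block in": 339838}
after  = {"seq block out": 558323, "rewrite": 226297, "seq block in": 504042}

change = {k: 100.0 * (after[k] - before[k]) / before[k] for k in before}
for k, pct in change.items():
    print(f"{k}: {pct:+.1f}%")
```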

The tests show that caching improves performance. Thanks Ingo.

Regards,
Prasad


> Thanks and Regards,
> Prasad
>
>> Thanks,
>>
>>        Ingo
>>
>

