Re: KVM performance Java server/MySQL...

Other optimizations people are testing out there:

- use "nohz=off" on the kernel command line in menu.lst
- disable cgroups completely, using cgclear and turning off the cgred
  and cgconfig daemons
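
For reference, this is roughly what those two look like on a CentOS 6
style setup (service names and paths may differ on other distros):

    # add nohz=off to the kernel line in /boot/grub/menu.lst, e.g.
    #   kernel /vmlinuz-2.6.32-... ro root=... nohz=off

    # tear down all mounted cgroup hierarchies
    cgclear

    # stop the cgroup daemons and keep them off across reboots
    service cgred stop
    service cgconfig stop
    chkconfig cgred off
    chkconfig cgconfig off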

And from a personal point of view, we've always tried to run MySQL on
a different server from JBoss.
99% of the time that is far better for performance and tuning.

David

2013/2/8 Erik Brakkee <erik@xxxxxxxxxxx>:
> <quote who="Erik Brakkee">
>> <quote who="Gleb Natapov">
>>> On Thu, Feb 07, 2013 at 04:41:31PM +0100, Erik Brakkee wrote:
>>>> Hi,
>>>>
>>>>
>>>> We have been benchmarking a Java server application (Java 6 update
>>>> 29) that requires a MySQL database. The scenario is quite simple: we
>>>> open a web page which displays a lot of search results. To get the
>>>> content of the page, one big query is executed along with many
>>>> smaller queries to retrieve the data. The test from the Java side is
>>>> single-threaded.
>>>>
>>>> We have used the following deployment scenarios:
>>>> 1. JBoss in a VM, MySQL in a separate VM
>>>> 2. JBoss in a VM, MySQL native
>>>> 3. JBoss native, MySQL in a VM
>>>> 4. JBoss native and MySQL native on the same physical machine
>>>> 5. JBoss and MySQL virtualized in the same VM
>>>>
>>>> What we see is that the performance (time to execute) is practically
>>>> the same for all scenarios (approx. 30 seconds), except for scenario
>>>> 4, which takes approx. 21 seconds. This difference is quite large and
>>>> contrasts with many other tests on the internet and with other
>>>> benchmarks we did previously.
>>>>
>>>> We have tried pinning the VMs, turning hyperthreading off, and
>>>> varying the CPU model (including host-passthrough), but none of this
>>>> had any significant impact.
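
For anyone reproducing this: pinning is normally done per vCPU, either
at runtime with virsh or persistently in the domain XML. A minimal
sketch for the 4-vCPU guest described below, with host CPUs 2-5 chosen
arbitrarily:

    virsh vcpupin master-data05-v50 0 2
    virsh vcpupin master-data05-v50 1 3
    virsh vcpupin master-data05-v50 2 4
    virsh vcpupin master-data05-v50 3 5

or, equivalently, in the <cputune> section of the domain XML:

    <cputune>
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='4'/>
      <vcpupin vcpu='3' cpuset='5'/>
    </cputune>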
>>>>
>>>> The hardware on which we are running is a dual-socket E5-2650 machine
>>>> with 64 GB of memory. The server is a Dell PowerEdge R720 with SAS
>>>> disks and a RAID controller with battery-backed writeback cache.
>>>> Transparent huge pages are turned on.
>>>>
>>>> We are at a loss to explain the differences in the test. In
>>>> particular, we would have expected the worst performance when both
>>>> were running virtualized, and we would have expected better
>>>> performance with JBoss and MySQL virtualized in the same VM than with
>>>> JBoss and MySQL running in separate virtual machines. It looks like
>>>> we are dealing with multiple issues here, not just one.
>>>>
>>>> Right now we have a 30% penalty for running virtualized, which is too
>>>> much for us; 10% would be all right. What would you suggest we do to
>>>> troubleshoot this further?
>>>>
>>>
>>> What are your kernel/qemu versions and the command line you are using
>>> to start a VM?
>>
>> CentOS 6.3, kernel 2.6.32-279.22.1.el6.x86_64
>>
>> $ rpm -qf /usr/libexec/qemu-kvm
>> qemu-kvm-0.12.1.2-2.295.el6_3.10.x86_64
>>
>> The guest is also running CentOS 6.3 with the same settings. Settings
>> that can influence Java performance (such as transparent huge pages)
>> are turned on on both the host and the guest (see the remark on
>> hugepages below).
>>
>> The command-line is as follows:
>>
>> /usr/libexec/qemu-kvm -S -M rhel6.3.0 -enable-kvm -m 8192 -mem-prealloc
>> -mem-path /hugepages/libvirt/qemu -smp 4,sockets=4,cores=1,threads=1 -name
>> master-data05-v50 -uuid 79ddd84d-937e-357b-8e57-c7f487dc3464 -nodefconfig
>> -nodefaults -chardev
>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/master-data05-v50.monitor,server,nowait
>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>> -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
>> if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device
>> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
>> -drive
>> file=/dev/raid5/v50disk1,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2
>> -drive
>> file=/dev/vg_raid1/v50disk2,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
>> -drive
>> file=/dev/raid5/v50disk3,if=none,id=drive-virtio-disk2,format=raw,cache=none,aio=native
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2
>> -drive
>> file=/var/data/images/configdisks/v50/configdisk.img,if=none,id=drive-virtio-disk25,format=raw,cache=none
>> -device
>> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk25,id=virtio-disk25
>> -netdev tap,fd=21,id=hostnet0,vhost=on,vhostfd=22 -device
>> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:00:01:50,bus=pci.0,addr=0x3
>> -chardev pty,id=charserial0 -device
>> isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:0 -vga cirrus
>> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>>
>>
>>
>> This virtual machine has three virtio disks and one file-based disk.
>> The last disk is about 100 MB in size, is used only during startup (it
>> contains configuration data for initializing the VM), and is only ever
>> read, never written. There is one CD-ROM, which is not used. The VM
>> also uses old-style hugepages. Apparently this did not have any
>> significant effect on performance compared to transparent hugepages
>> (as would be expected); we configured the old-style hugepages just to
>> rule out any issue with transparent hugepages.
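
For completeness, the static hugepages behind -mem-path above amount to
something like this on the host; the page count assumes 2 MiB pages for
the 8 GiB guest, plus some slack:

    sysctl -w vm.nr_hugepages=4608    # 8192 MiB / 2 MiB = 4096, rounded up
    mkdir -p /hugepages/libvirt/qemu
    mount -t hugetlbfs hugetlbfs /hugepages/libvirt/qemu

with the libvirt equivalent in the domain XML being:

    <memoryBacking>
      <hugepages/>
    </memoryBacking>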
>>
>> Initially we got a 30% performance penalty with 16 processors, but
>> with the current setting of 4 processors we see a reduced performance
>> penalty of 15-20%. On the physical host we are not running the numad
>> daemon at the moment. We also tried disabling hyperthreading in the
>> host's BIOS, but the measurements did not change significantly.
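
One cheap NUMA sanity check while the benchmark runs is to see whether
the guest's memory ended up split across both sockets, e.g.:

    numactl --hardware    # topology and free memory per node
    cat /proc/$(pgrep -f master-data05-v50)/numa_maps

(pgrep -f here keys off the -name argument in the qemu command line
above and assumes it matches exactly one process.)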
>>
>> The IO scheduler on both the host and the guest is CFQ. We also tried
>> the deadline scheduler on the host, but this did not make any
>> measurable difference. We did not try noop on the host. Additionally,
>> the test is read-only: it queries the database for data. We are
>> excluding the first measurement, so as far as the database is
>> concerned, disk IO cannot be a problem. For the application (running
>> on JBoss), the IO is basically limited to the network IO of a single
>> web page plus a few lines written to a server log file.
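
For reference, the elevator can be inspected and switched per block
device at runtime; sda is just a placeholder here:

    cat /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sda/queue/scheduler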
>>
>> This means that we can effectively rule out disk IO as a problem, I
>> think, which limits it to CPU and memory issues. I also experimented
>> with the CPU mode 'host-passthrough', but that did not result in an
>> improvement either.
>>
>> Another thing I could try is removing the 'memballoon' configuration
>> from the VM.
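
Dropping the balloon is a one-line change in the domain XML, for anyone
who wants to try the same:

    <memballoon model='none'/>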
>>
>> Do you have any idea what could be the cause of this? Do you think it
>> is a NUMA problem? Memory-related, perhaps? Do you have any
>> suggestions on things to configure and/or measure?
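
One low-effort measurement on the host while the benchmark runs is the
guest exit rate; a high kvm_exit count points at CPU/MMU overhead rather
than IO. Assuming kvm_stat is available (it needs debugfs mounted) or
perf is installed:

    kvm_stat    # live counters per exit reason
    perf stat -e 'kvm:kvm_exit' -a sleep 30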
>>
>
> One of the changes I made was disabling the memballoon configuration for
> the virtual machine. This also helped performance a bit.
>
> In addition, I have now compared the configurations of the physical
> and virtual servers in detail, and I found one difference in the MySQL
> configuration: the physical host was using the tcmalloc memory
> allocator from the gperftools-libs RPM for MySQL.
>
> After updating the virtual machine to also use this memory allocator,
> the difference between virtual and physical is now only a 10% penalty
> for virtual, which is an acceptable overhead. This compares the
> scenario with MySQL and JBoss colocated on the same virtual/physical
> machine.
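
For anyone wanting to replicate that: with the gperftools-libs RPM
installed, mysqld_safe can preload the allocator via its malloc-lib
option; a sketch, with the library path as it is on CentOS 6 x86_64:

    # /etc/my.cnf
    [mysqld_safe]
    malloc-lib=/usr/lib64/libtcmalloc.so.4

followed by a restart of mysqld.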
>
> Nevertheless, we still welcome any suggestions for improving this further.
>
> Is 10% an acceptable penalty for virtualization or do you think we should
> be able to get more out of it?