Paras pradhan wrote:
I have a GFS-based shared storage cluster that connects to a SAN by fibre
channel. This GFS shared storage holds several virtual machines. While
running hdparm from the host against a GFS share, I get the following results.
---
hdparm -t /guest_vms1
/dev/mapper/test_vg1-prd_vg1_lv:
Timing buffered disk reads: 262 MB in 3.00 seconds = 87.24 MB/sec
---
Now, from within the virtual machines, the I/O is lower:
---
hdparm -t /dev/mapper/VolGroup00-LogVol00
/dev/mapper/VolGroup00-LogVol00:
Timing buffered disk reads: 88 MB in 3.00 seconds = 29.31 MB/sec
---
I am looking for ways to increase the I/O read/write performance within
my virtual machines. Would tuning GFS help in this case?
Sorry if my question is not relevant to this list.
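For what it's worth, the usual first knobs for GFS are mount options rather
than deep tunables; a minimal sketch, assuming the filesystem is mounted at
/guest_vms1 and the GFS1 userland tools are in use (the demote_secs value is
purely illustrative):
---
# Avoid an inode update on every file/directory read
mount -o remount,noatime,nodiratime /guest_vms1

# List the current GFS tunables, then adjust one, e.g. how long unused
# glocks are held before being demoted
gfs_tool gettune /guest_vms1
gfs_tool settune /guest_vms1 demote_secs 200
---
Whether any of this helps depends on the workload; it will not remove the
virtualization overhead discussed below.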
I suspect you'll find that is a pretty normal virtualization-induced
I/O penalty. Virtualization really, truly, utterly sucks when it comes
to I/O performance.
My I/O performance tests (done by timing kernel builds) showed that the
bottleneck was always disk I/O, even when the entire kernel source tree
was pre-cached in a guest with 2GB of RAM. The _least_ horribly
performing virtualization solution was VMware (tested with the latest
Player 3.0, but verified against the latest Server, too). It managed to
complete the task in "only" 140% of the time the bare-metal machine took
(the host machine had its memory limited to 2GB with the mem= kernel
option to make sure the test was fair). So, 40% slower than bare metal.
Paravirtualized Xen was close behind, followed very closely by
non-paravirtualized KVM (which was actually slower when paravirtualized
drivers were used!). VirtualBox came so far behind it's not even worth
mentioning.
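For concreteness, this is roughly what that sort of timed-build comparison
looks like; the source path, the mem=2G value, and -j2 are illustrative
rather than the exact settings used:
---
# On the host, limit RAM so host and guest both see 2GB (illustrative):
#   append "mem=2G" to the kernel command line in grub.conf, then reboot.

# Pre-warm the page cache with the source tree, then time a build.
cd /usr/src/linux
tar cf - . > /dev/null      # read the whole tree into the page cache
make clean
time make -j2 bzImage       # repeat on bare metal and inside each guest
---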
In any case, it shows that the whole "performance is close to bare
metal" premise is completely mythical and comes from very selective
tests (e.g. only testing CPU-intensive tasks). But then again, we all
knew that, right?
Gordan