Re: measuring kernel speed

On Mon, May 10, 2010 at 11:52 AM, Les Mikesell <lesmikesell@xxxxxxxxx> wrote:
> On 5/10/2010 8:56 AM, Ross Walker wrote:
>>
>>>
>>> Would this also be suitable for testing efficiency loss from running
>>> under
>>> VMware or other virtualization methods?
>>
>> No, because oprofile's and latencytop's point of reference is just the
>> running kernel; they don't factor in CPU allocations, network/disk
>> virtualization/para-virtualization, bandwidth allocations, etc.
>>
>> Efficiency loss is a slippery slope and VERY configuration dependent.
>>
>> I have seen VMs perform better than physical machines and I have seen
>> them perform worse, sometimes on the same physical host!
>>
>> Go with the "user experience" indicator (assuming it is properly
>> configured for the workload). Does it seem fast? Then it's fast. Does
>> it seem slow? Then it is slow.
>
> Realistically, VM performance is going to depend mostly on how much
> contention you have between guests for common resources, especially if
> you overcommit them.  But I'd like to have some idea of how much effect
> running under VMware ESXi would have for a single guest, compared to
> running directly on the hardware.  If there's not a big loss (and it
> doesn't 'feel' like there is), I'd consider this worthwhile for servers
> doing oddball things where it's not worth the trouble to script a
> re-install for every little app someone might have running, as a means
> to deal with the usual pain of moving a working system to different
> hardware.  Plus, if there is extra capacity you can bring up another
> virtual machine or test the next version almost for free, and you get
> an almost-hardware-level KVM console too (after the base install works
> and you have an IP address...).  I'd just like to have a more objective
> measure of what it costs in performance.

With ESXi you can control resource contention with allocation policies
(shares, reservations and limits), so the applications that really need
the resources get them.
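
To get a feel for how those three knobs interact, here is a toy Python
sketch of the general idea (my own simplification for illustration, not
VMware's actual scheduler): reservations act as guaranteed floors,
limits as hard caps, and whatever capacity is left over is handed out
in proportion to shares.

# Toy model of share-based allocation.  Real ESX scheduling is far more
# involved; this only shows the general shape of the policy.

def entitlements(capacity, vms):
    # vms: list of dicts with 'name', 'shares', 'reservation', 'limit'
    alloc = dict((vm['name'], vm['reservation']) for vm in vms)
    remaining = float(capacity - sum(alloc.values()))
    active = [vm for vm in vms if alloc[vm['name']] < vm['limit']]
    while remaining > 1e-6 and active:
        total_shares = sum(vm['shares'] for vm in active)
        for vm in active:
            alloc[vm['name']] += remaining * vm['shares'] / total_shares
        # claw back anything that overshot its limit and hand it out again
        remaining = 0.0
        for vm in active:
            over = alloc[vm['name']] - vm['limit']
            if over > 0:
                alloc[vm['name']] = vm['limit']
                remaining += over
        active = [vm for vm in active if alloc[vm['name']] < vm['limit']]
    return alloc

# 4000 MHz of CPU, three guests fighting over it (made-up numbers):
vms = [
    {'name': 'db',   'shares': 2000, 'reservation': 1000, 'limit': 4000},
    {'name': 'web',  'shares': 1000, 'reservation': 0,    'limit': 4000},
    {'name': 'test', 'shares': 1000, 'reservation': 0,    'limit': 500},
]
print(entitlements(4000, vms))

Run it and you'll see 'test' pinned at its 500 MHz limit while 'db' and
'web' split the leftover capacity 2:1 according to their shares, on top
of 'db's 1000 MHz reservation.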

As with any system these days, the biggest contention is going to be
disk and network. Make sure storage is set up appropriately for the
application: just because the server is virtual doesn't mean you can
lump all of the applications' data onto one common datastore. Keep a
datastore for the OS (which can be shared by all VMs) and a separate
iSCSI/Fibre Channel datastore/LUN for each application's data. You can
use RDMs if your OS is on a VMFS datastore, or do iSCSI directly in the
VM if you use NFS datastores for the OS.
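
If you go the in-guest iSCSI route, it's just a discovery and a login
with iscsiadm. A minimal sketch (Python wrapping the stock open-iscsi
CLI from iscsi-initiator-utils; the portal address and target IQN are
placeholders for illustration):

# Log the guest straight into an iSCSI LUN for the application's data.
# Substitute your own portal and target; these are placeholders.

import subprocess

PORTAL = "192.168.1.50:3260"
TARGET = "iqn.2010-05.com.example:appdata"

def run(*args):
    print(" ".join(args))
    subprocess.check_call(args)

# Discover the targets the portal offers, log in, and make the session
# persistent across reboots.
run("iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL)
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login")
run("iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
    "--op", "update", "-n", "node.startup", "-v", "automatic")

After the login the LUN shows up as an ordinary /dev/sd* device that
you partition, format and mount like local disk.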

You will notice minimal degradation running a single VM under ESXi.
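
If you'd rather put a number on that than take my word for it, run the
same crude micro-benchmark on the bare hardware and then inside the
guest and compare the two runs; only the relative difference between
them means anything. Something like this quick sketch (the file name
and sizes are arbitrary):

# Crude CPU and disk micro-benchmark.  Run it on bare metal, run it in
# the VM, and compare -- the absolute numbers by themselves mean little.

import os, time

def cpu_test(n=5000000):
    # tight integer loop; a rough proxy for raw CPU throughput
    start = time.time()
    total = 0
    i = 0
    while i < n:
        total += i * i
        i += 1
    return time.time() - start

def disk_test(path="testfile.bin", mb=256):
    # sequential buffered write followed by fsync, so it really hits disk
    block = b"x" * (1024 * 1024)
    start = time.time()
    f = open(path, "wb")
    for _ in range(mb):
        f.write(block)
    f.flush()
    os.fsync(f.fileno())
    f.close()
    elapsed = time.time() - start
    os.remove(path)
    return mb / elapsed

print("cpu loop:   %.2f s" % cpu_test())
print("disk write: %.1f MB/s" % disk_test())

Don't read too much into the absolute figures; the point is only the
bare-metal versus in-guest ratio on the same box.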

I have ESXi hosts here running 20 VMs per host, with some doing
terminal services, some doing email, and some doing database and other
network services, and I have not noticed any diminished performance.
And yes, going virtual is simply the easiest way to perform upgrades.

-Ross
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

