Re: VM Philosophy 101 - How many and for what?




On 11/14/2011 10:58 AM, James B. Byrne wrote:
> We are at the stage, finally, where we are prepared to
> deploy public facing VM guests.  Now we have to answer
> these questions: How many guests in total should we
> contemplate and what services are placed on which guests?
> 
> My question comes down to whether it is considered
> advisable to run the primary DNS and IMAP, or indeed all
> services, on separate vm guests; or should we continue,
> more or less, with the present split and just move
> everything into guests on a one-for-one basis from our
> existing hosts?  Will additional VMs necessarily increase
> the amount of time spent on system maintenance, as I
> suspect?
> 
> Previously, with only a few physical hosts, the number of
> platforms was fixed and services were split more or less
> on the basis of internal and external users. Not to
> mention which host had more available resources at the
> time the service was implemented.
> 
> For example, presently our primary DNS and IMAP
> services run on one server, with MailScanner-controlled
> Sendmail used only for local delivery and forwarding.  On
> another host we run the publicly accessible MX MTA and a
> secondary public DNS.  On a third we run our fax server
> and public web site together with a caching only DNS
> service.
> 
> I would very much appreciate any relevant comments from
> people who have already resolved this matter, together with
> their reasoning.

My experience is that the limiting factor is disk I/O. If you're using
traditional platter drives, the VM images will occupy physically
different regions of the disk(s). If several VMs hit the disk at the
same time, the read/write heads start flying all over the platters and
seek latency goes up. This can quickly degrade performance to an
unusable level.
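
You can watch for this on the host with iostat (from the sysstat
package); high await times with modest throughput are the classic
sign of seek thrashing. Something like the following, assuming
sysstat is installed (the refresh interval is just an example):

    # extended per-device stats, refreshed every 5 seconds
    iostat -dx 5

Watch the await and %util columns: await climbing into the tens of
milliseconds while throughput stays low usually means the heads are
seeking back and forth between VM images.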

If you use SSDs, this issue goes away, but then you need to pay special
attention to whether your host OS handles the SSDs properly (TRIM,
garbage collection, etc.).
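
Assuming the images live on an ext4 filesystem (the device name and
path below are just examples; the path is the libvirt default and may
differ on your setup), you can check that the drive advertises TRIM
and then either mount with the discard option or batch-discard from
cron:

    # check whether the SSD reports TRIM support
    hdparm -I /dev/sda | grep -i trim
    # online discard via a mount option in /etc/fstab:
    #   /dev/sda1  /var/lib/libvirt/images  ext4  defaults,discard  0 2
    # or discard free blocks periodically instead:
    fstrim -v /var/lib/libvirt/images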

As for processing power: I used to pin VMs to CPU cores. I've since
abandoned this, as I found the scheduler does a great job of pushing
loads around the cores to get the best performance.
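
For what it's worth, the pinning itself is trivial if you want to
experiment with it; assuming KVM guests managed through libvirt (the
domain name and core number below are made up):

    # pin vcpu 0 of guest "web1" to physical core 2
    virsh vcpupin web1 0 2
    # confirm the placement
    virsh vcpuinfo web1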

hth

-- 
Digimer
E-Mail:              digimer@xxxxxxxxxxx
Freenode handle:     digimer
Papers and Projects: http://alteeve.com
Node Assassin:       http://nodeassassin.org
"omg my singularity battery is dead again.
stupid hawking radiation." - epitron
_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos

