Aaron Lippold wrote:
Hi All, Reposted this since it got buried in another posting conversation... I am going to be putting together a new infrastructure for the open source group at my organization and I was thinking about setting up a virt environment for my production services: webserver, development/project environment - i.e. svn, bugs, tickets, etc., most likely Trac or SourceForge - a GNU Mailman service, etc. I was thinking that, if I did this, it might be smart to double up in a kind of virt HA setup. Do you know anyone who has set up this type of thing, and how are they succeeding with it?
At my last company I set up such a system and it has been running for over 18 months now. (It was based on a very early Xen 3.0, upgraded several times since.) Plenty of hosting companies are also running Xen - for example, http://www.bytemark.co.uk/page/Live/support/tech/dedicated/xen_setup, to name one of many.
We found virtualisation made a great solution for isolating different servers from each other. We had previously had a problem where our Apache needed so many different modules to support the various services we were running that its config became extremely fragile. No such problem with Xen, however: we just ran separate Apache instances on different guests, with a web accelerator in front of them (a stripped-down Apache with mod_proxy) so that all the backend Apaches appeared on the same IP address. If a backend server failed, you could reboot it without affecting any other production service.
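As a rough sketch of what the front end looked like (hostnames and backend addresses here are invented for illustration, not our actual config):

    # Front-end "accelerator": a stripped-down Apache with mod_proxy,
    # doing name-based reverse proxying to per-service guests.
    LoadModule proxy_module modules/mod_proxy.so
    LoadModule proxy_http_module modules/mod_proxy_http.so

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName trac.example.org
        ProxyPass        / http://192.168.1.11/
        ProxyPassReverse / http://192.168.1.11/
    </VirtualHost>

    <VirtualHost *:80>
        ServerName lists.example.org
        ProxyPass        / http://192.168.1.12/
        ProxyPassReverse / http://192.168.1.12/
    </VirtualHost>

Each backend guest runs its own Apache with only the modules that service needs, so one fragile module stack can't bring down the others.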
Also, for my testing and pre-deployment I was going to use another virt env. This most likely wouldn't have to be HA. I, of course, was going to use virt-manager to help manage this setup. Thoughts? Suggestions?
Of course. You may also find that the command-line tool (virsh) gives you more flexibility to script actions, and you can use libvirt directly for automation. The advantage of using libvirt is that you aren't tying yourself to Xen.
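For example (guest names are placeholders), day-to-day operations script easily:

    # List every domain libvirt knows about, running or not.
    virsh list --all

    # Start, cleanly shut down, or forcibly stop a guest.
    virsh start webserver
    virsh shutdown webserver
    virsh destroy webserver

    # Show a guest's state, memory and vCPU allocation.
    virsh dominfo webserver

The same calls are available through the libvirt API itself if you want to automate from a real programming language rather than shell.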
Specifically, what level of hardware would really be needed for a medium load - 10-15 projects, 100 general users, etc.? I was thinking about a 2-CPU dual- or quad-core setup with a SAN for the virt images, etc. This would be the first deployment of an infrastructure that was virt rather than individual systems. Thoughts?
I think one of the mistakes people make is to think that virtualisation magically enables you to save resources: "I can P2V this big stack of servers onto a single box and everything will run fine." It's not the case.
Xen doesn't overcommit physical RAM, so if each server requires 1 GB of RAM and you have 8 servers, you will need 8 * 1 GB of RAM plus a bit extra for the management domain (dom0): in practice, at least 8.5 GB.
Each operating system install needs just as much disk space as you would normally allocate. So again you end up multiplying the space required by each OS by the number of guests.
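To make that concrete, here is a hypothetical guest config showing where those numbers end up (name, volume and sizes are made up):

    # /etc/xen/webserver
    name   = "webserver"
    memory = 1024                # 1 GB of real RAM; Xen won't overcommit it
    vcpus  = 1
    # One full-sized volume per guest, e.g. an 8 GB LVM logical volume:
    disk   = ['phy:/dev/vg0/webserver,xvda,w']

dom0's own share is set separately, e.g. with a dom0_mem=512M hypervisor boot parameter, which is where the extra half gigabyte in the sum above goes.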
You have a little more flexibility with CPU, because you can overcommit there. If your CPU requirements are low (most boxes are idle most of the time) then you'll be fine; if you expect some boxes to consume significant CPU, you can pin those to dedicated cores and leave the remaining cores to handle the rest of the work.
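For instance, with virsh you can dedicate cores 2 and 3 of a 4-core box to a CPU-hungry guest (guest name and core numbers are illustrative):

    # Pin vcpu 0 of the "database" guest to physical core 2,
    # and vcpu 1 to physical core 3.
    virsh vcpupin database 0 2
    virsh vcpupin database 1 3

The remaining cores are then shared among the mostly-idle guests.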
Oh, and for HA you'll need at least two of these boxes, since a single failed Xen host could otherwise take out your whole business. A SAN helps here because it lets you migrate workloads between hosts easily.
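A minimal sketch, assuming both hosts run Xen with libvirt and the guest's disks live on the SAN where both hosts can reach them (guest and host names are placeholders):

    # Move the running guest to the second host without shutting it down.
    virsh migrate --live webserver xen+ssh://backup-host/

If the primary host dies outright, the same shared storage means you can simply start the guests cold on the surviving box.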
Rich.

--
Emerging Technologies, Red Hat - http://et.redhat.com/~rjones/
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod Street,
Windsor, Berkshire, SL4 1TE, United Kingdom.
Registered in England and Wales under Company Registration No. 03798903