On Thu, Jul 4, 2013 at 12:44 AM, denis bahati <djbahati@xxxxxxxxxxx> wrote:
> Hi Brett,
>
> My plan is as follows:
>
> I have two machines (servers) that will each host two VMs, one for the
> database and one for the application. The two machines will then provide
> load balancing and high availability. My intention is that all application
> files and data files for the database should reside on the SAN storage for
> easy access and update.

Don't... do this.

Two database clients writing to the same database filesystem back end,
simultaneously, is an enormous source of excited-sounding flow charts and
proposals that simply do not work and are very, very likely to corrupt your
database beyond recovery. These problems have been examined for *decades*,
with shared home directories and saved email, and with high-performance or
clustered databases that need to avoid "split brain" skew. It Does Not Work.
Set up a proper database *cluster* with distinct back ends.

> Therefore the storage should be accessible to both VMs through mounting the
> SAN storage to the VMs. The connection between the SAN storage and the
> servers is through Fibre Channel.

Survey says *bzzzt*. See above for databases.

For shared storage, you should really be using some sort of network-based
access to a filesystem back end. NetApp and EMC spend *billions* in research
building highly available shared storage, and even they don't pull stunts
like this, the last I looked.

I can vaguely imagine one of the hosts having write access and the other
having read-only access. But really, most databases today support good
clustering configurations that avoid precisely these issues.

> I have seen somewhere talking about DM-Multipath but I don't know if this
> can help, or whether the use of VT-d can help. I will also appreciate it
> if you provide some links to give me insight into how to do this.

Multipath does not mean "multiple clients of the same hardware storage".
That's effectively like letting two kernels write to the same actual disk
at the same time, and it's quite dangerous.

Now, if you want each client to access its own fibre channel disk resource,
that should be workable. Even if you have to mount the fibre channel
resources on the KVM host and make disk images for the KVM guests, that
should at least get you a testable resource. But the normal approach is to
have a fibre channel storage server that makes disk images available via
NFS, so that the guest VMs can be migrated from one server to another with
the shared storage more safely.
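
For illustration, a rough sketch of that NFS-backed layout might look like
the following. The hostnames, network range, paths, and pool/guest names
(storage01, 192.168.10.0/24, /srv/vmimages, vmimages, guest01, kvmhost02)
are placeholders, so adjust everything to your environment:

  # On the storage server: export a directory of guest disk images over NFS.
  echo '/srv/vmimages 192.168.10.0/24(rw,sync,no_root_squash)' >> /etc/exports
  exportfs -ra

  # On each KVM host: define the same NFS export as a libvirt storage pool,
  # so both hypervisors see identical image paths.
  virsh pool-define-as vmimages netfs \
      --source-host storage01 \
      --source-path /srv/vmimages \
      --target /var/lib/libvirt/images/vmimages
  virsh pool-start vmimages
  virsh pool-autostart vmimages

  # Live-migrate a running guest between the hosts; only the memory state
  # moves, because the disk image stays on the shared pool.
  virsh migrate --live guest01 qemu+ssh://kvmhost02/system

With that layout, the fibre channel LUNs sit behind the storage server (and
DM-Multipath, if used, handles redundant paths on that one box), while the
hypervisors only ever see the filesystem over NFS, which avoids the
two-writers-on-one-disk problem entirely.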