On Tue, Dec 12, 2006 at 02:59:35PM -0800, Lin Shen (lshen) wrote:
> The doc says it might work but is not recommended. But we do have a
> business case for this. What type of problems I should be watching for
> if I go this route? How hard is it to remove this limitation from
> GFS/GNBD?

This does work. Unfortunately, it has some disadvantages.

1. Any work you do that slows down the GNBD server also slows down every
GNBD client that depends on that server. This is unavoidable: since all
the GNBD clients depend on block IO from the server, anything that keeps
the server from responding quickly to GNBD requests slows the entire
cluster.

2. The more things you run on the server, the more likely it is to
crash, and when a GNBD server crashes, the entire cluster is impacted.
If the GNBD device is not multipathed, the whole cluster will stall
until the GNBD server is brought back online, because no IO can
complete. If the device is multipathed, the cluster will stall until the
server node can be fenced: to avoid data corruption, the GNBD server
must be power-fenced before dm-multipath can fail outstanding requests
over to another server.

In short, doing this hurts overall cluster performance, and unless you
have shared storage and run dm-multipath on top of GNBD, it makes your
single point of failure (the GNBD server nodes) more likely to fail.

Once cluster mirroring is available, it will be possible to run GFS on a
GNBD server node that mirrors its exported GNBD with GNBDs imported from
other nodes. That will let you build clusters with no shared storage and
no single point of failure. Performance may be even worse under such a
setup, but it is inexpensive and has no single point of failure.
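For reference, the server/client split being discussed looks roughly
like the sketch below. This is only an illustration: the hostnames
(nodeA, nodeB), device path (/dev/sdb), and export name (shared_disk)
are placeholders, and the exact gnbd_export/gnbd_import options should
be checked against the man pages for your GNBD version.

```shell
# On each GNBD server node (both exporting the same shared LUN, so that
# clients can run dm-multipath across the two servers):
gnbd_export -d /dev/sdb -e shared_disk   # "shared_disk" is a placeholder name

# On each GNBD client node, import from both servers:
gnbd_import -i nodeA
gnbd_import -i nodeB
# The imported devices show up under /dev/gnbd/, and dm-multipath can
# then be layered on top of them to fail over between the two servers.
```

Note that, per the discussion above, a multipath failover still cannot
complete until the failed server has been power-fenced.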
-Ben

> lin

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster