On Thu, 31 Jan 2013 14:17:32 -0500 Jeff Darcy <jdarcy at redhat.com> wrote:

> There is *always* at least one situation, however unlikely, where you're
> busted. Designing reliable systems is always about probabilities. If
> none of the solutions mentioned so far suffice for you, there are still
> others that don't involve sacrificing the advantages of dynamic
> configuration. If your network is so FUBAR that you have trouble
> reaching any server to fetch a volfile, then it probably wouldn't do you
> any good to have one locally because you wouldn't be able to reach those
> servers for I/O anyway. You'd be just asking for split brain and other
> problems. Redesigning the mount is likely to yield less benefit than
> redesigning the network that's susceptible to such failures.

You are asking in the wrong direction. The simple question is: is there
any dynamic configuration as safe as a local config file?

If your local fs is dead, then you are really dead. But if it is alive,
you have a config. And that's about it. You need no working DNS, no
poisoned cache, and no special server that must not fail. Anything less
safe than that is unacceptable. There is no probability here: either you
are a dead client or a working one.

And if you really want to bring the network into the question: I would
expect the gluster client-server and server-server protocols to treat
network failure as a default case. It wouldn't be useful to release a
network filesystem that drops dead on network errors. If there is some
chance to survive, it should be able to do so and keep working. Most
common network errors are not a matter of design, but of dead iron.

-- 
Regards,
Stephan
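For readers outside the thread, the two configuration modes being argued
about can be sketched with the GlusterFS client itself (hostnames, volume
name, and paths below are hypothetical; this is an illustration, not a
recommended setup):

```shell
# Dynamic configuration: the client contacts a server at mount time to
# fetch the volfile. This is the path that depends on DNS resolution and
# on the volfile server being reachable.
mount -t glusterfs server1:/myvol /mnt/gluster

# Static configuration: the client reads a locally stored volfile, so
# bringing the mount up requires nothing beyond the local filesystem.
glusterfs --volfile=/etc/glusterfs/myvol.vol /mnt/gluster
```

Note that the second form only covers the volfile fetch itself; as Jeff
points out, actual I/O still needs the servers named inside that volfile
to be reachable.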