Consider a service backed by several terabytes of data and suffering from very high load. Can PostgreSQL be used as the database backend in a cluster in which both of the following are achieved?

a) Load balancing among a (large) number of cluster nodes
b) High availability (if any node in the cluster melts down, another one takes over)

PGCluster [1] is one solution, but it seems that both a) and b) are achieved simply by multiplying the database installations and distributing the load "cleverly" among them. This is problematic for two reasons:

* With N terabytes of data, needing 10 cluster nodes to achieve sufficient load balancing would require 10*N terabytes of storage. That is a waste of storage.
* Each query is isolated to a single node instead of being parallelized across multiple nodes to gain performance.

Can PostgreSQL be set up in a clustering environment with N "computing nodes" using some sort of shared file system among them (such as Lustre [2]) and still achieve HA and proper load balancing?

[1] http://pgcluster.projects.postgresql.org
[2] http://www.lustre.org
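To make concrete what "distributing the load cleverly among replicated installations" amounts to, here is a minimal sketch of round-robin read balancing with naive failover over a set of identical full-copy replicas. The node names and the `RoundRobinBalancer` class are hypothetical illustrations, not part of any actual PostgreSQL or PGCluster API; a real setup would sit in front of the database connections themselves.

```python
from itertools import cycle

# Hypothetical replica nodes, each holding a FULL copy of the data --
# this is exactly the 10*N-terabytes storage cost described above.
NODES = ["db1:5432", "db2:5432", "db3:5432"]

class RoundRobinBalancer:
    """Hand out replica addresses in round-robin order; skip nodes
    that are not in the 'alive' set (simplistic failover)."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._cycle = cycle(self.nodes)

    def pick(self, alive=None):
        # Try at most one full pass over the ring before giving up.
        for _ in range(len(self.nodes)):
            node = next(self._cycle)
            if alive is None or node in alive:
                return node
        raise RuntimeError("no healthy nodes")

balancer = RoundRobinBalancer(NODES)
# All nodes healthy: queries rotate across the cluster.
print([balancer.pick() for _ in range(4)])
# With db2 down, queries go only to the surviving replicas (HA),
# but any single query still runs on exactly ONE node.
print([balancer.pick(alive={"db1:5432", "db3:5432"}) for _ in range(3)])
```

Note that even in this sketch each `pick()` returns a single node, which illustrates the second objection: the query itself is never split across nodes, so per-query performance does not improve with cluster size.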