I have an application in which data is written to many disks simultaneously, and I would like to use a postgres tablespace on each disk. If one of the disks crashes, it is tolerable to lose that data; however, I must continue writing to the other disks. My specific concerns are:

1. There is a single WAL log for the entire cluster, located in the pg_xlog subdirectory (pg_wal in PostgreSQL 10 and later). If the disk containing that directory crashed, would the whole system come to a halt? Is there any way to distribute this data so that the WAL is located on the same media as each tablespace? An alternative would be to use RAID for the disk that stores the WAL directory, but that adds cost to the system.

2. If #1 were solved by the RAID approach, what happens when a disk containing one of my tablespaces crashes? At some point postgres will want to write data from the WAL to the crashed (unavailable) disk. Will postgres be blocked at that point? Is there some way to notify postgres that a specific disk is no longer available, so that the WAL entries for that disk are either purged or ignored? (I'm willing to "throw away" the data on the crashed disk.) Clearly, using RAID on all of the disks would be a solution, but that is cost prohibitive.

Thanks for your help,
Paul
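For context, the setup I have in mind looks roughly like the sketch below: one tablespace per disk, and the single WAL directory relocated to a chosen disk via a symlink (since there is only one WAL location per cluster, it cannot be split across all the disks). The mount points and data directory path are hypothetical:

```shell
# One tablespace per disk (paths are hypothetical; the directories must
# exist and be owned by the postgres OS user).
psql -c "CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/pgdata';"
psql -c "CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';"

# The single WAL directory can be moved onto a chosen disk: stop the
# server, move the directory, and symlink it back into the data directory.
# (The directory is named pg_xlog before PostgreSQL 10, pg_wal afterwards.)
pg_ctl -D /var/lib/postgresql/data stop
mv /var/lib/postgresql/data/pg_xlog /mnt/disk1/pg_xlog
ln -s /mnt/disk1/pg_xlog /var/lib/postgresql/data/pg_xlog
pg_ctl -D /var/lib/postgresql/data start
```

This only moves the WAL to one disk; it does not give each tablespace its own WAL, which is what concern #1 below is really asking about.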