On Tue, 2007-02-27 at 13:23, Jeff Davis wrote:
> Also, put things in context. The chances of failure due to these kinds
> of things are fairly low. If it's more likely that someone spills coffee
> on your server than that the UPS fails, it doesn't make sense to spend huge
> amounts of money on NVRAM (or something) to store your data. So identify
> the highest-risk scenarios and prevent those first.
>
> Also keep in mind what the cost of failure is: a few hundred bucks more
> on a better RAID controller is probably a good value if it prevents a
> day of chaos and unhappy customers.

Just FYI, I can testify to the happiness a good battery-backed caching
RAID controller can bring. I had the only server that survived a complete
power grid failure in the data center where I used to work. A piece of
wire blew out a power conditioner, which killed the other power
conditioner, all three UPSes, and the switch to bring the diesel
generator online.

The only problem the pgsql server had coming back up was that the remote
NFS mounts it used for file storage weren't available fast enough, so we
just waited a few minutes and rebooted it. All of our other database
servers had to be restored from backup due to massive data corruption,
because someone had decided that NFS mounts were a good idea under
databases.