We have an interesting problem here. We have a server at a customer's site on which the database will not come up. Because of the nature of the product we make, we do not turn on PostgreSQL logging, so no log data is available.

What we see is that when we start the postmaster it starts, but anyone who tries to connect gets the message "FATAL: the database system is starting up". This continues for about 5 minutes, at which point a watchdog process kills the postmaster (SIGQUIT) and restarts it. This cycle is repeating over and over on the system.

In an attempt to find the problem, the watchdog process was killed and the postmaster was started manually. The results are:

LOG: could not load root certificate file "/slice2/url_db/root.crt": No such file or directory
DETAIL: Will not verify client certificates.
LOG: database system was interrupted while in recovery at 2006-01-24 12:32:10 PST
HINT: This probably means that some data is corrupted and you will have to use the last backup for recovery.
LOG: checkpoint record is at 0/26D6D44
LOG: redo record is at 0/25020F0; undo record is at 0/0; shutdown FALSE
LOG: next transaction ID: 774; next OID: 107254
LOG: database system was not properly shut down; automatic recovery in progress
LOG: redo starts at 0/25020F0
FATAL: the database system is starting up
PANIC: btree_insert_redo: failed to add item
LOG: startup process (PID 32622) was terminated by signal 6
LOG: aborting startup due to startup process failure

One other clue is available: there are 1202 files in the pg_xlog directory.

One thought is that we should shut down the database with a SIGINT instead of a SIGQUIT (a shutdown sketch follows below). It should be noted, however, that our customers frequently shut down the system with the power switch, so our ability to control the shutdown is limited.

We would like any information or suggestions on:

1) What's happening?
2) How can we stop it from happening?
3) How can we detect when we are in such a state, so we can rebuild the database? (A detection sketch also follows below.)
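For reference, here is a minimal sketch of what we mean by switching the watchdog from SIGQUIT to SIGINT: SIGINT asks the postmaster for a fast shutdown (disconnect sessions, write a checkpoint, exit cleanly), whereas SIGQUIT is an immediate shutdown that forces WAL replay on the next start. The data directory path is only assumed from the root.crt message above, and the timeout is a placeholder; this is roughly what "pg_ctl stop -m fast" does.

#!/usr/bin/env python3
"""Sketch: request a fast shutdown (SIGINT) rather than an immediate one (SIGQUIT)."""

import os
import signal
import time

PGDATA = "/slice2/url_db"                    # assumed data directory
PIDFILE = os.path.join(PGDATA, "postmaster.pid")

def fast_shutdown(timeout_secs=60):
    # The first line of postmaster.pid holds the postmaster's PID.
    with open(PIDFILE) as f:
        pid = int(f.readline().strip())

    os.kill(pid, signal.SIGINT)              # fast shutdown: checkpoint, then exit

    # Wait for the postmaster to exit before declaring success.
    for _ in range(timeout_secs):
        try:
            os.kill(pid, 0)                  # probe only; raises once the process is gone
        except ProcessLookupError:
            return True
        time.sleep(1)
    return False                             # still running; caller may escalate

if __name__ == "__main__":
    print("clean shutdown" if fast_shutdown() else "postmaster did not exit in time")

This would only help for watchdog-initiated restarts; it obviously does nothing about customers hitting the power switch.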
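And here is the kind of detection we had in mind for question 3: poll with psql after the postmaster is started, and treat it as a failed-recovery state if "the database system is starting up" persists past a generous deadline or the postmaster dies on its own (as with the btree_insert_redo PANIC above). The database name, user, and deadline are placeholders, and parsing psql's stderr like this is only a rough heuristic.

#!/usr/bin/env python3
"""Sketch: decide whether the cluster is stuck in (or failing) crash recovery."""

import subprocess
import time

DEADLINE_SECS = 600        # how long normal WAL replay is allowed to take

def try_connect():
    """Return (ok, stderr) from a single connection attempt."""
    proc = subprocess.run(
        ["psql", "-U", "postgres", "-d", "postgres", "-c", "SELECT 1"],
        capture_output=True, text=True,
    )
    return proc.returncode == 0, proc.stderr

def recovery_is_stuck():
    start = time.time()
    while time.time() - start < DEADLINE_SECS:
        ok, err = try_connect()
        if ok:
            return False                     # recovery finished; server is accepting connections
        if "starting up" not in err:
            return True                      # postmaster gone or some other fatal error
        time.sleep(10)                       # still replaying WAL; keep waiting
    return True                              # never came up within the deadline

if __name__ == "__main__":
    if recovery_is_stuck():
        print("recovery did not complete; cluster likely needs a rebuild/restore")

Is something along these lines reasonable, or is there a better way to tell "recovery will never finish" apart from "recovery is just slow"?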