On Mon, 2005-05-23 at 14:54, Thomas F. O'Connell wrote:
> I have a web application backed by a PostgreSQL 7.4.6 database. It's
> an application with a fairly standard login process verified against
> the database.
>
> I'd like to use pg_dump to grab a live backup and, based on the
> documentation, this would seem to be a realistic possibility. When I
> try, though, during business hours, when people are frequently logging
> in and otherwise using the application, the application becomes almost
> unusable (to the point where logins take on the order of minutes).
>
> According to the documentation, pg_dump shouldn't block other
> operations on the database other than operations that operate with
> exclusive locks. Ordinarily, I run pg_autovacuum on the box, so I
> tried again after killing that, thinking that perhaps any substantial
> vacuum activity might affect pg_dump. I tried again to no avail.
>
> Excepting the rest of the application, the login process should be
> completely read-only and shouldn't require any exclusive locks.
>
> Connections don't really pile up excessively, and load on the machine
> does not get in the red zone. Is there anything else I should be
> noticing?

Basically, it sounds like PostgreSQL is doing a lot of very long
sequential scans to do this backup. Have you done a vacuum full lately?
It could be that you've got a lot of table bloat that's making the seq
scans take so long.

You could also be I/O saturated already, and the backup is just pushing
you over the edge of the performance knee.

I'd do a 'vacuum analyze verbose' and see if you need more FSM set up
for your regular vacuums to keep up.
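
For reference, the commands I have in mind look roughly like this (run
them in psql as a superuser; treat it as a sketch rather than gospel,
since the exact verbose output details can vary on 7.4):

    -- regular vacuum plus stats, with per-table detail; the summary at
    -- the end of a database-wide run shows how much free space map it
    -- actually needed
    VACUUM ANALYZE VERBOSE;

    -- what the server is currently configured to track
    SHOW max_fsm_pages;
    SHOW max_fsm_relations;

    -- only if the tables really are bloated: compacts them on disk,
    -- but takes an exclusive lock on each table while it runs
    VACUUM FULL ANALYZE;

If the pages-needed figure from the verbose output is larger than
max_fsm_pages, raise max_fsm_pages (and max_fsm_relations if necessary)
in postgresql.conf and restart; otherwise routine vacuums can't keep up,
the tables keep bloating, and the sequential scans during the dump just
get slower and slower.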