I'm an ex-ora guy, also. We will see more and more of us here as oracle
corp becomes less cooperative in its pricing demands. What I'm saying here
comes directly from my oracle performance tuning history and from Millsap
and Holt. With all due respect (seriously), I'm playing catch-up here with
postgresql, and I consider any help a benefit.

However, during the 1990s it was very popular to "tune" oracle db
performance as you have described. People who knew cool knobs to turn were
considered wizards and knocked down big consulting bucks with this process:

    see slow app
    change stuff (how did they know about that knob?)
    run app again
    repeat as needed

I'm not saying what you advise will not help performance. Lots of ad hoc
changes "might" be helpful. However, years ago we learned a better method
from some great db engineering minds. The wisdom goes something like this:

  "Reliable problem diagnosis cannot proceed unless the data collection
   phase produces response time data for exactly the right time scope and
   exactly the right action scope." - Millsap, Holt

and

  "If you can't measure it, you can't manage it." - Peter Drucker

That said, I've yet to find a trace tool within pg that provides
system-wide response time data, a la the oracle wait interface. I don't
know whether that kind of internal instrumentation is even included in the
pg kernel. A. Kretschmer was kind enough to point me to "explain analyze,"
which is like ora explain plan plus the total elapsed time expended within
each plan node (rough example at the bottom of this message). Better than
the nothing I had last week, but far from what I've been spoiled by.

Perhaps there is some way to hook strace into the db kernel, but how you
would profile those results is way beyond me. If you're on solaris 10,
dtrace has great os-level instrumentation that might come in handy.

One underlying concept in the method is:

  "Work first to reduce the biggest response time component of a business'
   most important user action." - Millsap, Holt

Any help on how to accomplish the details of this method would be greatly
appreciated.

Thanks.

On Wed, Aug 30, 2006 at 11:45:06AM -0700, Jeff Frost wrote:
> On Wed, 30 Aug 2006, Joe McClintock wrote:
>
> > I ran a vacuum, analyze and reindex on the database with no change in
> > performance, query time was still 37+ sec, a little worse. On our test
> > system I found that a db_dump from production and then restore brought
> > the database back to full performance. So in desperation I shut down
> > the production application, backed up the production database, renamed
> > the production db, created a new empty production db and restored the
> > production backup to the empty db. After a successful db restore and
> > restart of the web application, everything was then up and running
> > like a top.
>
> Joe,
>
> I would guess that since the dump/restore yielded good performance once
> again, a VACUUM FULL would have also fixed the problem. How are your FSM
> settings in the conf file? Can you run VACUUM VERBOSE and send us the
> last 10 or so lines of output?
>
> A good article on FSM settings can be found here:
>
> http://www.pervasive-postgres.com/instantkb13/article.aspx?id=10087&cNode=5K1C3W
>
> You probably should consider setting up autovacuum and definitely should
> upgrade to at least 8.0.8 if not 8.1.4 when you get the chance.
>
> When you loaded the new data did you delete or update old data or was it
> just a straight insert?
>
> --
> Jeff Frost, Owner       <jeff@xxxxxxxxxxxxxxxxxxxxxx>
> Frost Consulting, LLC   http://www.frostconsultingllc.com/
> Phone: 650-780-7908     FAX: 650-649-1954
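
For anyone else playing catch-up, here is a minimal sketch of the "explain
analyze" usage mentioned above. The tables and query are invented purely
for illustration, and the timing numbers in the comments are made up; only
the shape of the command and its output matters.

    -- Hypothetical query against made-up tables, just to show the output shape.
    EXPLAIN ANALYZE
    SELECT o.order_id, c.name
      FROM orders o
      JOIN customers c ON c.customer_id = o.customer_id
     WHERE o.order_date > current_date - 30;

    -- Each plan node is reported with the planner's cost estimate plus the
    -- measured "actual time=<startup>..<total>" in milliseconds, e.g.:
    --   Hash Join (cost=...) (actual time=2.115..388.430 rows=8133 loops=1)
    -- and the final "Total runtime: ... ms" line is the elapsed time for
    -- the whole statement as seen by the server.

It is per statement rather than system-wide, but at least the elapsed time
is broken down by plan node.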
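
And for completeness, here is roughly the check Jeff suggests in the quoted
message. The configuration values in the comments are only example numbers,
not recommendations.

    -- Database-wide vacuum (no table name) with verbose output.  On 8.x the
    -- last few lines summarize free space map usage: how many page slots
    -- are in use and how many would be needed to track all free space.
    VACUUM VERBOSE;

    -- Those totals are compared against the postgresql.conf limits, e.g.:
    --   max_fsm_pages     = 20000    -- example value
    --   max_fsm_relations = 1000     -- example value
    -- If more page slots are needed than max_fsm_pages allows, raise the
    -- setting (it takes a server restart), or dead space will keep
    -- accumulating between vacuums.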