On Mon, 2010-11-08 at 16:23 -0700, Scott Marlowe wrote:
> On Mon, Nov 8, 2010 at 3:42 PM, Jason Long <jason@xxxxxxxxxxxxxxxx> wrote:
> > On Mon, 2010-11-08 at 14:58 -0700, Scott Marlowe wrote:
> >> On Mon, Nov 8, 2010 at 11:50 AM, Jason Long <jason@xxxxxxxxxxxxxxxx> wrote:
> >> > I currently have Postgres 9.0 installed after an upgrade. My
> >> > database is relatively small, but complex. The dump is about 90MB.
> >> >
> >> > Every night when there is no activity I do a full vacuum, a reindex,
> >>
> >> One question: why?
> >>
> >> > and then dump a nightly backup.
> >>
> >> Good idea.
> >>
> >> > Is this optimal with regard to performance? autovacuum is set to
> >> > the default.
> >>
> >> That depends very much on why you are doing it and what you are
> >> trying to gain / work around with vacuum full / reindex every night.
> >
> > I have been doing this for several years. Since my database is small,
> > a full vacuum takes little time. I do the reindex because I thought
> > that was recommended after a full vacuum.
>
> Definitely reindex after a full vacuum on versions before 9.0. I think
> with 9.0 vacuum full works like a cluster without any reordering, so it
> likely doesn't need a reindex, but I've not played with 9.0 much yet.
>
> > As the data has grown, the system has been slowing down. Right now I
> > am looking at ways to improve performance without getting into the
> > queries themselves, because I am swamped with new development.
>
> OK, so it's a standard maintenance procedure you've been doing for a
> while. That doesn't really explain why you started doing it, but I can
> guess that you had some bloat issues way back when, vacuum full fixed
> them, and doing it got kind of enshrined in the nightly maintenance.

Exactly.

> > Is doing the full vacuum and reindex hurting or helping anything?
>
> It might help a small amount if you've got regular usage patterns. If
> you routinely update whole tables over and over, then it might be
> helping.

I rarely update whole tables.

> > Any other quick fixes that I can try?
>
> Increasing work_mem or shared_buffers, and changing random_page_cost
> and/or seq_page_cost.

I did bump those up at one point but saw little improvement. I will
reinvestigate.

> Log long-running queries and run explain analyze on any that show up
> very often.
>
> But for real performance you do often have to "get into the queries",
> because an inefficient query may be something you can cut down to
> 1/10000th the run time with a simple change, and often that change is
> impossible to make by tuning the db; only the query can be tuned. It
> might be something simple, like needing to cast a type to match some
> other type. Hard to say without looking.
>
> When a 90MB database is slow, it's almost always poorly written /
> non-optimized queries at the heart of it.

I have no doubt that poorly written and non-optimized queries are at
the heart of it. Stupid developer. I'll have to fire that lazy
bastard... Oh wait, that's me. ;)

I am going to start using auto_explain and logging long-running
queries. It is also time to learn how to read query plans. So far I
have gotten by by throwing faster hardware at the problem.
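
P.S. For anyone who finds this thread in the archives later, here are
the concrete bits. The nightly maintenance boils down to something like
the sketch below ("mydb" and the backup path are placeholders, not my
real names):

    #!/bin/sh
    # Nightly maintenance sketch. VACUUM FULL rewrites the tables;
    # on pre-9.0 a REINDEX afterwards was the usual advice.
    psql -d mydb -c 'VACUUM FULL ANALYZE;'
    psql -d mydb -c 'REINDEX DATABASE mydb;'
    # Custom-format dump so individual tables can be restored
    # later with pg_restore.
    pg_dump -Fc mydb > /backups/mydb-$(date +%Y%m%d).dump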
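
For the knobs Scott mentioned, this is roughly what I will revisit in
postgresql.conf. The values are guesses to show the syntax, not
recommendations; shared_buffers needs a server restart, the others only
a reload:

    # postgresql.conf -- example values only
    shared_buffers = 512MB       # often sized around 25% of RAM on a dedicated box
    work_mem = 16MB              # per sort/hash operation, per backend; go easy
    random_page_cost = 2.0       # default is 4.0; lower it if data is mostly cached
    seq_page_cost = 1.0          # usually left at the default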
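
For the slow-query logging and auto_explain, the plan is something like
the following. The 500ms threshold is a first guess for my workload,
and note that auto_explain.log_analyze adds timing overhead to every
query while it is on:

    # postgresql.conf
    log_min_duration_statement = 500ms         # log any statement slower than this
    shared_preload_libraries = 'auto_explain'
    custom_variable_classes = 'auto_explain'   # needed on 9.0/9.1; gone in later releases
    auto_explain.log_min_duration = 500ms      # log plans for the same slow statements
    auto_explain.log_analyze = on              # include actual row counts and run times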
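
And on the "cast a type to match" point, here is a tiny made-up
illustration of the kind of thing Scott means ("orders" and "id" are
invented names, with a plain index assumed on id). A cast on the
indexed column defeats the index, while comparing in the column's own
type lets the planner use it:

    -- Bad: the expression id::text does not match the plain index on id,
    -- so the plan shows "Seq Scan on orders".
    EXPLAIN ANALYZE SELECT * FROM orders WHERE id::text = '42';

    -- Good: same type on both sides, so an index scan qualifies
    -- (given enough rows to make the index worthwhile).
    EXPLAIN ANALYZE SELECT * FROM orders WHERE id = 42;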