Brian Hirt wrote:
> I'm having a strange issue with pg_autovacuum. I have a table with
> about 4 million rows in 20,000 pages. autovacuum likes to vacuum
> and/or analyze it every 45 minutes or so, but it probably doesn't have
> more than a few hundred rows changed every few hours. When I run
> autovacuum with -d3 it says
>
> [2004-05-18 07:04:26 PM]   table name: basement_nightly."public"."search_words4"
> [2004-05-18 07:04:26 PM]      relid: 396238832;   relisshared: 0
> [2004-05-18 07:04:26 PM]      reltuples: 4;  relpages: 20013
> [2004-05-18 07:04:26 PM]      curr_analyze_count: 0;  cur_delete_count: 0
> [2004-05-18 07:04:26 PM]      ins_at_last_analyze: 0;  del_at_last_vacuum: 0
> [2004-05-18 07:04:26 PM]      insert_threshold: 504;  delete_threshold 1008
>
> reltuples: 4 seems wrong. I would expect a table with 4M rows and 20K
> pages to have more than 4 tuples. I think this is why the insert
> threshold is all messed up -- which is why it gets analyzed way too
> frequently.
>
> This happens with other big tables too. The autovacuum is from 7.4.2;
> some information is below.

Oh, 7.4.2. We have a known bug there and are waiting on a patch for it.
Matthew, we need those fixes for pg_autovacuum soon.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073
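
For reference, pg_autovacuum derives its thresholds from reltuples, roughly
threshold = base_value + scaling_factor * reltuples. Assuming the 7.4 contrib
defaults (analyze base 500 / scaling 1, vacuum base 1000 / scaling 2), a
reltuples of 4 works out to exactly the reported values: 500 + 1*4 = 504 and
1000 + 2*4 = 1008. With the true ~4 million tuples the thresholds would be in
the millions, so the table would not be analyzed anywhere near every 45
minutes. A quick, hedged way to see whether the bad value is already in
pg_class or only on the pg_autovacuum side is to compare the stored estimate
with an actual count (table name taken from the report above):

    -- stored planner estimates; reltuples/relpages are refreshed by VACUUM and ANALYZE
    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname = 'search_words4';

    -- actual row count (slow on a big table, but authoritative)
    SELECT count(*) FROM search_words4;

If pg_class itself shows roughly 4 million tuples, the bogus reltuples: 4 is
being picked up inside pg_autovacuum, which would match the known 7.4.2 bug
mentioned above.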