Sven Willenberger presumably uttered the following on 07/07/06 13:52:
> On Fri, 2006-07-07 at 10:41 -0700, Joshua D. Drake wrote:
>>>> Sincerely,
>>>>
>>>> Joshua D. Drake
>>>
>>> Doing a quick check reveals that the relation in question currently
>>> consumes 186GB of space (which I highly suspect is largely bloat).
>>
>> Good lord... 186 gig for a 300 million row table? Unless those are
>> seriously large rows, you have a TON of bloat.
>>
>> Joshua D. Drake
>
> Yes, that number came from the dbsize functions (in contrib), so I don't
> know if that includes the associated indexes as well. The rows are
> fairly large, yes, but not enough (IMO) to account for that size. It
> will be interesting to see the final size after the vacuum full (which
> is the method I have settled on to reclaim space this go-round).
>
> Sven

Unfortunately, it would appear that I cannot VACUUM FULL either, as I get
an out of memory error.

postgresql.conf:

# - Memory -
shared_buffers = 5000            # min 16, at least max_connections*2, 8KB each
work_mem = 131072                # min 64, size in KB
maintenance_work_mem = 524288    # min 1024, size in KB
max_stack_depth = 4096           # min 100, size in KB

/boot/loader.conf:

kern.maxdsiz="1610612736"
kern.dfldsiz="891289600"

I have disabled other connections to the db except for slony (which will
not access the table in question). I begin a

    VACUUM FULL <tablename>

and watch memory use climb steadily in top. It pushes to:

  PID USERNAME PRI NICE  SIZE   RES STATE  C   TIME   WCPU    CPU COMMAND
61048 pgsql     -4    0 1640M 1472M getblk 1   6:58 16.75% 16.75% postgres

and then it bails:

ERROR:  out of memory
DETAIL:  Failed on request of size 78.

Server version is 8.0.3.

Is this a known issue?

Sven
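
For reference, a quick way to see how much of that 186GB figure is the heap
itself versus indexes and TOAST (a hedged sketch: pg_relation_size(),
pg_total_relation_size(), and pg_size_pretty() are the 8.1+ built-in
successors to the contrib dbsize functions, so on an 8.0 server the contrib
module's equivalents would be used instead; 'mytable' is a placeholder name):

    SELECT pg_size_pretty(pg_relation_size('mytable'))       AS heap_only,
           pg_size_pretty(pg_total_relation_size('mytable')) AS heap_plus_indexes_and_toast;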
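
As a back-of-the-envelope check on the numbers above (an assumption about
where the limit bites, not a diagnosis): kern.maxdsiz caps the per-process
data segment, and the configured value works out to 1536 MB, which is
roughly where top shows the backend when the allocation fails, while
maintenance_work_mem accounts for only 512 MB of that.

    -- Unit conversion of the settings quoted above (bytes and KB to MB).
    SELECT 1610612736 / (1024 * 1024) AS kern_maxdsiz_mb,         -- 1536
           524288 / 1024              AS maintenance_work_mem_mb, -- 512
           131072 / 1024              AS work_mem_mb;             -- 128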