On 10/7/07, Darren Reed <darrenr+postgres@xxxxxxxxxxxx> wrote:
> Scott Marlowe wrote:
> > On 10/7/07, Darren Reed <darrenr@xxxxxxxxxxxx> wrote:
> > > Scott Marlowe wrote:
>
> A few days ago I did:
>   pg_dumpall > foo
> What I was doing yesterday was:
>   rm -rf /data/db/*
>   initdb -D /data/db
>   start
>   psql < foo
>   run for some period
>   stop
>   reboot
>   start
> ...tables have gone but disk space is still in use.
> I don't know if it was during the period of running that the
> database got corrupted (interrupted insert/update/query?)
> or what happened.

Are you sure postgresql was starting up in the /data/db directory after
reboot, and not somewhere else like /var/lib/pgsql/data?

If you're definitely hitting the right directory, then is the database
shutting down cleanly on reboot?  It might be that it's getting killed
during a write and you've got some kind of problem with fsync on your
machine, so the db is getting corrupted.

> > Can you be more specific on what exact query causes the problem to
> > show up?
>
> It turned out that _any_ query on that table caused the problem to
> show up.  I couldn't even do "DROP TABLE ifl;" without postgres
> growing until it ran out of memory.

That definitely sounds like some kind of issue beyond just the size of
the table, like some kind of corruption.

> So in the end, I wiped it clean and reloaded the data - this time
> bounding all of the work with BEGIN/COMMIT.  So far things are
> looking better.  All of the data I've been building the tables with
> is elsewhere, so I can reconstruct it.  Maybe adding BEGIN/COMMIT
> makes no difference to not using them before, but I'm curious to see
> if it does.  Ideally I'd like to get to a place where I don't need
> to use vacuum at all.

If nothing else, wrapping your load and table building in BEGIN/COMMIT
should make it faster.

> > Have you been analyzing your data before you start working on it?
>
> No.

You should definitely run ANALYZE after loading or updating a table.
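To make the speed-up concrete, here is a minimal sketch of generating a
batched reload script (the table name "ifl" is from the thread, but the
row data and batch size are hypothetical). Each batch runs in one
explicit transaction instead of one implicit transaction per INSERT,
and an ANALYZE at the end refreshes the planner statistics:

```python
def batched_load_sql(table, rows, batch_size=1000):
    """Emit SQL that loads rows in explicit BEGIN/COMMIT batches.

    One transaction per batch means one commit (and one fsync) per
    batch_size rows, instead of one per row under autocommit.
    """
    stmts = []
    for i in range(0, len(rows), batch_size):
        stmts.append("BEGIN;")
        for row in rows[i:i + batch_size]:
            values = ", ".join(str(v) for v in row)
            stmts.append("INSERT INTO %s VALUES (%s);" % (table, values))
        stmts.append("COMMIT;")
    # Refresh planner statistics once the bulk load is done.
    stmts.append("ANALYZE %s;" % table)
    return "\n".join(stmts)

print(batched_load_sql("ifl", [(1, 2), (3, 4), (5, 6)], batch_size=2))
```

Feeding the generated script to psql has the same effect as
hand-wrapping the load in BEGIN/COMMIT as you did on the reload.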
ANALYZE helps the query planner make the optimal choice of query plan.

> > Can we see your postgresql.conf file?
>
> Sure, I've attached it.
> I've also run with the "default" .conf file without tuning it (down.)
>
> Darren

Nothing odd here:

> shared_buffers = 2000            # min 16 or max_connections*2, 8KB each
> temp_buffers = 200               # min 100, 8KB each
> max_prepared_transactions = 5    # can be 0 or more
> # note: increasing max_prepared_transactions costs ~600 bytes of
> # shared memory per transaction slot, plus lock space (see
> # max_locks_per_transaction).
> work_mem = 4096                  # min 64, size in KB
> maintenance_work_mem = 8192      # min 1024, size in KB
> max_stack_depth = 400            # min 100, size in KB
>
> # - Free Space Map -
>
> max_fsm_pages = 20000            # min max_fsm_relations*16, 6 bytes each
> max_fsm_relations = 200          # min 100, ~70 bytes each

This:

> effective_cache_size = 1000      # typically 8KB each

is a little low, but that's not a huge deal.  effective_cache_size
doesn't allocate anything; it just tells the planner roughly how much
memory the OS is using to cache your database.

I don't see anything else in your postgresql.conf that looks
suspicious.  I'm leaning towards possible pilot error in shutting down
or starting up the db.

---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
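For reference, a quick sketch of the arithmetic behind those settings:
in this conf file, shared_buffers and effective_cache_size are counts
of 8KB pages (per the comments in the file itself), so the values can
be converted to megabytes like this:

```python
PAGE_KB = 8  # each buffer/page is 8KB, per the conf file comments

def pages_to_mb(pages):
    """Convert a page-count setting from this conf file to megabytes."""
    return pages * PAGE_KB / 1024.0

print(pages_to_mb(2000))  # shared_buffers = 2000       -> 15.625 MB
print(pages_to_mb(1000))  # effective_cache_size = 1000 ->  7.8125 MB
```

Telling the planner that the OS caches only ~8MB of the database is why
the setting reads low; on a box where most of the RAM is free for the
OS cache, a much larger value would be more realistic.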