On Sun, 6 Nov 2005, PostgreSQL wrote:

> SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING
> count(*) > 1;
>
> This is a dual Opteron box with 16 Gb memory and a 3ware SATA raid
> running 64bit SUSE. Something seems badly wrong.
>
> GroupAggregate  (cost=9899282.83..10285434.26 rows=223858 width=15)
>   Filter: (count(*) > 1)
>   ->  Sort  (cost=9899282.83..9994841.31 rows=38223392 width=15)
>         Sort Key: v_barcode
>         ->  Seq Scan on lead  (cost=0.00..1950947.92 rows=38223392 width=15)

What does the plan look like in 8.0? Since it's so much faster, I assume
you get a different plan.

> shared_buffers = 50000
> work_mem = 16384
> maintenance_work_mem = 16384
> max_fsm_pages = 100000
> max_fsm_relations = 5000
> wal_buffers = 32
> checkpoint_segments = 32
> effective_cache_size = 50000
> default_statistics_target = 50

The effective_cache_size is way too low: only about 390M on a machine
with 16G of memory. Try bumping it to 1000000, which means almost 8G
(how nice it would be to be able to write 8G instead...). It could be
set even higher, but it's hard for me to know what else your memory is
used for. I don't know whether this setting will affect this particular
query, but it should have a positive effect on a lot of queries.

work_mem also seems low, but it's hard to suggest a good value for it
without knowing more about how your database is used.

-- 
/Dennis Björklund
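
P.S. For reference, a minimal sketch of what I'd try, assuming the box is
mostly dedicated to PostgreSQL (the exact numbers are illustrative, not a
recommendation). Both settings are given in the units 8.x uses:
effective_cache_size in 8kB pages, work_mem in kB.

    # postgresql.conf
    # 1000000 pages * 8kB = ~7.6GB, roughly half of the 16GB of RAM
    effective_cache_size = 1000000

    # per-sort/per-hash memory; 65536kB = 64MB is only a starting point,
    # since each concurrent sort or hash can use this much on its own
    work_mem = 65536

Both take effect on a reload (no restart needed), and work_mem can also
be raised per session with SET work_mem = ... to see how it affects just
this query. To compare the plans between versions, EXPLAIN ANALYZE on
both boxes shows the actual plan and timings (note that it runs the
query, so it takes as long as the real thing):

    EXPLAIN ANALYZE
    SELECT v_barcode, count(v_barcode)
    FROM lead
    GROUP BY v_barcode
    HAVING count(*) > 1;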