Although I'm not an expert on this stuff, 32 MB of shared_buffers seems quite low to me, even for a Windows machine. I'm running Postgres 8.2 on my workstation (2 GB of RAM, AMD x64 3500+) with shared_buffers set to 256 MB without any trouble, and it runs fine even on large datasets and with other applications open. In my experience, shared_buffers is more important than work_mem.

Have you tried increasing default_statistics_target (e.g. to 200 or more) and then running ANALYZE on your tables or on the entire database? (A small example is at the end of this mail.)

Marc

Christian Rengstl wrote:
> Hi list,
>
> I have the following query:
>
>   select t.a1, t.a2 from table1 t inner join table2 s
>   using(id) where t.pid='xyz' and s.chromosome=9 order by s.pos;
>
> with the following output from explain analyze:
>
> "Sort  (cost=35075.03..35077.51 rows=991 width=14) (actual time=33313.718..33321.935 rows=22599 loops=1)"
> "  Sort Key: s.pos"
> "  ->  Hash Join  (cost=7851.48..35025.71 rows=991 width=14) (actual time=256.513..33249.701 rows=22599 loops=1)"
> "        Hash Cond: ((t.id)::text = (s.id)::text)"
> "        ->  Bitmap Heap Scan on table1 t  (cost=388.25..27357.57 rows=22286 width=23) (actual time=112.595..32989.663 rows=22864 loops=1)"
> "              Recheck Cond: ((pid)::text = 'xyz'::text)"
> "              ->  Bitmap Index Scan on idx_table1  (cost=0.00..382.67 rows=22286 width=0) (actual time=103.790..103.790 rows=22864 loops=1)"
> "                    Index Cond: ((pid)::text = 'xyz'::text)"
> "        ->  Hash  (cost=7180.62..7180.62 rows=22609 width=17) (actual time=143.867..143.867 rows=22864 loops=1)"
> "              ->  Bitmap Heap Scan on table2 s  (cost=333.00..7180.62 rows=22609 width=17) (actual time=108.715..126.637 rows=22864 loops=1)"
> "                    Recheck Cond: ((chromosome)::text = '9'::text)"
> "                    ->  Bitmap Index Scan on idx_table2  (cost=0.00..327.35 rows=22609 width=0) (actual time=108.608..108.608 rows=22864 loops=1)"
> "                          Index Cond: ((chromosome)::text = '9'::text)"
>
> My OS is Windows 2003 with 4 GB RAM and a 3.2 GHz Xeon Duo; shared_buffers is set to 32MB (as I read it should be fairly low on Windows) and work_mem is set to 2500MB, but the query nevertheless takes about 38 seconds to finish. The table "table1" contains approx. 3 million tuples and "table2" approx. 500,000 tuples. If anyone could give advice on how to optimize the settings in postgresql.conf, or on anything else to make this query run faster, I would really appreciate it.
>
>
> Christian Rengstl M.A.
> Klinik und Poliklinik für Innere Medizin II
> Kardiologie - Forschung
> Universitätsklinikum Regensburg
> B3 1.388
> Franz-Josef-Strauss-Allee 11
> 93053 Regensburg
> Tel.: +49-941-944-7230

--
Marc Schablewski
click:ware Informationstechnik GmbH
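
P.S.: A minimal sketch of what I mean with the statistics target, untested on my side and assuming your tables and columns really are called table1.pid and table2.chromosome as in your query (200 is just an example value):

    -- raise the target for this session only, then refresh the statistics;
    -- alternatively, set default_statistics_target in postgresql.conf and reload
    SET default_statistics_target = 200;
    ANALYZE table1;
    ANALYZE table2;

    -- or raise it permanently, but only for the filter columns from your query
    ALTER TABLE table1 ALTER COLUMN pid SET STATISTICS 200;
    ALTER TABLE table2 ALTER COLUMN chromosome SET STATISTICS 200;
    ANALYZE;

Afterwards, run the explain analyze again and check whether the estimated row counts come closer to the actual ones.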