--- lists@natserv.com wrote:
> On Tue, 10 Feb 2004, CSN wrote:
>
> > I have a pretty simple select query that joins a table (p) with
> > 125K rows with another table (pc) with almost one million rows:
> >
> >   select p.*
> >   from product_categories pc
> >   inner join products p
> >     on pc.product_id = p.id
> >   where pc.category_id = $category_id
> >   order by p.title
> >   limit 25
> >   offset $offset
> >
> > The query usually takes about five seconds to execute (all other PG
> > queries perform fast enough). I have indexes on everything needed,
> > and EXPLAIN shows they're being used. Is there anything else I can
> > do to improve performance - such as tweaking some settings in the
> > config?
> >
> > Redhat 9, PG 7.4.1.
>
> Could you give more info on the hardware?

Intel(R) Celeron(R) CPU 1.70GHz
1 GB RAM

> You did not mention how often you do your vacuum analyze or how often
> data is updated/deleted.

I've done both vacuum and vacuum analyze on the database. Vacuum full
takes forever (I haven't let it run its full course yet). The data is
completely static - no updates/deletes, just selects.

> The more info you provide the more we can try to help.
>
> How about your buffer and other settings?

shared_buffers = 1000

That's about the only setting I changed in postgresql.conf.

TIA,
CSN
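
[Editor's note: a minimal sketch of the kind of plan check and tuning being
discussed above. The composite index, its name, and the concrete numbers are
assumptions for a 1 GB machine running 7.4, not figures taken from this
thread; 42 and 0 stand in for $category_id and $offset.]

    -- Time the actual plan rather than the estimate, to see where the
    -- five seconds go (the sort for ORDER BY is a common culprit here).
    EXPLAIN ANALYZE
    SELECT p.*
    FROM product_categories pc
    INNER JOIN products p ON pc.product_id = p.id
    WHERE pc.category_id = 42        -- substitute a real $category_id
    ORDER BY p.title
    LIMIT 25 OFFSET 0;               -- substitute a real $offset

    -- Hypothetical composite index: covers both the filter column and
    -- the join column on the million-row table in one index scan.
    CREATE INDEX product_categories_cat_prod_idx
        ON product_categories (category_id, product_id);

Possible postgresql.conf starting points for a 1 GB box on 7.4 (values are
guesses to tune from, not recommendations from the thread; in 7.4,
shared_buffers and effective_cache_size are counted in 8 kB pages and
sort_mem in kB):

    shared_buffers = 8192          # ~64 MB, up from 1000 (~8 MB)
    sort_mem = 8192                # 8 MB per sort; ORDER BY p.title spills to disk if too low
    effective_cache_size = 65536   # ~512 MB; hints the planner about the OS cache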