Hi Kevin
Thanks for your response. I tried doing what you suggested, so the table now has a primary key of 'CONSTRAINT data_area_pkey PRIMARY KEY (area_id, data_id);' and I've added the index 'CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id);'. Unfortunately it hasn't resulted in an improvement of the query performance. Here is the explain: http://explain.depesz.com/s/tDL I think there is no performance increase because it's now not using the primary key, just the index on data_id. Have I done what you suggested correctly? Any other suggestions?
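In case it's useful, the relevant parts of the table definition now look roughly like this (other columns and the foreign key constraints omitted, and the column types here are just placeholders):

CREATE TABLE data_area (
    data_id integer NOT NULL,   -- type assumed, not stated above
    area_id integer NOT NULL,   -- type assumed, not stated above
    CONSTRAINT data_area_pkey PRIMARY KEY (area_id, data_id)
);

CREATE INDEX data_area_data_id_index ON data_area USING btree (data_id);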
Thanks very much for your help,
Mark
On 5 April 2013 17:37, Kevin Grittner <kgrittn@xxxxxxxxx> wrote:
Mark Davidson <mark@xxxxxxxxxxx> wrote:

> CONSTRAINT data_area_pkey PRIMARY KEY (data_id, area_id),

So the only index on this 250 million row table starts with the ID
of the point, but you are joining to it by the ID of the area.
That requires a sequential scan of all 250 million rows. Switch
the order of the columns in the primary key, add a unique index
with the columns switched, or add an index on just the area ID.
Perhaps you thought that the foreign key constraints would create
indexes? (They don't.)
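
In DDL terms, those options would look something like the following; only one of them is needed, and which is best depends on your other queries (index names here are just examples):

-- Option 1: switch the column order in the primary key so area_id leads.
ALTER TABLE data_area DROP CONSTRAINT data_area_pkey;
ALTER TABLE data_area
    ADD CONSTRAINT data_area_pkey PRIMARY KEY (area_id, data_id);

-- Option 2: keep the existing primary key and add a unique index
-- with the columns in the opposite order.
CREATE UNIQUE INDEX data_area_area_id_data_id_key
    ON data_area USING btree (area_id, data_id);

-- Option 3: add a plain index on just the area ID.
CREATE INDEX data_area_area_id_index ON data_area USING btree (area_id);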
--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company