I fear this has been asked many times about PostgreSQL, and I have read the docs about how indexes are supposed to be defined and used, but I don't understand why the engine and optimizer are doing what they do in even the simplest of situations. Is it that their tuning is heavily data dependent? My case of interest is more complicated, but I created a toy case to try to understand it. Here it is:

-- Table "foo" DDL
CREATE TABLE "public"."foo" (
    "projectid"   int4 NOT NULL,
    "uid"         int4 NOT NULL,
    "name"        varchar(254) NOT NULL,
    "ver"         varchar(127) NOT NULL,
    "startdate"   date NOT NULL,
    "enddate"     date NOT NULL,
    "status"      varchar(254) NOT NULL,
    "percentdone" numeric(7,2) NOT NULL,
    "championuid" int4 NOT NULL,
    "pmuid"       int4 NOT NULL,
    PRIMARY KEY ("projectid")
) WITHOUT OIDS;

-- Table "bignum" DDL
CREATE TABLE "public"."bignum" (
    "thing" numeric(100) NOT NULL
) WITHOUT OIDS;

CREATE INDEX "t" ON "public"."bignum" USING btree ("thing");

Running

EXPLAIN ANALYZE SELECT A.* FROM bignum B, foo A WHERE A.projectid = B.thing;

yields:

Nested Loop  (cost=0.00..15.51 rows=1 width=407) (actual time=0.041..0.041 rows=0 loops=1)
  Join Filter: ((a.projectid)::numeric = b.thing)
  ->  Seq Scan on bignum b  (cost=0.00..1.01 rows=1 width=16) (actual time=0.024..0.027 rows=1 loops=1)
  ->  Seq Scan on foo a  (cost=0.00..11.80 rows=180 width=407) (actual time=0.003..0.003 rows=0 loops=1)
Total runtime: 0.169 ms

Like *how* *come*? There are indexes on both columns of the join. Is it the NUMERIC datatype messing things up? Unlikely, as I've seen the same with INTEGERs. If it is data dependent (these tables are presently empty), any suggestions on how to tune a database for unknown mixes of data?

This is run on the Windows version of PG, but I'm seeing the same kind of thing on Linux. Thanks.
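In case it helps anyone replying, here is a sketch of what I'm planning to try next. The index name is one I made up, and I'm only guessing that the int4-to-numeric cast shown in the Join Filter is part of the problem:

-- Guess #1: give foo an index whose type matches the join filter,
-- so the cast can't rule out an index scan. (Expression index syntax
-- needs the extra parentheses.)
CREATE INDEX foo_projectid_num ON "public"."foo" ((projectid::numeric));

-- Guess #2: plans are statistics-driven, so refresh the stats once
-- the tables hold real data instead of zero rows.
ANALYZE "public"."foo";
ANALYZE "public"."bignum";

-- For testing only: discourage sequential scans to see whether the
-- planner *can* use the indexes at all, then put the setting back.
SET enable_seqscan = off;
EXPLAIN ANALYZE SELECT A.* FROM bignum B, foo A WHERE A.projectid = B.thing;
SET enable_seqscan = on;

If the expression index gets picked once the tables have enough rows, that would at least tell me it's the cast plus the tiny table sizes, and not the NUMERIC type itself.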