I am not an expert on Postgres internals, but in general: if you have a
multicolumn btree index on the 4 fields, the lookup should take around
log_F(4M) page reads. Let's guess F to be 5, so that's roughly 10 I/Os
plus one block read for the row itself. A hash index for the same lookup
costs about the same or less. If you have only a subset of the fields
indexed with a btree, it costs log_F(4M) plus reading all the blocks that
contain rows matching that subset, which is still better than a
sequential scan. Another possibility, which requires careful analysis of
the value frequencies, is to intersect the results from 4 separate
single-column indices and find the 1 row that matches all of them.

In any case, when in doubt run EXPLAIN on your query; see the
documentation. A rough sketch of how to compare the two plans is at the
end of this message, below the quoted text.

Regards,
tzahi.

> -----Original Message-----
> From: pgsql-general-owner@xxxxxxxxxxxxxx
> [mailto:pgsql-general-owner@xxxxxxxxxxxxxx] On Behalf Of
> Postgres Coder
> Sent: Friday, February 25, 2005 1:46 AM
> To: pgsql-general@xxxxxxxxxxxxxx
> Subject: Which query is less expensive / faster?
>
> Hi all,
>
> I have a couple of different ways to get results out of my table
> structure, but I'm unsure which way is faster or less expensive to
> run:
>
> Option 1: Query a table of 4 million rows, on 4 indexed
> columns. It will return 1 row:
>
> SELECT field1, field2, field3, field4
> FROM tablea
> WHERE field1 = $1 AND field2 = $2 AND field3 = $3 AND field4 = $4
>
> Option 2: Query a table of 200,000 rows on 1 indexed column.
> It will return 800 rows:
>
> SELECT *
> FROM tableb
> WHERE field1 = $1
>
> Which one is going to return results the fastest, with the
> least expense to the database server?
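For what it's worth, here is a minimal sketch of how I would compare the
two options with EXPLAIN ANALYZE, using the table and column names from
the quoted message. The index name is made up, and the literal values
stand in for your $1..$4 parameters; adjust to your actual data.

    -- Multicolumn btree covering all four WHERE columns (hypothetical
    -- index name; put the most selective columns first if you can).
    CREATE INDEX tablea_f1_f2_f3_f4_idx
        ON tablea (field1, field2, field3, field4);

    -- Option 1: should show an index scan returning a single row.
    -- The literal values are placeholders for your real parameters.
    EXPLAIN ANALYZE
    SELECT field1, field2, field3, field4
    FROM tablea
    WHERE field1 = 1 AND field2 = 2 AND field3 = 3 AND field4 = 4;

    -- Option 2: should show an index scan on tableb's field1 index
    -- followed by fetching the ~800 matching rows.
    EXPLAIN ANALYZE
    SELECT *
    FROM tableb
    WHERE field1 = 1;

The actual timings printed by EXPLAIN ANALYZE on your own data will tell
you more than any back-of-the-envelope I/O estimate.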