Tom Lane wrote:
> David Rysdam <drysdam@xxxxxxxxxx> writes:
>> Right now, I'm working on a test case that involves a table with ~360k
>> rows called "nb.sigs". My sample query is:
>>
>> select * from nb.sigs where signum > 250000
>>
>> With no index, explain says this query costs 11341. After CREATE INDEX
>> on the signum field, along with an ANALYZE for nb.sigs, the query costs
>> 3456 and takes around 4 seconds to return the first row. This seems
>> extremely slow to me, but I can't figure out what I might be doing
>> wrong. Any ideas?
>
> How many rows does that actually return, and what client interface are
> you fetching it with? libpq, at least, likes to fetch the entire query
> result before it gives it to you --- so you're talking about 4 sec to
> get all the rows, not only the first one. That might be reasonable if
> you're fetching 100k rows via an indexscan...
>
> 		regards, tom lane

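(Spelled out, the quoted setup amounts to roughly the following; the index
name here is made up, and EXPLAIN ANALYZE, unlike plain EXPLAIN, reports
actual row counts and timings rather than just estimated costs:)

    CREATE INDEX sigs_signum_idx ON nb.sigs (signum);
    ANALYZE nb.sigs;
    EXPLAIN ANALYZE SELECT * FROM nb.sigs WHERE signum > 250000;
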
Right, it's about 100k rows, and it's through libpq (pgadmin in this
case, but my app uses libpq via pgtcl). Is there a way to tell libpq
not to do what it "likes" and do what I need instead? I didn't see
anything in the docs, but I didn't look very hard.
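(The usual workaround, as far as I know, is a server-side cursor: wrap the
query in DECLARE ... CURSOR and FETCH it in batches, which keeps any
libpq-based client from pulling all 100k rows into memory at once. A rough
sketch, with a made-up cursor name and batch size:)

    BEGIN;
    DECLARE sigs_cur CURSOR FOR
        SELECT * FROM nb.sigs WHERE signum > 250000;
    FETCH 1000 FROM sigs_cur;   -- repeat until a fetch returns no rows
    CLOSE sigs_cur;
    COMMIT;

(If I remember the pgtcl interface right, these statements can be sent
through pg_exec like any other query, so the app can process each batch as
it arrives.)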