>> I'm not sure if I'm heading up the right alley - it seems too simple! -
>> but here is my issue.
>>
>> I have about 3000 records in 'mytable', and a simple
>
>Number of rows is not the most important thing here - the number of
>occupied disc pages is (you can have a lot of small rows or a small
>number of large rows occupying the same space).
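>A quick way to check the page count is the pg_class catalog - something
>like this should work (relpages is the number of 8 kB disc pages as of
>the last VACUUM/ANALYZE, reltuples the estimated row count):
>
>  SELECT relpages, reltuples FROM pg_class WHERE relname = 'mytable';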
This table contains two varchar fields, that's all.
>> select * from mytable where x=1
>>
>> is timed as:
>>
>> Total query runtime: 2933 ms.
>> Data retrieval runtime: 791 ms.
>>
>> EXPLAIN says it's Seq Scan, but the actual filter expression I'm
>> using returns all rows from the table anyway. If I run
>>
>> select * from mytable
>>
>> Total query runtime: 3444 ms.
>> Data retrieval runtime: 771 ms.
>
> Please post the EXPLAIN ANALYZE output for these here; it's difficult
> to guess the cause without it.
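Here it is - this plan is for the user_id filter, i.e. a statement along
the lines of:

  EXPLAIN ANALYZE SELECT * FROM mytable WHERE user_id = 'test2';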
QUERY PLAN
Seq Scan on mytable  (cost=0.00..56.23 rows=2898 width=19) (actual time=0.012..5.762 rows=2898 loops=1)
  Filter: ((user_id)::text = 'test2'::text)
Total runtime: 10.014 ms
>> At the same time:
>>
>> select * into x from prl_user_entities
>
> Is this a different table or just a mistype?
Typo. Sorry.
>
>> Query returned successfully with no result in 600 ms.
>>
>> Why does SELECT take 3+ seconds to execute? Is it something to do
>> with my Postgres server optimization, or does PgAdmin not show the
>> correct data retrieval runtime (does it leak over into the query
>> runtime or something)?
>
> As someone already pointed out, this overhead is probably caused by
> the fact that the data have to be transmitted to the client in the
> first case, whereas with 'SELECT INTO' almost no data are sent over
> the connection (it all happens on the server).
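> Schematically, the difference is (reusing the table and target from
> your mail):
>
>   -- all 2898 rows travel over the connection to the client:
>   SELECT * FROM mytable;
>
>   -- rows are written into a new table x on the server; only a command
>   -- status goes back to the client:
>   SELECT * INTO x FROM mytable;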
I assumed the same thing. However, the 'data retrieval runtime' reported
by PgAdmin is really small compared to the 'query runtime'... I would
expect it to be the other way around.
Thanks!
Peter