On 07/12/2010 9:29 PM, Tom Polak wrote:
From EXPLAIN ANALYZE I can see the query ran much faster.
"Nested Loop Left Join (cost=0.00..138.04 rows=1001 width=1298) (actual
time=0.036..4.679 rows=1001 loops=1)"
" Join Filter: (pgtemp1.state = pgtemp2.stateid)"
" -> Seq Scan on pgtemp1 (cost=0.00..122.01 rows=1001 width=788)
(actual time=0.010..0.764 rows=1001 loops=1)"
" -> Materialize (cost=0.00..1.01 rows=1 width=510) (actual
time=0.000..0.001 rows=1 loops=1001)"
" -> Seq Scan on pgtemp2 (cost=0.00..1.01 rows=1 width=510)
(actual time=0.006..0.008 rows=1 loops=1)"
"Total runtime: 5.128 ms"
The general question comes down to: can I expect decent performance from PostgreSQL compared to MSSQL? I was hoping that PostgreSQL 9.0 would beat MSSQL 2000, since MSSQL 2000 is over 10 years old.
So Postgres actually executed the SELECT in around 5 milliseconds. Pretty
good, I would say. The problem therefore lies not with Postgres itself,
but with what is done with the results afterwards. Assuming this is
purely local, and therefore there are no network issues, perhaps there is
a performance problem in this case with the Npgsql driver? Someone who
knows more about that driver could perhaps shed some light on this.
I have used .NET (C#) with Postgres before, but only via the ODBC
driver. Perhaps you could try that instead (using OdbcCommand,
OdbcDataReader, etc.).
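To narrow down where the time actually goes, something along these lines might help: a minimal sketch (untested; the connection string, DSN name, and query are placeholders you would need to adapt) that times the server round-trip separately from the client-side row loop, so you can see whether Postgres or the driver/result handling is the bottleneck:

```csharp
using System;
using System.Data.Odbc;
using System.Diagnostics;

class OdbcTiming
{
    static void Main()
    {
        // Placeholder connection string -- adjust driver name, host,
        // database, and credentials for your setup.
        string connStr = "Driver={PostgreSQL Unicode};Server=localhost;" +
                         "Database=testdb;Uid=postgres;Pwd=secret;";

        using (var conn = new OdbcConnection(connStr))
        {
            conn.Open();
            var cmd = new OdbcCommand(
                "SELECT * FROM pgtemp1 LEFT JOIN pgtemp2 " +
                "ON pgtemp1.state = pgtemp2.stateid", conn);

            var sw = Stopwatch.StartNew();
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                // Time until the first batch of results is available.
                Console.WriteLine("ExecuteReader: {0} ms",
                                  sw.ElapsedMilliseconds);

                // Now iterate all rows; if this loop dominates, the cost
                // is in the driver / client-side processing, not the query.
                int rows = 0;
                while (reader.Read())
                    rows++;

                Console.WriteLine("Read {0} rows, {1} ms total",
                                  rows, sw.ElapsedMilliseconds);
            }
        }
    }
}
```

If ExecuteReader returns quickly but the Read() loop is slow, the query itself (as the EXPLAIN ANALYZE output suggests) is not the problem.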
I mainly use Ruby (JRuby) with Postgres, both under Linux and Windows,
but I can certainly process 1000 records of a similar structure in well
under 1 second.
Cheers,
Gary.
--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance