Hi, I'm trying to evaluate PostgreSQL as a database that will have to store a
high volume of data and access that data frequently. One of the features on our
wish list is the ability to use stored procedures to access the data, and I was
wondering: is it usual for stored procedures to
perform more slowly on PostgreSQL than raw SQL? A simple example can be shown
with the following commands.

First I created a test table:

    CREATE TABLE test (
        id int8,
        name varchar(128),
        description varchar(500),
        constraint "pk_test" primary key (id)
    );

Then the function I want to test:

    CREATE OR REPLACE FUNCTION readTest() RETURNS SETOF test AS $$
    DECLARE
        row test%ROWTYPE;
    BEGIN
        FOR row IN SELECT * FROM test LOOP
            RETURN NEXT row;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

Firstly, I ran EXPLAIN ANALYZE on the raw SQL to see how long that
takes to access the database; the results are as follows:

    EXPLAIN ANALYZE SELECT * FROM test;

    Seq Scan on test  (cost=0.00..10.90 rows=90 width=798) (actual time=0.003..0.003 rows=0 loops=1)
    Total runtime: 0.074 ms
    (2 rows)

Secondly, I ran EXPLAIN ANALYZE on the function created above, and
the results are as follows:

    EXPLAIN ANALYZE SELECT * FROM readTest();

    Function Scan on readtest  (cost=0.00..12.50 rows=1000 width=798) (actual time=0.870..0.870 rows=0 loops=1)
    Total runtime: 0.910 ms
    (2 rows)

I know that the function is planned the first time it is
executed, so I ran the same command again to remove that processing from the
timings. The results are as follows:

    EXPLAIN ANALYZE SELECT * FROM readTest();

    Function Scan on readtest  (cost=0.00..12.50 rows=1000 width=798) (actual time=0.166..0.166 rows=0 loops=1)
    Total runtime: 0.217 ms
    (2 rows)

Even with the planning removed, the function still performs
significantly slower than the raw SQL. Is that normal, or am I doing something
wrong in how I create or call the function?

Thanks for your help,
Simon
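
P.S. For comparison, here is a sketch of the same set-returning function
written in plain SQL rather than PL/pgSQL (untested here; readTestSql is just
an illustrative name). Simple single-statement SQL functions of this form can
often be inlined by the planner, so timing it against the PL/pgSQL version
should show how much of the overhead is specific to PL/pgSQL:

    -- Same result set as readTest(), but as one SQL statement
    -- instead of a row-by-row PL/pgSQL loop.
    CREATE OR REPLACE FUNCTION readTestSql() RETURNS SETOF test AS $$
        SELECT * FROM test;
    $$ LANGUAGE sql STABLE;

    EXPLAIN ANALYZE SELECT * FROM readTestSql();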