Hi Community,
We are analyzing the performance behaviour of a PostgreSQL database: for an INSERT workload of 300,000 records (on a 64-bit SLES 10 Linux server), PostgreSQL takes around 150 min.
Here are some observations from the PostgreSQL performance testing: the table has around 310,000 records; it takes 8.313 ms to search for/retrieve a record and a further 8.321 ms to update it, so in total it takes approximately 17.9289 ms for the search and update during a synchronisation operation (these timings are recorded from the PostgreSQL Embedded SQL modules).
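(As a sanity check on the arithmetic: at roughly 17.9 ms per record, the per-record search-and-update alone accounts for about 17.9 ms × 300,000 ≈ 5,380 s ≈ 90 min, so the remaining time presumably goes to overhead outside these per-record timings.)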
This is very slow compared to Sybase for the same INSERT operation (with 300,000 students): there are around 300,000 update operations, and they take approx. 150 min as against 30-40 min on Sybase (running the same operation on the same hardware).
However, when checked from the psql client, the queries are much faster. Below are the EXPLAIN plans of the SELECT and the UPDATE (student_id has a UNIQUE index, and id is the primary key):
EXPLAIN ANALYSE SELECT id FROM students WHERE student_id = '9101';
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
Index Scan using student_ind2 on students  (cost=0.00..8.38 rows=1 width=4) (actual time=0.096..0.097 rows=1 loops=1)
Index Cond: (student_id = 9101)
Total runtime: 0.111 ms
(3 rows)
EXPLAIN ANALYSE UPDATE students SET ... WHERE student_id = '9101';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Update (cost=0.00..8.38 rows=1 width=64) (actual time=0.039..0.039 rows=0 loops=1)
  ->  Index Scan using student_ind2 on students  (cost=0.00..8.38 rows=1 width=64) (actual time=0.016..0.017 rows=1 loops=1)
Index Cond: (student_id = 9101)
Total runtime: 0.080 ms
As the plans above show, the queries are using the indexes as we expected.
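Since individual statements are fast from psql, we suspect per-statement overhead, possibly one commit (and therefore one WAL flush) per update; we have not yet confirmed how often our Embedded SQL module commits. Would batching the updates into fewer, larger transactions help? A minimal sketch of what we could try (the SET list is elided here because it varies per record):

BEGIN;
UPDATE students SET ... WHERE student_id = '9101';
-- ... repeated for each of the ~300,000 records ...
COMMIT;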
Can anyone help us improve the performance?
Shiva
---------- Forwarded message ----------
From: Shivakumar Ramannavar <shivasr@xxxxxxxxx>
Date: Mon, Oct 31, 2011 at 4:51 PM
Subject: My Query to insert and retrieve takes time
To: pgsql-admin@xxxxxxxxxxxxxx
Hi Community,
I need some help. I am running a PostgreSQL 9.1 database with a table of around 300,000 records, and my problem is that it takes 8.314 milliseconds to update one record. Below is the EXPLAIN plan of the UPDATE (terminal_id has a UNIQUE index, and term_index is the primary key):
EXPLAIN ANALYZE UPDATE terminal
SET unit_address=E'\\x00000015a3'
WHERE terminal_id = '320000';
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
Update (cost=0.00..8.38 rows=1 width=64) (actual time=0.074..0.074 rows=0 loops=1)
-> Index Scan using terminal_ind2 on terminal (cost=0.00..8.38 rows=1 width=64) (actual time=0.047..0.048 rows=1 loops=1)
Index Cond: (terminal_id = 320000)
Total runtime: 0.182 ms
There are around 300,000 such update operations, and together they take approx. 80 min. Please let me know how to improve the performance.
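As a diagnostic experiment only (assuming durability can be relaxed for a test run), would it be reasonable to check whether commit flushes dominate by disabling synchronous commit for the session before the batch?

SET synchronous_commit = off;
-- run the ~300,000 updates here, then restore the default:
SET synchronous_commit = on;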
Thanks!
Shivakumar Ramannavar