Hi,
I have already checked all the statements in the trigger; none of them
takes more than 20 ms.
I am using an 8-processor, quad-core server; CPU utilization is 90-95%
or more on all cores (htop result).
The DB has 960 concurrent users.
IO: writing 3-4 MB per second or less (iotop result).
Scenario: all inserts are waiting for the previous insert to complete.
Can't we avoid this situation?
What is the maximum "max_connections" PostgreSQL supports?
Please help.
--
Thanks & regards,
JENISH VYAS
On Mon, Jun 27, 2011 at 6:32 PM, <tv@xxxxxxxx> wrote:
> Hi,
>
> DB : POSTGRES 8.4.8
> OS : Debian
> HD : SAS 10k rpm
>
> shared_buffers is 4096 (25% of RAM), effective_cache_size is 8 GB (75% of RAM)
>
> The after-insert trigger calls 2 more triggers and inserts records
> into another table, depending on conditions.
>
> With all triggers enabled there are 8 inserts and 32 updates (the
> approximate number of updates depends on the hierarchy).
It's very difficult to give you reliable recommendations with so little
info, but the triggers are obviously the bottleneck. We have no idea
what queries are executed in them, but I guess there are some slow
queries.
Find out what queries are executed in the triggers, benchmark each of them
and make them faster. Just don't forget that those SQL queries are
executed as prepared statements, so they may behave a bit differently than
plain queries. So use 'PREPARE' and 'EXPLAIN EXECUTE' to tune them.
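For example, a minimal sketch of that kind of check (Python + psycopg2;
the table, columns and statement are made up, since we don't know what
your triggers actually run) could look like this:

    # Sketch: benchmark one of the trigger's statements as a prepared
    # statement, which is how PL/pgSQL executes it. Names are hypothetical.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")  # adjust connection string
    cur = conn.cursor()

    # hypothetical statement -- replace with a query actually run inside the trigger
    cur.execute("PREPARE child_ins (integer, numeric) AS "
                "INSERT INTO child_table (parent_id, amount) VALUES ($1, $2)")

    # EXPLAIN EXECUTE shows the plan chosen for the prepared statement
    cur.execute("EXPLAIN EXECUTE child_ins (1, 100)")
    for row in cur.fetchall():
        print(row[0])

    # time a batch of executions to get a realistic per-statement cost
    n = 1000
    start = time.time()
    for i in range(n):
        cur.execute("EXECUTE child_ins (%s, %s)", (i, 100))
    conn.commit()
    elapsed = time.time() - start
    print("%d executions: %.2f s total, %.3f ms each" % (n, elapsed, elapsed * 1000.0 / n))

    cur.close()
    conn.close()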
> Please explain "multiple connections". In the current scenario the
> application server is sending all the requests.

PostgreSQL does not support parallel queries (i.e. a query distributed
on multiple CPUs), so each query may use just a single CPU. If you're
CPU bound (one CPU is 100% utilized but the other CPUs are idle), you
can usually parallelize the workload on your own - just use multiple
connections.
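As a rough illustration (Python + psycopg2; the DSN and table are
hypothetical, not your actual schema), spreading inserts over several
connections might look like this:

    # Sketch: run inserts over several connections in parallel, so more than
    # one CPU core can work on them at the same time.
    import threading
    import psycopg2

    DSN = "dbname=mydb user=postgres"   # adjust to your environment
    N_CONNECTIONS = 4                   # number of parallel connections
    ROWS_PER_CONNECTION = 1000

    def insert_batch(worker_id):
        # each thread gets its own connection -- a psycopg2 connection should
        # not be used concurrently from several threads
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        for i in range(ROWS_PER_CONNECTION):
            cur.execute("INSERT INTO parent_table (worker, seq) VALUES (%s, %s)",
                        (worker_id, i))
        conn.commit()
        cur.close()
        conn.close()

    threads = [threading.Thread(target=insert_batch, args=(w,))
               for w in range(N_CONNECTIONS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Whether this helps depends on where the time goes - if the inserts
serialize on locks taken inside the triggers, extra connections will
mostly wait on each other.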
But if you're using an application server and it already uses multiple
connections, this is not going to help you. How many connections are
active at the same time? Are the CPUs idle or utilized?
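If you're not sure about the first question, a quick way to check (a
sketch; on 8.4 pg_stat_activity marks idle backends with '<IDLE>' in
current_query) is something like:

    # Sketch: count backends that are actually running a query right now.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb user=postgres")  # adjust as needed
    cur = conn.cursor()
    cur.execute("SELECT count(*) FROM pg_stat_activity "
                "WHERE current_query <> '<IDLE>'")
    print("active connections: %d" % cur.fetchone()[0])
    cur.close()
    conn.close()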
Tomas