Re: measuring most-expensive queries

You can make the following change in your conf file and reload the
PostgreSQL server:

log_min_duration_statement = 10000

This will log every query that takes more than 10000 milliseconds
(10 seconds) to execute to your PostgreSQL server log file.
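
For example, the change might look roughly like this (a sketch only;
the data directory path is a placeholder and the reload command varies
by installation):

    # in postgresql.conf
    log_min_duration_statement = 10000    # log statements taking >= 10000 ms

    # reload so the new setting takes effect without a restart
    pg_ctl reload -D /path/to/data

You can then confirm the setting from psql with
"SHOW log_min_duration_statement;".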

regards
Gourish Singbal


On 5/2/05, Enrico Weigelt <weigelt@xxxxxxxx> wrote:
> 
> Hi folks,
> 
> I'd like to find out which queries are the most expensive (taking
> very long or producing high load) in a running system, to see what
> requires further optimization. (The application is quite large and
> several other people are involved, so I can't check everything
> manually.)
> 
> Well, the postmaster can log every single statement, but the log
> files are really too much for a human to read through.
> 
> Is there any tool for that?
> 
> thx
> --
> ---------------------------------------------------------------------
> Enrico Weigelt    ==   metux IT service
>  phone:     +49 36207 519931         www:       http://www.metux.de/
>  fax:       +49 36207 519932         email:     contact@xxxxxxxx
> ---------------------------------------------------------------------
>  Realtime Forex/Stock Exchange trading powered by PostgreSQL :))
>                                            http://www.fxignal.net/
> ---------------------------------------------------------------------
> 
> 


-- 
Best,
Gourish Singbal

