On Wed, Jul 1, 2015 at 3:37 AM Lukasz Wrobel <lukasz.wrobel@xxxxxxxxxxxxxxxxxxxxx> wrote:
Hello.

I have multiple problems with my database, the biggest of which is how to find out what is actually wrong.

First of all, I have a 9.3 Postgres database that has been running for about a month. Right now the queries on that database are running very slowly (a select with a simple "where" on a non-indexed column of a table with about 5000 records takes 1.5s, a complicated Hibernate select with 7 joins on tables of about 5000 records takes about 15s, and an insert or update on a table with 35000 records takes up to 20 minutes).

The tables and their indexes are bloated to the point where this query: https://wiki.postgresql.org/wiki/Show_database_bloat shows hundreds of MB of wasted bytes.

For whatever reason there is also no data in the pg_stat* views.
Make sure that your statistics-tracking parameters, track_counts and track_activities, are turned on.
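You can check the current values from SQL, and on 9.3 enable them by editing postgresql.conf and reloading (both settings take effect on a reload, no restart needed):

select name, setting
from pg_settings
where name in ('track_counts', 'track_activities');

-- in postgresql.conf:
--   track_counts = on
--   track_activities = on
-- then reload the configuration:
select pg_reload_conf();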
So, due to the long query times, there are multiple errors in my application logs, like "No free connection available", "Could not synchronize database state with session" or "Failed to rollback transaction", and the application fails to start in the required time.
One of the things you can do is set statement_timeout in the PostgreSQL configuration (but that may actually compound your problems by cancelling long-running queries, of which you seem to have many).
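For example (the 30-second value below is only an illustration; pick a timeout that fits your workload):

-- for the current session only:
set statement_timeout = '30s';

-- or persistently for a single database:
alter database your_database set statement_timeout = '30s';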
The only thing that seems to fix the situation is a vacuum full of the entire database. Regular vacuum doesn't even lower the dead tuple count (dead tuples appear by the thousands while the application is launching).
I am not certain, but this could be because track_counts and track_activities are not set to on. Since you are not tracking them, those statistics are never updated.
Try this:

Vacuum and analyze a table:

vacuum analyze schema_name.table_name;

Then reindex that table:

reindex table schema_name.table_name;
Reindexing all the indexes in the database didn't help either. All autovacuum parameters are at their defaults.
Did you analyze the database tables? Since track_counts is off (I have assumed this based on your statements above), your tables may never have been analyzed, which could be leading to wrong or sub-optimal plans.
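Once tracking is on, you can also check when each table was last vacuumed and analyzed, and how many dead tuples it has accumulated, with something like:

select relname, n_dead_tup, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
from pg_stat_user_tables
order by n_dead_tup desc;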
There doesn't seem to be any issue with disk space, memory or CPU, as none of them is even 50% used (as per df and top).

Is there any good tool that will monitor the queries and generate a report with useful information on what might be the problem? I tried pgBadger, but all I got were specific queries and their times; the long query times are just one of the symptoms of what's wrong with the database, not the cause.

Perhaps I'm missing some indexes on the tables (creating them on the columns used in the where clauses of the long queries seemed to halve their times).
Yes, if you create indexes on the columns used in your where clauses, they will certainly help, depending on the volume of data in the table.
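For example (the table and column names below are placeholders for your own schema):

create index idx_orders_customer_id on orders (customer_id);

-- then verify that the planner actually uses the new index:
explain analyze select * from orders where customer_id = 42;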
Also, how can I monitor my transactions and check whether they are closed properly?
Check the pg_stat_activity view. There is a column for the state of each connection; check whether too many connections are in the 'idle in transaction' state. That state means a connection has started a transaction but has not committed or rolled it back yet.
You can combine the state with the state-change timestamp (state_change) column:
select * from pg_stat_activity
where now() - state_change > '1 min'::interval
and state = 'idle in transaction';
This will list all sessions that have been sitting in an open transaction for more than one minute.
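If you find sessions stuck like this, you can terminate them from SQL as well (a sketch only; pg_terminate_backend() kills the backend and rolls back its open transaction, and the 10-minute threshold is just an example):

select pg_terminate_backend(pid)
from pg_stat_activity
where state = 'idle in transaction'
and now() - state_change > '10 min'::interval;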
You can look at using pgBouncer to effectively manage your sessions and connections.
I will be grateful for any help, and if you need more details I can provide them if possible.

Best regards,
Lukasz