On Thu, Jul 15, 2010 at 1:04 AM, Andrew Bartley <ambartley@xxxxxxxxx> wrote:
> Thanks to all that replied,
>
> I used Joe Conway's suggestion, using grep and an extracted list of tables,
> functions and views from the DB. It worked very well.
>
> I will attach the code I used to this thread once complete.
>
> Again, thanks,
> Andrew Bartley
>
> On 14 July 2010 00:43, Greg Smith <greg@xxxxxxxxxxxxxxx> wrote:
>>
>> Andrew Bartley wrote:
>>>
>>> It seems that the underlying stats tables are reset on a periodic basis.
>>> Can I stop this process? Is it a .conf setting?
>>
>> Up until PostgreSQL 8.2 there's a setting named
>> stats_reset_on_server_start that clears everything when the server stops:
>> http://www.postgresql.org/docs/8.2/static/runtime-config-statistics.html
>>
>> If you're on that version or earlier and it's turned on, there's your
>> problem. This went away in 8.3.
>>
>>> Also, I need to find similar information regarding functions and views...
>>> Any suggestions?
>>
>> Some suggestions already popped up here for functions. Views are tougher
>> because they essentially work like a macro substitution: the content of the
>> view gets substituted into the query where it appears, and off the query
>> planner goes. That's why there are no statistics about them; they don't
>> actually exist as objects that things are executed against. I don't know of
>> any way to track their use other than to log all your queries and look for
>> them popping up. A grep against the application source code for them can be
>> useful too.
>>
>> The flip side to that is that eliminating views doesn't really improve
>> performance, so it's rarely a top priority to get rid of them, unlike
>> unused indexes for example.
>>
>> --
>> Greg Smith  2ndQuadrant US  Baltimore, MD
>> PostgreSQL Training, Services and Support
>> greg@xxxxxxxxxxxxxxx  www.2ndQuadrant.us

After some time looking for a solution to a similar problem, I came up with the following probable solution.
1) Enable logging to CSV format.

2) Log all query, insert, update and delete statements for the given cluster.

3) Run your application for a period sufficient to have captured the execution of all possible functions and the deletes, inserts and updates to the tables and sequences.

4) Create a table in another database that conforms to the field structure of CSV logging.

5) Populate this table with the contents of your CSV file.

6) Now (I think) all the database objects invoked or used in any way (not sure about nested function calls, though) will be listed in this table. So now you may query the appropriate field(s) to find out the "active" database objects.

Since you are interested in weeding out the "inactive" objects, while obviously not losing data or useful database objects, I was thinking of these additional steps:

7) After working hours, stop your application(s) and somehow perform a database dump and restore (to another, brand-new database) of only these "active" database objects (maybe using --table=<activetable1> --table=<activetable..n>). Other database objects appearing in your "active database objects" list may require manual creation in your new database.

8) Rename the original database to another suitable label, and rename the new database to the original name of the original database.

9) Start your application and perform some investigations to see if all is well.

Allan.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
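[Editorial sketch of steps 1, 2 and 4-6 above. This is a rough, untested outline, not Allan's actual code: the postgresql.conf settings are the standard ones for CSV logging, the table layout is the csvlog column list given in the 8.4-era PostgreSQL documentation (it differs between versions, e.g. 9.0 adds application_name), and the file path, table name `postgres_log`, and the searched object name `my_table` are all assumptions for illustration.]

```sql
-- Steps 1-2: in postgresql.conf, then restart/reload
-- (log_statement = 'all' captures every statement, including SELECTs):
--   logging_collector = on
--   log_destination   = 'csvlog'
--   log_statement     = 'all'

-- Step 4: a table matching the CSV log format (8.4-era layout;
-- check the "Error Reporting and Logging" docs for your version).
CREATE TABLE postgres_log
(
  log_time               timestamp(3) with time zone,
  user_name              text,
  database_name          text,
  process_id             integer,
  connection_from        text,
  session_id             text,
  session_line_num       bigint,
  command_tag            text,
  session_start_time     timestamp with time zone,
  virtual_transaction_id text,
  transaction_id         bigint,
  error_severity         text,
  sql_state_code         text,
  message                text,
  detail                 text,
  hint                   text,
  internal_query         text,
  internal_query_pos     integer,
  context                text,
  query                  text,
  query_pos              integer,
  location               text,
  PRIMARY KEY (session_id, session_line_num)
);

-- Step 5: load the rotated log file (path is an example).
COPY postgres_log
  FROM '/var/lib/pgsql/data/pg_log/postgresql.csv' WITH csv;

-- Step 6: look for statements touching a candidate object,
-- e.g. a table named "my_table"; no hits over the whole capture
-- period suggests the object is inactive.
SELECT log_time, user_name, database_name, message
FROM postgres_log
WHERE message ILIKE '%my_table%'
ORDER BY log_time;
```

For step 7, something along the lines of `pg_dump mydb --table=activetable1 --table=activetable2 ...` would dump only the listed tables, though as noted above, functions, views and other non-table objects would need to be recreated separately.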