Vincent Dautremont <vincent@xxxxxxxxxxxxxxxx> writes:
> I think that I'm using the database for pretty basic stuff.
> It's mostly used with stored procedures to update/insert/select a row of
> each table.
> On 3 tables (fewer than 10 rows each), clients do updates/selects at 2 Hz
> to keep pseudo real-time data up to date.
> I've got a total of 6 clients to the DB; they all access the DB using
> stored procedures.
> I would say that this is light usage of the DB.
> Then I have rubyrep 1.2.0 running to sync a backup of the DB.
> It syncs 8 tables: 7 of them don't really change often, and 1 is one of
> the pseudo real-time ones.

This is not much information. What I suspect is happening is that you're using plpgsql functions (or some other PL) in such a way that the system is leaking cached plans for the functions' queries; but there is far from enough evidence here to prove or disprove that, let alone debug the problem if that is a correct guess.

An entirely blue-sky guess as to what your code might be doing to trigger such a problem is that you are constantly replacing the same function's definition via CREATE OR REPLACE FUNCTION. But that could be totally wrong, too.

Can you put together a self-contained test case that triggers similar growth in the server process size?

			regards, tom lane
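
P.S. To give an idea of what a self-contained test case could look like: the script below is an untested sketch (the table and function names are invented) that exercises the CREATE-OR-REPLACE-in-a-loop pattern guessed at above. If the backend's memory footprint climbs steadily while it runs, as reported by top for instance, that would point toward leaked cached plans; if not, the trigger is something else in your code.

-- Purely illustrative names; adjust to taste.
CREATE TABLE status (id int PRIMARY KEY, val int);
INSERT INTO status VALUES (1, 0);

DO $$
BEGIN
  FOR i IN 1..100000 LOOP
    -- Redefining the same function on every iteration is the
    -- suspected trigger for unbounded plan-cache growth.
    EXECUTE $cmd$
      CREATE OR REPLACE FUNCTION bump_status() RETURNS void AS
      $body$ UPDATE status SET val = val + 1 WHERE id = 1 $body$
      LANGUAGE sql
    $cmd$;
    -- Call the freshly replaced function, as your clients would.
    PERFORM bump_status();
  END LOOP;
END
$$;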