On Sat, 2011-07-09 at 10:43 +0200, Gael Le Mignot wrote:
> Hello Guillaume!
>
> Sat, 09 Jul 2011 10:33:03 +0200, you wrote:
>
> > Hi,
> > On Sat, 2011-07-09 at 09:25 +0200, Gael Le Mignot wrote:
> >> [...]
> >> We are running a PostgreSQL 8.4 database, with two tables containing a
> >> lot (> 1 million) of moderately small rows. It contains some btree
> >> indexes, and one of the two tables contains a GIN full-text index.
> >>
> >> We noticed that the autovacuum process tends to use a lot of memory,
> >> bumping the postgres process to nearly 1GB while it's running.
> >>
>
> > Well, it could be its own memory (see maintenance_work_mem), or shared
> > memory. So, it's hard to say if it's really an issue or not.
>
> > BTW, how much memory do you have on this server? What values are used
> > for shared_buffers and maintenance_work_mem?
>
> maintenance_work_mem is at 16MB, shared_buffers at 24MB.

IOW, default values.

> The server currently has 2GB; we'll add more to it (it's a VM), but we
> would like to be able to estimate how much memory it'll need for a given
> rate of INSERTs into the table, so we can estimate future costs.
>
> >> I looked in the documentation, but I didn't find the information: do
> >> you know how to estimate the memory required for the autovacuum if we
> >> increase the number of rows? Is it linear? Logarithmic?
> >>
>
> > It should use up to maintenance_work_mem. Depends on how much memory
> > you set on this parameter.
>
> So, it shouldn't depend on data size?

Nope, it shouldn't.

> Is there a fixed multiplicative factor between maintenance_work_mem and
> the memory actually used?

1 :)

> >> Also, is there a way to reduce that memory usage?
>
> > Reduce maintenance_work_mem. Of course, if you do that, VACUUM could
> > take a lot longer to execute.
>
> >> Would running the autovacuum more frequently lower its memory usage?
> >>
>
> > Yes.
>
> Thanks, we'll try that.

I don't quite understand how you can get up to 1GB used by your process.
According to your configuration, and unless I'm wrong, it shouldn't take
more than 40MB. Perhaps a bit more, but not 1GB. So, how did you find
this number?

--
Guillaume
  http://blog.guillaume.lelarge.info
  http://www.dalibo.com
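
As a minimal sketch of the knobs discussed in this thread, checked from psql
(the table name and the numeric values below are illustrative assumptions,
not settings taken from the original posts):

    -- Check the current values of the settings mentioned above:
    SHOW maintenance_work_mem;
    SHOW shared_buffers;

    -- One way to make autovacuum visit a table more often, using the
    -- per-table storage parameters available since 8.4. "my_big_table"
    -- and the numbers are placeholders, not recommendations:
    ALTER TABLE my_big_table
      SET (autovacuum_vacuum_scale_factor = 0.05,
           autovacuum_vacuum_threshold = 1000);

Lowering the scale factor and threshold only changes how often autovacuum
fires; the per-run memory ceiling is still governed by maintenance_work_mem.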