Re: Database growing. Need autovacuum help.

On 3 Jun 2008, at 15:23, Bill Moran wrote:

In response to Henrik <henke@xxxxxx>:

We are running a couple of 8.3.1 servers and they are growing a lot.

I have the standard autovacuum settings from the 8.3.1 installation
and we are inserting about 2-3 million rows every night and cleaning
out just as many every day.

Is this a batch job?  If so, autovac might not be your best friend
here.  There _are_ still some cases where autovac isn't the best
choice.  If you're doing a big batch job that deletes or updates a
bunch of rows, you'll probably be better off making a manual vacuum
the last step of that batch job.  Remember that you can vacuum
individual tables.
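
For example, the tail end of the cleanup script could be as simple as this
(table and column names here are just placeholders):

    -- the batch cleanup itself
    DELETE FROM your_big_table WHERE stale;
    -- reclaim the dead rows right away instead of waiting for autovac
    VACUUM ANALYZE your_big_table;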

Well, sort of. We have different jobs that usually run at night, filling the database with document information. After that is done, we have maintenance jobs that clean out old versions of those documents. Maybe autovacuum is not for us, at least on this table. I know that it is a specific table that has the most bloat.



The database size rose to 80GB, but after a dump/restore it's only 16GB,
which shows that there was nearly 64GB of bloat in the database.

Does it keep growing beyond 80G? While 64G may seem like a lot of bloat, it
may be what your workload needs as working space. I mean, you _are_
talking about shifting around 2-3 million rows/day.
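
One way to keep an eye on it (both functions exist in 8.3):

    -- overall database footprint
    SELECT pg_size_pretty(pg_database_size(current_database()));

    -- the ten fattest tables, indexes and TOAST included
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY pg_total_relation_size(oid) DESC
    LIMIT 10;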

Crank up the logging.  I believe the autovac on 8.3 can be configured
to log exactly what tables it operates on ... which should help you
determine if it's not configured aggressively enough.
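
On 8.3 that's a single postgresql.conf setting (0 logs every autovacuum
action; raise it later if the log gets too chatty):

    log_autovacuum_min_duration = 0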

I will do that. But I already know which table is the bad boy in this case. :)

If it's just a single table that's bloating, a VACUUM FULL or CLUSTER
of that table alone on a regular schedule might take care of things.
If your data is of a FIFO nature, you could benefit from the old trick
of having two tables and switching between them on a schedule in order
to truncate the one with stale data in it.
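
Roughly, with placeholder names (8.3 supports the CLUSTER ... USING form):

    -- rewrites the table in index order, leaving the bloat behind
    CLUSTER bloated_table USING bloated_table_pkey;

and the two-table switch looks something like:

    BEGIN;
    ALTER TABLE live_data RENAME TO stale_data;
    ALTER TABLE spare_data RENAME TO live_data;
    COMMIT;
    TRUNCATE stale_data;  -- instant, and gives the space straight back
    ALTER TABLE stale_data RENAME TO spare_data;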

It is somewhat FIFO but I can't guarantee it...

I will look at CLUSTER and see.

Maybe the design is flawed :) To put it simply, we have a document storage system and the three major tables are tbl_folder, tbl_file, and the many-to-many table tbl_file_folder.

In tbl_file we only have unique documents.
But a file can be stored in many folders and a folder can have many files, so we have tbl_file_folder with fk_file_id and fk_folder_id.
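
Simplified (the real tables have more columns; only the FK names below are
the actual ones), the layout is something like:

    CREATE TABLE tbl_folder (
        pk_folder_id serial PRIMARY KEY,
        folder_name  text
    );

    CREATE TABLE tbl_file (
        pk_file_id serial PRIMARY KEY,
        file_name  text
    );

    CREATE TABLE tbl_file_folder (
        fk_file_id   integer REFERENCES tbl_file (pk_file_id),
        fk_folder_id integer REFERENCES tbl_folder (pk_folder_id)
    );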

To be able to handle versions we always insert new folders even though nothing has changed, but it seemed like the best way to do it.

E.g.:

First run:
	tbl_file 500k new files.
	tbl_folder 50k new rows.
	tbl_file_folder 550k new rows.

Second run with no new files.
	tbl_file unchanged.
	tbl_folder 50k new rows.
	tbl_file_folder 550k new rows.


The beauty of this is that it is very efficient to retrieve the exact file/folder structure at a given point in time, but the drawback is that there is a lot of overhead in the database.
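
For example, fetching the structure as it looked for a given run is a single
join (simplified; the version/run column on tbl_folder is sketched here as
run_date):

    SELECT fo.folder_name, fi.file_name
    FROM tbl_folder fo
    JOIN tbl_file_folder ff ON ff.fk_folder_id = fo.pk_folder_id
    JOIN tbl_file fi ON fi.pk_file_id = ff.fk_file_id
    WHERE fo.run_date = '2008-06-01';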

Maybe someone has some cool new idea about this. :)


Thanks Bill!

Cheers,
henke

Hope some of these ideas help.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

wmoran@xxxxxxxxxxxxxxxxxxxxxxx
Phone: 412-422-3463x4023

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


