> -----Original Message-----
> From: pgsql-general-owner@xxxxxxxxxxxxxx [mailto:pgsql-general-
> owner@xxxxxxxxxxxxxx] On Behalf Of elliott
> Sent: Monday, August 20, 2012 1:54 PM
> To: pgsql-general@xxxxxxxxxxxxxx
> Subject: Database Bloat
>
> Hi,
>
> I am using PostgreSQL 9.1 and loading very large tables (13 million rows
> each). The flat file size is only 25M. However, the equivalent database
> table is 548MB. This is without any indexes applied and with autovacuum
> turned on. I have read that tables can be around 5 times larger than the
> equivalent flat files, so over 20 times seems quite excessive.
>
> Any ideas on how to go about decreasing this bloat, or is this not
> unexpected for such large tables?
>
> Thanks

Kinda guessing here, but that 5x estimate has some assumptions built in. I would guess that a table consisting mostly of large plain-text fields stays closer to its flat-file size than one holding a mixture of numbers, varchars, dates, and other narrow datatypes, because every row also carries fixed overhead (a tuple header of roughly two dozen bytes plus an item pointer and alignment padding). At 13 million rows that overhead alone runs to a few hundred megabytes, which would go a long way toward explaining 25M of data becoming 548MB. It would help if you provided a general description of the structure of said tables and how large individual fields (if bytea or text) tend to be.

I would think that filesystem parameters come into play as well, and you do not specify the OS on which you are running.

Do you have any idea which specific tables are "bloated"? If there are only a few main contributors, what is different about them?

More questions than answers, but something to ponder while you wait for more knowledgeable persons to respond.

David J.

--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
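
P.S. A quick way to see which tables are the main contributors is to query the catalogs. This is only a sketch using the standard size functions (`pg_relation_size`, `pg_total_relation_size`, both available in 9.1); the schema filter is an assumption you may want to adjust:

```sql
-- List the ten largest ordinary tables, showing heap size,
-- total size (including TOAST and any indexes), and the
-- planner's approximate row count.
SELECT c.relname,
       pg_size_pretty(pg_relation_size(c.oid))       AS table_size,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size,
       c.reltuples::bigint                           AS approx_rows
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;
```

Comparing `table_size` against `total_size` for the worst offenders would also tell you whether the space is in the heap itself or in TOAST data.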