Re: size of indexes and tables (more than 1GB)

Hi Chris,

> If you're running VACUUM often enough, then there's nothing wrong, and
> nothing to be done.  You're simply observing how PostgreSQL handles
> large tables.

Wrong. I have a big table, and running VACUUM on it the first time takes as long as running it again right after the first VACUUM has finished. There are other problems with VACUUM that are fixed in 8.1. In 8.1 you have a server-internal autovacuum daemon; setting it up correctly might be the solution.

My table has about 40 GB of data with about 120 million tuples, and the max_fsm settings etc. are correct.
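
For reference, a minimal sketch of the postgresql.conf settings involved, assuming 8.1 (the values are illustrative, not tuned for a 40 GB table):

    # 8.1's autovacuum needs the row-level statistics collector
    stats_start_collector = on
    stats_row_level = on
    autovacuum = on
    autovacuum_naptime = 60               # seconds between checks per database
    autovacuum_vacuum_threshold = 1000
    autovacuum_vacuum_scale_factor = 0.2  # vacuum after roughly 20% of rows change

    # the free space map must be able to track all reusable pages
    max_fsm_pages = 2000000
    max_fsm_relations = 1000

If max_fsm_pages is too small for the amount of dead space, plain VACUUM can never keep up and the table keeps growing.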

I created test databases with about 10-20 million tuples, and VACUUM runs fast there - but not when you make many changes and your tables are much bigger.

Chris Browne wrote:
jafn82@xxxxxxxxx (jose fuenmayor) writes:

I have read and seen that when a table has more than 1 GB it is divided into several files named inode, inode.1, inode.2, inode.3, etc.

I have a table of 1.3 GB (9,618,118 rows, 13 fields) and it is divided that way, as I see under /PGDATA/base, but each file has the same size, i.e. inode (1.3 GB), inode.1 (1.3 GB), inode.2 (1.3 GB). So is this not a waste of space? Is that file space reusable by PostgreSQL?

Isn't the table then three times bigger than, for instance, a Visual FoxPro .dbf, since the data is physically there three times?


Having "file", "file.1", "file.2", and such is routine; that is the
normal handling of tables that grow beyond 1GB in size.  If there is
actually 3GB of data to store in the table, then there is nothing to
be 'fixed' about this.  There is no duplication of data; each of those
files contains distinct sets of tuples.
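
As a sketch of how to check this (the table name is hypothetical), you can map a table to its on-disk segment files; they live under $PGDATA/base/<database oid>/<relfilenode>, with .1, .2, ... suffixes for each additional 1GB segment:

    -- directory name under $PGDATA/base
    SELECT oid FROM pg_database WHERE datname = current_database();

    -- base file name of the table's segments
    SELECT relfilenode FROM pg_class WHERE relname = 'mytable';

    -- in 8.1 and later, total on-disk size of the table across all segments
    SELECT pg_relation_size('mytable');

If pg_relation_size() reports roughly what you expect for the row count and row width, the extra segments are real data, not duplication.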

First question...

Are you vacuuming the table frequently to reclaim dead space?

If that table is heavily updated (e.g. - via DELETE/UPDATE; mere
INSERTs do NOT represent "updates" in this context), then maybe
there's a lot of dead space, and running VACUUM would cut down on the
size.
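
As a quick check (table name hypothetical), VACUUM VERBOSE reports how many dead row versions it found and removed:

    VACUUM VERBOSE mytable;
    -- plain VACUUM only marks dead space as reusable; it rarely shrinks the
    -- files on disk.  To actually give space back to the operating system you
    -- need VACUUM FULL or CLUSTER, both of which take an exclusive lock on
    -- the table.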

If you're running VACUUM often enough, then there's nothing wrong, and
nothing to be done.  You're simply observing how PostgreSQL handles
large tables.

