Re: Big difference in database size compared with disk size

Since I got no response, I will try again and add some more information.

The problem is still the same: for reasons unknown, many of my databases have filenodes whose file size reported by "ls" differs from the disk usage reported by "du". To give a bit more background: I have around 300 PostgreSQL 9.0 databases. The affected databases were all replicated via streaming replication onto new hardware, and they all run inside a VM. The new hardware is the same as the old, but the kernel version differs.
Old: 2.6.39, new: 3.1.4

One Example:
###
/var/lib/postgresql/9.0/main/base/43169# ls -lh 64121
-rw------- 1 postgres postgres 58M 2012-02-16 17:03 64121

/var/lib/postgresql/9.0/main/base/43169# du -sh 64121
89M    64121
###
So this file "64121" shows a difference of 31 MB between apparent size and disk usage.
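
In case it helps, this is roughly how such filenodes can be found across a whole database directory. It is only a quick sketch; it assumes GNU stat and the standard 9.0 directory layout from the example above:

###
# list filenodes whose allocated space (what "du" counts) exceeds their
# apparent size (what "ls" shows); st_blocks is reported in 512-byte units
cd /var/lib/postgresql/9.0/main/base/43169
for f in [0-9]*; do
    read size blocks <<< "$(stat -c '%s %b' "$f")"
    alloc=$((blocks * 512))
    if [ "$alloc" -gt "$size" ]; then
        echo "$f: apparent=$size allocated=$alloc diff=$((alloc - size))"
    fi
done
###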

To get some more information about this file, I played around a bit with the XFS tools.

###
/var/lib/postgresql/9.0/main/base/43169# xfs_bmap  64121
64121:
    0: [0..116991]: 17328672..17445663

/var/lib/postgresql/9.0/main/base/43169# xfs_fsr -v 64121
64121
64121 already fully defragmented.

/var/lib/postgresql/9.0/main/base/43169# xfs_info /dev/xvda1
meta-data=/dev/root              isize=256    agcount=4, agsize=959932 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=3839727, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

/var/lib/postgresql/9.0/main/base/43169# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / xfs rw,noatime,nodiratime,attr2,delaylog,nobarrier,noquota 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,relatime,mode=755 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,relatime 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
###

Strange, isn't it? According to this information the file is contiguous on disk and has no fragmentation at all, so why does it show so much disk usage?
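
If it helps, another way to look at the extents and their flags would be filefrag from e2fsprogs (just a suggestion, not something I have output for here; it uses the FIEMAP ioctl, so it should work on XFS as well):

###
# extent-level view of the file, including per-extent flags such as
# "unwritten", which would indicate preallocated but unused space
filefrag -v 64121
###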

The relation this filenode belongs to is an index, and from my last overview it seems that roughly 95% of the affected filenodes are indexes/primary keys.
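
For reference, the mapping from filenode to relation can be checked via pg_class, roughly like this (a sketch; 43169 and 64121 are just the example OIDs from above, and "dbname" is a placeholder):

###
# which database does directory 43169 belong to?
psql -Atc "SELECT datname FROM pg_database WHERE oid = 43169;"

# which relation owns filenode 64121? (run inside that database; relkind 'i' = index)
DB=dbname   # placeholder: the database name returned above
psql -d "$DB" -c "SELECT relname, relkind FROM pg_class WHERE relfilenode = 64121;"
###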

You might suspect some strange config settings, but we distribute the configuration via Puppet, and the servers on the old hardware run the same config, so things like fillfactor cannot explain this.
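
Just to rule the fillfactor idea out explicitly, a quick way to list every relation that has per-relation storage options set (sketch; "dbname" is again a placeholder):

###
# relations with non-default storage options such as fillfactor
DB=dbname   # placeholder
psql -d "$DB" -c "SELECT relname, relkind, reloptions
                    FROM pg_class
                   WHERE reloptions IS NOT NULL;"
###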

We also thought there might still be some open file handles, so we decided to reboot. At first we thought we had it: the free disk space slowly increased for a while. But after reclaiming 1-2 GB it went back to normal and the filenodes started growing again, so that doesn't explain it either. :/
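
For anyone who wants to check that theory without a reboot: lsof's +L1 option lists open files whose link count is below 1, i.e. files that were deleted but are still held open (sketch):

###
# deleted-but-still-open files held by postgres processes
lsof +L1 | grep -i postgres
###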

One more thing: running xfs_fsr /dev/xvda1 also reclaims some disk space, but with the same effect as a reboot.


For now, the only thing that is really different is the kernel version. Update: I have now tried rebooting the machines with the 2.6.39 kernel, but I get the same behavior as with 3.1.4.

If you have _any_ idea what could cause this behavior, please tell me. I'm running out of ideas.
Thanks a lot.

regards
Bernhard


On 02/14/2012 02:22 PM, Bernhard Schrader wrote:
Hi all,

Problem:
If I use
SELECT pg_size_pretty(pg_database_size('dbname')) As fulldbsize;

fulldbsize
------------
6455 MB
(1 row)

If I compare this output with the on-disk size of
/var/lib/postgresql/9.0/main/base/[DBFOLDER]
there is a difference of 3 GB, which I don't think is normal.

To make it a little clearer what I mean:

/var/lib/postgresql/9.0/main/base/43169# du --apparent-size -h
6.4G .

/var/lib/postgresql/9.0/main/base/43169# du -sh
9.4G .

Is such heavy fragmentation normal behavior? The database is split across several
filenodes.
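
One way to narrow down where the space goes would be to compare what PostgreSQL itself reports per relation with "du" of the corresponding filenodes, e.g. (a sketch; "dbname" is a placeholder):

###
# the ten largest relations as reported by PostgreSQL itself,
# to compare against "du" of their filenodes
DB=dbname   # placeholder
psql -d "$DB" -c "SELECT relname, relkind,
                         pg_size_pretty(pg_total_relation_size(oid)) AS total_size
                    FROM pg_class
                   WHERE relkind IN ('r','i')
                   ORDER BY pg_total_relation_size(oid) DESC
                   LIMIT 10;"
###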

I also ran "xfs_fsr" to defragment all of this and got around 1 GB back
(it would have been more if the tables had not been written to at the time),
but within a few hours the fragmentation was back.

There is also no large dead tuple count in this database; it gets a
VACUUM ANALYZE every night.
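
For completeness, the dead tuple counts can be checked via the statistics views, e.g. (sketch; "dbname" is a placeholder):

###
# tables with the most dead tuples according to pg_stat_user_tables
DB=dbname   # placeholder
psql -d "$DB" -c "SELECT relname, n_live_tup, n_dead_tup
                    FROM pg_stat_user_tables
                   ORDER BY n_dead_tup DESC
                   LIMIT 10;"
###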

Right now I am out of ideas... What do you think I could do?
If you need more information, I can provide it.

Environment:
- OS: Debian Lenny
- Disksize: 15GB
- Filesystem: XFS
- PostgreSQL: 9.0.3

Thanks in advance.

regards
Bernhard




