Hi,
I've got a database containing about 155GB of binary data, but when I run the Unix utility df it reports only 60GB of disk space in use. I've extracted random samples of data from the database and they all appear correct, so I presume it isn't corrupt. Can anyone tell me whether some sort of disk compression is happening with large objects?
Has anyone else seen this behaviour?
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup01-ATV_PGSQL
480678616 59285468 396976076 13% /var/lib/pgsql
This is PostgreSQL version 8.1.4 on a slightly modified version of RHEL 4 (2.6.17 kernel).
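As a sanity check on whether compression alone could plausibly account for a 155GB vs. 60GB gap (roughly 2.5x), I tried a quick zlib experiment. The payload below is an invented stand-in, not data from my database, but it shows that repetitive binary data compresses well past that ratio:

```python
import zlib

# Hypothetical stand-in for repetitive binary payloads; purely
# illustrative, not taken from the actual database contents.
data = b"frame-header:" + bytes(range(256)) * 4096  # ~1 MB

compressed = zlib.compress(data)
ratio = len(data) / len(compressed)
print(f"{len(data)} -> {len(compressed)} bytes ({ratio:.1f}x smaller)")
```

So if the server is transparently compressing large values, a gap of this size wouldn't be surprising on its own.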
Thanks,
Geoff.