Hello:
I created a table and found that the file created for it is about 10 times the size I estimated!
The following is what I did:
postgres=# create table tst01(id integer);
CREATE TABLE
postgres=#
postgres=# select oid from pg_class where relname='tst01';
oid
-------
16384
(1 row)
Now I can see the file:
[root@lex base]# ls ./12788/16384
./12788/16384
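I understand the same path can also be found from SQL, without digging through pg_class, with pg_relation_filepath():

select pg_relation_filepath('tst01');   -- should show base/12788/16384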
I heard that an integer uses 4 bytes,
so I thought that 2048 records with a single integer column
would use a little more than 8KB (2048 records * 4 bytes per integer, plus headers).
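Spelling out that arithmetic:

select 2048 * 4 as data_bytes;   -- 8192 bytes, i.e. exactly one 8KB page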
But in fact they use much more.
After I ran this:
postgres=# insert into tst01 values(generate_series(1,2048));
INSERT 0 2048
postgres=#
I found that the file 16384 is now 80KB!
[root@lex base]# ls -lrt ./12788/16384
-rw------- 1 postgres postgres 81920 May 28 11:54 ./12788/16384
[root@lex base]# ls -lrt -kb ./12788/16384
-rw------- 1 postgres postgres 80 May 28 11:54 ./12788/16384
[root@lex base]#
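If I understand the docs correctly, the same number can be read from inside the database with pg_relation_size(), which reports the size of the table's main fork in bytes:

select pg_relation_size('tst01');   -- should match the 81920 bytes from ls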
Then I tried again and inserted another 2048 records:
postgres=# insert into tst01 values(generate_series(2049,4096));
INSERT 0 2048
postgres=#
I found that the file is now 152KB!
[root@lex base]# ls -lrt -kb ./12788/16384
-rw------- 1 postgres postgres 152 May 28 11:56 ./12788/16384
[root@lex base]#
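I have also read that the contrib extension pgstattuple can break a table down into tuple data versus overhead, which might help here, assuming the extension is available:

create extension pgstattuple;
select table_len, tuple_count, tuple_len, free_space
from pgstattuple('tst01');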
Before this, I had thought that headers and other structures would use only a little space,
but what I found is about 10 times the space I estimated.
So, is there a method to correctly estimate the disk space a table will need,
given the table's column data types and an estimated number of records?
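My best guess at such a formula so far is sketched below. The constants are my assumptions from reading about the page layout (8KB pages with a 24-byte page header, a 4-byte line pointer per row, and a 23-byte tuple header padded to 24 bytes, with each tuple rounded up to a multiple of 8), so please correct me if any of them are wrong:

with params as (
  select 8192 as page_size,      -- default block size
         24   as page_header,    -- page header, as I understand it
         4    as line_pointer,   -- per-row item pointer
         24   as tuple_header,   -- 23-byte tuple header padded to 24
         4    as data_bytes      -- one integer column
), per_page as (
  select floor((page_size - page_header)::numeric
               / (line_pointer + ceil((tuple_header + data_bytes) / 8.0) * 8))
           as rows_per_page
  from params
)
select rows_per_page,                             -- 226 by this reckoning
       ceil(4096 / rows_per_page)     as pages,   -- 19 pages
       ceil(4096 / rows_per_page) * 8 as size_kb  -- 152KB
from per_page;

By this reckoning 2048 rows need 10 pages (80KB) and 4096 rows need 19 pages (152KB), which matches what I saw, but I would like to confirm whether this is the right way to do the estimate.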