I created a table in 8.1.5 on Linux with three columns:
a date, a bigint and an integer.
Then I populated the table with more than 2 million rows.
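The table looks something like this (I've simplified the table and
column names here, but the types are the ones I'm using):

    CREATE TABLE measurements (
        obs_date  date,     -- about 4 bytes of data
        big_val   bigint,   -- 8 bytes of data
        small_val integer   -- 4 bytes of data
    );

    -- populated with something along these lines
    INSERT INTO measurements
    SELECT current_date, g, g
    FROM generate_series(1, 2000000) AS g;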
I looked at the size of the file that contains the table and divided
it by the number of rows, which gave an average of just over 60 bytes
per row.
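I believe something like the following would give roughly the same
figure from inside the database, assuming pg_relation_size() is
available in 8.1:

    SELECT pg_relation_size('measurements') / count(*) AS bytes_per_row
    FROM measurements;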
This seems to be quite a large overhead, as I would guess that the date
field takes around 4 bytes, the bigint 8 bytes and the integer 4 bytes,
so the column data alone should only come to about 16 bytes. I would
have expected an average of between 20 and 30 bytes per row. Is this
normal, and is there any way of improving it? I'm hoping to have around
80 million rows in a table without it taking up too much disk space or
too much memory to cache.
Regards
Joe