Craig Ringer wrote:
>> p.s.: I am not so much concerned with the size it would take on the
>> data tables as with the fact that select/insert performance shouldn't
>> suffer. However, if that increase in size (per data record) may make a
>> considerable impact on performance, I would certainly want to take
>> that into account during the design phase.
>
> I suspect it's just another tradeoff - table size increase (and thus
> scan performance cost) from storing the text vs avoiding the need to
> access the lookup table for most operations.
Size can affect performance as much as anything else. For your small
lookup table it will make little difference, but the larger table with
millions of rows will store this key in every row and index it as the
foreign key.
The real question is how you want to use the column: if you wish to
query for rows of a certain currency, then you will notice the difference.
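To make the comparison concrete - a rough sketch, with made-up table
and column names:

  -- Design A: store the code in-line (wider rows, no join needed)
  CREATE TABLE transactions_a (
      id       bigserial PRIMARY KEY,
      amount   numeric(12,2) NOT NULL,
      currency varchar(10) NOT NULL       -- e.g. 'USD'
  );

  -- Design B: a 2-byte surrogate key into a lookup table
  CREATE TABLE currencies (
      currency_id smallint PRIMARY KEY,
      code        varchar(10) NOT NULL UNIQUE
  );
  CREATE TABLE transactions_b (
      id          bigserial PRIMARY KEY,
      amount      numeric(12,2) NOT NULL,
      currency_id smallint NOT NULL REFERENCES currencies
  );

  -- Querying for a given currency in design B goes through the
  -- narrower smallint index:
  CREATE INDEX transactions_b_currency_idx
      ON transactions_b (currency_id);
  SELECT t.*
  FROM transactions_b t
  JOIN currencies c ON c.currency_id = t.currency_id
  WHERE c.code = 'USD';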
You could use a smallint of 2 bytes (or even a varchar(1) holding a
one-character code) or an integer of 4 bytes, compared to your
varchar(10).
Data size for the column could be less than half the size of the
varchar(10), so there will be fewer disk reads (the biggest slowdown)
and smaller indexes, which increases the chance of caching.
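You can check the per-value sizes yourself with pg_column_size() - the
varchar figure will be the string bytes plus a small length header:

  SELECT pg_column_size('USD'::varchar(10)) AS varchar_bytes, -- chars + header
         pg_column_size(1::smallint)        AS smallint_bytes, -- 2
         pg_column_size(1::integer)         AS integer_bytes;  -- 4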
Ignoring storage overheads, each million rows will carry
10*1000000 = 10M bytes for that column, compared to
4*1000000 = 4M bytes - so the chance of caching and the time to read
off disk come into play every time you reference that column.
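Once you have loaded some test data you can compare the actual on-disk
sizes (again, made-up table names) rather than relying on the
back-of-envelope figures:

  SELECT relname,
         pg_size_pretty(pg_relation_size(oid)) AS table_size
  FROM pg_class
  WHERE relname IN ('transactions_a', 'transactions_b');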
--
Shane Ambler
pgSQL (at) Sheeky (dot) Biz
Get Sheeky @ http://Sheeky.Biz