> I have a 400GB joining table (one SMALLINT and the other INTEGER -
> Primary Keys on other tables) with 1000 fields on one side and 10M on
> the other, so 10,000M (or 10Bn) records all told.
My questions:
Do you have any EXPLAIN (ANALYZE, BUFFERS) <query> results from the
existing setup? Do they look problematic?
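
For example, something like the following (join_table, small_id and
big_id are hypothetical names standing in for your actual schema):

    -- Capture a full execution plan with buffer statistics
    -- for a representative query against the existing table.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT jt.big_id
    FROM   join_table jt
    WHERE  jt.small_id = 42;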
How does the table grow on either side of the join? Is it append-only,
largely static, frequently updated, or is old data dropped periodically?
If whole chunks are dropped rather than deleted row by row, delete-based
bloat can be avoided completely.
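
If you are not sure whether bloat is already an issue, a quick check
against pg_stat_user_tables can tell you (join_table again being a
hypothetical name):

    -- Dead tuples accumulate from UPDATEs/DELETEs and indicate
    -- bloat that autovacuum has not yet reclaimed.
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM   pg_stat_user_tables
    WHERE  relname = 'join_table';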
How is the data distributed across the SMALLINT keys: evenly, or
heavily skewed?
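
A rough way to check the skew (hypothetical names again; note that a
full scan of a 10Bn-row table is expensive, and the planner statistics
in pg_stats.most_common_vals/most_common_freqs are a cheaper
approximation):

    -- Row counts per SMALLINT key; a wide spread between the top
    -- and bottom counts indicates a skewed distribution.
    SELECT small_id, count(*) AS rows_per_key
    FROM   join_table
    GROUP  BY small_id
    ORDER  BY rows_per_key DESC
    LIMIT  20;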
What kind of queries will be run, and what do they return? OLTP-like or
OLAP-like? Quick queries returning a few rows, or heavy queries
returning many rows?
Partitioning has been improving with every release, so the best option,
if possible, would be to use the latest PostgreSQL version.
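
If the SMALLINT key turns out to be a reasonable partitioning key, a
declarative setup might look roughly like this. This is only a sketch:
the names and the partition count are assumptions, not a recommendation
for your workload, and LIST partitioning may fit better if queries
target specific key values:

    -- Hash-partition the join table on the SMALLINT key so each
    -- partition holds a manageable slice of the 10Bn rows.
    CREATE TABLE join_table (
        small_id smallint NOT NULL,
        big_id   integer  NOT NULL,
        PRIMARY KEY (small_id, big_id)  -- must include the partition key
    ) PARTITION BY HASH (small_id);

    CREATE TABLE join_table_p0 PARTITION OF join_table
        FOR VALUES WITH (MODULUS 8, REMAINDER 0);
    -- ...repeat for remainders 1 through 7.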
Also, is there any scope for normalising that table? I know it is
theoretically possible, but I have not seen a design with a table that
wide (1000 cols), so it would be good to know.
Just asking; partitioning may well be the best option, but it would be
worth measuring the benefit before and after partitioning.
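
One simple way to compare, assuming you can stage a partitioned copy
alongside the original (table names hypothetical):

    -- In psql, time the same representative query against both.
    \timing on
    SELECT count(*) FROM join_table      WHERE small_id = 42;  -- original
    SELECT count(*) FROM join_table_part WHERE small_id = 42;  -- partitioned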
Thanks,
Vijay
Mumbai, India