Partitioning will depend a lot on how you access the data. Is it always filtering on that transaction_date column in the WHERE clause? Can you share the table definition via \d? We've talked about this table quite a bit, but have not actually seen what it looks like, so we are flying a little blind. You mentioned your queries are slow, but can you share an EXPLAIN ANALYZE of one of those slow queries?
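Concretely, something along these lines is what I'm after (the table name and WHERE clause here are made up, substitute your real table and one of your actual slow queries):

-- in psql, this shows the columns, indexes, and constraints:
\d public.transactions

-- and for a slow query, the full plan plus buffer usage:
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*)                -- your real slow query goes here
FROM public.transactions
WHERE transaction_date >= '2024-01-01'
  AND transaction_date <  '2024-02-01';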
45 billion rows is quite manageable. How many rows are in each day? You may want to do a larger bucket than 24 hours per partition.
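As a rough sketch of what that looks like, here is declarative range partitioning with monthly buckets. All names and columns are invented for illustration; adjust to your actual schema:

-- hypothetical parent table, range partitioned on the column you filter by
CREATE TABLE transactions (
    transaction_id   bigint  NOT NULL,
    account_id       bigint  NOT NULL,
    transaction_date date    NOT NULL,
    amount           numeric,
    -- on a partitioned table, the primary key must include the partition key
    PRIMARY KEY (transaction_id, transaction_date)
) PARTITION BY RANGE (transaction_date);

-- one partition per month instead of one per day
CREATE TABLE transactions_2024_01 PARTITION OF transactions
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE transactions_2024_02 PARTITION OF transactions
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');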
1) Even after partitioning the target table, to speed up the data load, is there an option to disable the primary and foreign keys and re-enable them after the data load finishes? Would that be a good idea?
No. Postgres has no switch to disable a primary key or foreign key constraint; you would have to drop the constraints and re-create them after the load, and rebuilding them across 45 billion rows is its own expensive operation.
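For what it's worth, the closest thing looks like this (constraint names are made up, and I am not recommending it): note that this is a real drop and rebuild, not a cheap toggle.

-- dropping before the load really removes the constraints
ALTER TABLE transactions DROP CONSTRAINT transactions_account_id_fkey;
ALTER TABLE transactions DROP CONSTRAINT transactions_pkey;

-- ... bulk load here ...

-- re-adding them rebuilds the unique index and re-checks every foreign key row
ALTER TABLE transactions ADD PRIMARY KEY (transaction_id, transaction_date);
ALTER TABLE transactions
    ADD CONSTRAINT transactions_account_id_fkey
    FOREIGN KEY (account_id) REFERENCES accounts (id);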
3) As the size of the table and of each partition is going to be very large, and this will be a read-intensive application, compressing the historical partitions would help us save storage space and would also help read query performance.
I am not sure what this means. If you are not going to need the data anymore, dump the data to deep storage and drop the partition.
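With one table per partition that is straightforward. Roughly (partition, archive file, and database names are made up):

# from the shell: dump just that one old partition; custom format is compressed
pg_dump --format=custom --table=public.transactions_2023_01 \
        --file=transactions_2023_01.dump mydb

-- then in psql: detach it from the parent and drop it
ALTER TABLE transactions DETACH PARTITION transactions_2023_01;
DROP TABLE transactions_2023_01;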
Cheers,
Greg