On 8/5/19 7:31 AM, Kenneth Marshall wrote:
On Mon, Aug 05, 2019 at 12:00:14PM +0530, Shital A wrote:
Hello,
We need inputs on the following:
We are working on setting up a new, highly transactional (100k TPS) OLTP
system for payments using blockchain and PostgreSQL 9.6 as the DB on RHEL 7.6.
The Postgres version is 9.6 rather than the latest because of the specs of
the blockchain component.
There is a requirement for data compression at the DB level. Please provide
your inputs on how this can best be achieved.
We checked the built-in TOAST mechanism: it compresses a value only when the
row exceeds the ~2 KB TOAST threshold (the page size itself is 8 KB). If the
data values are small, then even with a billion records nothing will be
compressed; is this understanding correct?
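For example, a minimal sketch of what we mean (toast_demo is a hypothetical
table; pglz is the built-in compressor in 9.6):

CREATE TABLE toast_demo (payload text);
-- Small value: stored inline in the 8 KB page, never compressed.
INSERT INTO toast_demo VALUES (repeat('x', 100));
-- Wide value: crosses the ~2 KB threshold, so it is compressed (and moved
-- out of line into the TOAST table if it is still too large afterwards).
INSERT INTO toast_demo VALUES (repeat('x', 100000));
-- pg_column_size() reports the on-disk size of each value; the second row
-- comes back far smaller than 100000 bytes because pglz compressed it.
SELECT pg_column_size(payload) FROM toast_demo;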
Are there any suggestions for transparently compressing older data,
irrespective of row size?
Thanks.
Hi,
On RHEL/CentOS you can use VDO (Virtual Data Optimizer) block-level
compression to back an archive tablespace for older data. That will compress
everything stored in it, regardless of row size.
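As a rough sketch, assuming a VDO volume is already mounted at
/vdo/pg_archive (a hypothetical path) and owned by the postgres OS user:

CREATE TABLESPACE archive LOCATION '/vdo/pg_archive';
-- Anything created in (or moved into) this tablespace is compressed at
-- the block level by VDO, independently of PostgreSQL's TOAST logic.

Tables and indexes can then be created there directly, or relocated later
with ALTER TABLE ... SET TABLESPACE.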
Doesn't this imply that either his table is partitioned or he regularly
moves records from the main table to the archive table?
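E.g., something along these lines. On 9.6 declarative partitioning is not
yet available (it arrived in PostgreSQL 10), so this uses inheritance
partitioning; all names here are hypothetical:

CREATE TABLE payments (
    id         bigserial,
    created_at timestamptz NOT NULL,
    amount     numeric
);
-- One child table per month, attached via inheritance with a CHECK
-- constraint so constraint_exclusion can prune it from queries.
CREATE TABLE payments_2019_07 (
    CHECK (created_at >= '2019-07-01' AND created_at < '2019-08-01')
) INHERITS (payments);
-- Once the month has gone cold, rewrite the child table onto the
-- VDO-backed tablespace so its blocks get compressed:
ALTER TABLE payments_2019_07 SET TABLESPACE archive;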
--
Angular momentum makes the world go 'round.