Hi Matt,

On Thu, Jul 18, 2019 at 09:44:04AM -0400, Matthew Pounsett wrote:
> I've recently inherited a database that is dangerously close to outgrowing
> the available storage on its existing hardware. I'm looking for (pointers
> to) advice on scaling the storage in a financially constrained
> not-for-profit.

Have you considered using VDO compression for the tables that are less
update-intensive? With compression alone you can get almost a 4X size
reduction; for a database, I would forgo the deduplication function. You can
then use a non-compressed tablespace for the heavier-I/O tables and indexes.
There's a rough sketch of what that setup might look like below my sig.

> One of my anticipated requirements for any replacement we design is that I
> should be able to do upgrades of Postgres for up to five years without
> needing major upgrades to the hardware. My understanding of the standard
> upgrade process is that this requires that the data directory be smaller
> than the free storage (so that there is room to hold two copies of the data
> directory simultaneously). I haven't got detailed growth statistics yet,
> but given that the DB has grown to 23TB in 5 years, I should assume that it
> could double in the next five years, requiring 100TB of available storage
> to be able to do updates.

The --link option to pg_upgrade does not require 2X the space, since it
creates hard links in the new cluster instead of copying the files over (see
the second sketch below).

Regards,
Ken
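
Here's the VDO sketch I mentioned. It's only a rough outline, assuming the
standalone `vdo` manager tool on a RHEL/CentOS-style box; the device name,
mount point, tablespace, database, and table names are all placeholders
you'd swap for your own:

    # Create a VDO volume with compression on and dedup off.
    # (/dev/sdb and the names below are placeholders.)
    vdo create --name=pg_vdo --device=/dev/sdb \
        --compression=enabled --deduplication=disabled

    # -K skips the discard pass so mkfs doesn't crawl on thin storage.
    mkfs.xfs -K /dev/mapper/pg_vdo
    mkdir -p /var/lib/pgsql/vdo_ts
    mount /dev/mapper/pg_vdo /var/lib/pgsql/vdo_ts
    chown postgres:postgres /var/lib/pgsql/vdo_ts

    # Point a tablespace at the compressed volume and move the
    # low-churn tables onto it (names here are hypothetical).
    psql -c "CREATE TABLESPACE compressed_ts LOCATION '/var/lib/pgsql/vdo_ts'"
    psql -d yourdb -c "ALTER TABLE archive_data SET TABLESPACE compressed_ts"

The hot, write-heavy tables and indexes just stay where they are, on the
default (uncompressed) tablespace.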
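And the pg_upgrade sketch. The version numbers and paths are just examples
(10 -> 11 here); adjust them for whatever you're actually running. Running
with --check first does a dry run without touching anything:

    # Dry run first; --link makes hard links in the new data directory
    # instead of copies, so no second full copy of the data is needed.
    pg_upgrade --check \
        --old-bindir=/usr/pgsql-10/bin \
        --new-bindir=/usr/pgsql-11/bin \
        --old-datadir=/var/lib/pgsql/10/data \
        --new-datadir=/var/lib/pgsql/11/data \
        --link

    # If the check passes, run the same command again without --check.

One caveat with --link: once you start the new cluster, the old cluster is
no longer usable, so make sure your backups are in order first.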