On 26/04/21, Marc Millas (marc.millas@xxxxxxxxxx) wrote:
> compression ?
>
> I am currently working on a project to move an oracle db to postgres.
> The db is 15 TB.
> with Oracle compression it does use 5 TB of disk space.
>
> If we cannot compress the whole thing, the project loses its economic base.
> (added 10 TB for prod, 10TB for pre-prod, 10TB for testing dev, ...)
>
> we do test zfs, and we will give a try to btrfs.

I've been using btrfs with lzo compression for several years on my
personal laptop and some non-critical backup systems with no trouble.
(In fact btrfs has helped us recover from some disk failures really
well.) While I run PostgreSQL on my machine, it is only for light
testing, so I wouldn't want to comment on its suitability for
production.

Some differences between lzo and zlib compression performance for
PostgreSQL are reported here:

https://sudonull.com/post/96976-PostgreSQL-and-btrfs-elephant-on-an-oil-diet

zstd compression support for btrfs is benchmarked by Phoronix here:

https://www.phoronix.com/scan.php?page=article&item=btrfs-zstd-compress&num=2

The compression page of the btrfs wiki is here:

https://btrfs.wiki.kernel.org/index.php/Compression

You might want to arm yourself against possible problems by reading the
Debian btrfs wiki page:

https://wiki.debian.org/Btrfs

If you test your workload, please let us know your results.

Rory
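
P.S. In case it helps with your sizing tests: btrfs compression is
enabled per mount (or per file), not per table, so the whole PostgreSQL
data directory is compressed transparently. A rough sketch of how I set
it up; the device, mount point and zstd level below are only
placeholders, and the "zstd:level" syntax needs a reasonably recent
kernel, if I remember correctly:

    # mount the filesystem holding the data directory with zstd compression
    mount -o compress=zstd:3,noatime /dev/sdb1 /var/lib/postgresql

    # or the equivalent /etc/fstab entry
    UUID=<fs-uuid>  /var/lib/postgresql  btrfs  compress=zstd:3,noatime  0  0

    # recompress data that was written before the option was set
    btrfs filesystem defragment -r -czstd /var/lib/postgresql

Newly written files pick up the compression automatically; only
pre-existing data needs the defragment pass. There is also a
compress-force variant of the mount option which, as far as I know,
skips btrfs's heuristic for deciding whether data is worth compressing.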