Hi: I need a sanity check (really just brainstorming) before I jump into coding something that may already have a better solution known to a community like this one. Here's the situation...
To begin with... PG 11.5 on Linux.
Some code (the nature of which is unimportant to this discussion) generates a lot (a lot) of data and stuffs it into a SQLite DB which, once the process is complete, gets zipped (the compression ratio seems to be about 5:1). We want to keep this data in a persistent store that others can get at should they need it. PG seems like a great place to put it, especially as the identifying context for the SQLite already exists in our PG DB.
So I was thinking about storing the zipped SQLite as a blob in PG. The record it would be added to would give it all the context needed for proper retrieval. After retrieval (in a Perl script) I was thinking about writing it out to a temp file on disk, unzipping it, and then opening it using Perl/DBI. The metadata of the SQLite could be replicated in a temp table in PG and the data content loaded into that for use. In theory, multiple SQLite DBs could be opened like this and loaded into the same temp tables (as long as the metadata is the same... which it will be).
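Just to make that concrete, here's a rough sketch of the round trip I'm picturing, in Perl/DBI with DBD::Pg and DBD::SQLite, assuming the blob lives in a bytea column and the compression is gzip. The table/column names (results_archive, run_id, sqlite_gz), the "data" table inside the SQLite, and the temp-table schema are all placeholders; the real record would carry our existing identifying context:

  #!/usr/bin/perl
  use strict;
  use warnings;
  use DBI;
  use DBD::Pg qw(:pg_types);
  use File::Temp qw(tempfile);
  use IO::Uncompress::Gunzip qw(gunzip $GunzipError);

  my $pg = DBI->connect('dbi:Pg:dbname=mydb', 'me', 'secret',
                        { RaiseError => 1, AutoCommit => 1 });

  # Storing: slurp the gzipped SQLite file and insert it as a bytea.
  open my $in, '<:raw', 'run_1234.sqlite.gz' or die "open: $!";
  my $blob = do { local $/; <$in> };
  close $in;

  my $ins = $pg->prepare(
      'INSERT INTO results_archive (run_id, sqlite_gz) VALUES (?, ?)');
  $ins->bind_param(1, 1234);
  $ins->bind_param(2, $blob, { pg_type => PG_BYTEA });  # binary data needs the bytea type hint
  $ins->execute;

  # Retrieval: fetch the blob, write it to a temp file, gunzip it.
  my ($gz) = $pg->selectrow_array(
      'SELECT sqlite_gz FROM results_archive WHERE run_id = ?', undef, 1234);

  my ($gz_fh, $gz_file) = tempfile(SUFFIX => '.sqlite.gz', UNLINK => 1);
  binmode $gz_fh;
  print {$gz_fh} $gz;
  close $gz_fh;

  my ($db_fh, $db_file) = tempfile(SUFFIX => '.sqlite', UNLINK => 1);
  close $db_fh;
  gunzip $gz_file => $db_file or die "gunzip failed: $GunzipError";

  # Loading: open the SQLite copy and push its rows into a PG temp table
  # whose definition mirrors the (fixed, known) SQLite metadata.
  my $lite = DBI->connect("dbi:SQLite:dbname=$db_file", '', '',
                          { RaiseError => 1 });

  $pg->do('CREATE TEMP TABLE work_data (id bigint, payload text)');
  my $sel = $lite->prepare('SELECT id, payload FROM data');
  my $put = $pg->prepare('INSERT INTO work_data (id, payload) VALUES (?, ?)');
  $sel->execute;
  while (my @row = $sel->fetchrow_array) {
      $put->execute(@row);
  }

One thing I'd keep an eye on with this approach: a single bytea value is capped at 1 GB, so if a zipped DB could ever get bigger than that I'd have to look at large objects instead. And for the real load into the temp table I'd batch the inserts (or use COPY) rather than doing one INSERT per row.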
OK, so that's the plan. What I want to ask this community is whether there's a better way to approach it. Brute-force loading the SQLite data into regular PG tables would result in billions of records and the DB-management issues that come with that. And there's really no need to keep all that data immediately accessible like that. Is there some sort of innate PG functionality that would allow me to store and then compress my data for targeted retrieval/decompression (even if it's not accessible to SQL in that form)?
OK, you get the picture. I'm all ears :-) And thanks in advance for any suggestions!