I'm brainstorming ideas on how to design database schemas that share a common subset of objects and stumbled on some issues. Maybe someone has experience with such a design. Any ideas are welcome!

Use case: My team wants a "template" database from which we can create databases for new customer projects. Customer projects may have additional objects building on top of the common objects provided by the template. Think of the common objects as our core library. Each customer project deals with geographic objects in the same business domain, and we have already built prototypes for a few projects using PostGIS. The core library will consist of tables, views, etc. that model entities from the business domain.

Each project should get a dedicated database to isolate data and avoid accidental mixing. I think it would also be possible to handle all projects in a single database by extending primary keys to include the project ID and using table partitioning (first sketch in the P.S. below), but there's always the risk of mixing up projects through errors in WHERE clauses or JOIN conditions. Of course this should be covered with tests, but shit happens. Customers will not access the databases directly; management just wants to keep the data as isolated as possible.

My ideas: I already ruled out Postgres' template databases because CREATE DATABASE just copies the template, and there is no mechanism for updating the copies afterwards other than managing each copy's schema individually, as if it had been created from scratch.

This led me to extensions, because they are versioned and different versions can be installed per database, so each customer project can evolve at its own pace. The extension for our common objects would also include tables, and I have already verified that their contents can be dumped by pg_dump [1] for our backups (second sketch in the P.S.). Postgres also protects the integrity of extensions by preventing the removal of objects that belong to an extension.

Issue with extensions: At this point I noticed that pg_dump emits CREATE EXTENSION without the version. Why is that? I assumed that pg_dump creates exact snapshots, especially for the purpose of backups. The pg_dump documentation does not say anything about this, so I checked the source code and found that the version is omitted deliberately in favour of the default extension version on the destination installation [2]. I can see the risk of a specific extension version not being available on a different installation, but that would simply cause the restore to fail on a non-existent extension version, which could be fixed by providing the required extension version on the destination installation.

I guess the intended solution is to create the database and install the expected extension version before restoring the dump; that's why pg_dump emits CREATE EXTENSION IF NOT EXISTS (third sketch in the P.S.). But this requires manual intervention instead of taking the dump as is. The alternative would be to recreate the schema from version control and restore only the data.

Because of this issue I'm hesitant to go with an extension for this use case, especially once we have versions with backward-incompatible changes. Instead I'm thinking about something similar to the update scripts we use for extensions, but checked into version control for each customer project.

Erik

[1] https://www.postgresql.org/docs/14/extend-extensions.html#EXTEND-EXTENSIONS-CONFIG-TABLES
[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/bin/pg_dump/pg_dump.c;h=a2dc42e278b0aca7d8074ebe66670b56099bab4d;hb=86a4dc1e6f29d1992a2afa3fac1a0b0a6e84568c#l10473
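
P.S. To make the alternatives above a bit more concrete, here are three rough sketches. All table names, version numbers, and the extension name "corelib" are made up for illustration.

First sketch, the single-database alternative: carry the project ID in every primary key and list-partition the domain tables per project (assumes PostGIS for the geometry column).

    CREATE TABLE project (
        project_id  integer PRIMARY KEY,
        name        text NOT NULL
    );

    -- Every domain table includes the project ID in its primary key
    -- and is list-partitioned by project.
    CREATE TABLE site (
        project_id  integer NOT NULL REFERENCES project (project_id),
        site_id     integer NOT NULL,
        geom        geometry(Point, 4326),
        PRIMARY KEY (project_id, site_id)
    ) PARTITION BY LIST (project_id);

    CREATE TABLE site_project_1 PARTITION OF site FOR VALUES IN (1);
    CREATE TABLE site_project_2 PARTITION OF site FOR VALUES IN (2);

Every query still has to filter on project_id, which is exactly where the WHERE/JOIN mistakes I'm worried about would creep in.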
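
Second sketch, dumping the contents of extension tables: the extension's install script has to mark those tables as configuration tables via pg_extension_config_dump [1], otherwise pg_dump skips their data.

    -- corelib--1.0.sql (install script of the hypothetical "corelib" extension)
    CREATE TABLE feature_class (
        feature_class_id  integer PRIMARY KEY,
        name              text NOT NULL
    );

    -- Mark the table as a configuration table so that pg_dump
    -- includes its contents (empty second argument = dump all rows).
    SELECT pg_catalog.pg_extension_config_dump('feature_class', '');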
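
Third sketch, the restore workflow I believe is intended, given that the dumped CREATE EXTENSION IF NOT EXISTS carries no version: install the expected version by hand before feeding in the dump (psql session, version number made up).

    CREATE DATABASE customer_a;
    \c customer_a
    CREATE EXTENSION corelib VERSION '1.4';

    -- The dump's CREATE EXTENSION IF NOT EXISTS corelib; is now a no-op,
    -- and the COPY statements for the configuration tables load the data
    -- into the already existing extension tables (assuming the installed
    -- version still matches the dumped table definitions).
    \i customer_a_backup.sql

That's the manual intervention I'd like to avoid.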