Hi,
As many have surmised from my previous messages, I need to create a set of schemas,
where two schemas are smallish and distinct, plus an arbitrary number of additional schemas (of a 3rd kind),
which is "largish" (244 tables, 4'000+ columns, 1'300+ relations).
That's why I say/write that I have a system of 2+N schemas.
All 3 schema types are created from code, which issues a bunch of DDL statements via libpq.
Creating the first two "fixed" schemas is fast enough. 300ms, including populating the 2nd with a bunch of rows.
But creating the 3rd kind, which will be done many times, is not so fast, at 2.5s - 3s. That may seem fast,
but when you do it dozens of times in unit tests, it adds up quickly to slow-running tests...
My question is what could be done to speed up the creation of that 3rd type of schema?
Right now, that implies quite a few round trips to the server, about 1'100, but on a LAN with sub-ms latency.
And IO-wise, it totals less than 1MB (since it's mostly DDL):
EXEC: 1,125x ( 0 rows, 445,373 bytes) in 1.810s ( 0.2 MB/s)
RSET: 1,187x ( 14 rows, 263,480 bytes) in 0.000s ( 0.0 MB/s)
(1.8s is spent in libpq itself; the rest, up to the 2.5s - 3s total, is overhead in our own code to generate the schema).
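One lever, before any server-side cloning, would be to cut the ~1'100 round trips by concatenating DDL statements: libpq's PQexec accepts a semicolon-separated multi-statement string and runs it in one round trip (and one implicit transaction). Here is a minimal sketch of the batching side in Python; the statement list is a made-up stand-in for the real generated DDL:

```python
# Batch many DDL statements into a few large multi-statement strings.
# Each batch can then be sent in a single round trip, e.g. via PQexec,
# which executes a semicolon-separated string in one implicit transaction.
def batch_statements(statements, max_bytes=64 * 1024):
    batches, current, size = [], [], 0
    for stmt in statements:
        stmt = stmt.rstrip().rstrip(";")
        if current and size + len(stmt) > max_bytes:
            batches.append(";\n".join(current) + ";")
            current, size = [], 0
        current.append(stmt)
        size += len(stmt) + 2  # + separator
    if current:
        batches.append(";\n".join(current) + ";")
    return batches

# Hypothetical statement list standing in for the real schema generator.
ddl = [f"CREATE TABLE s.t{i} (id integer PRIMARY KEY)" for i in range(1000)]
batches = batch_statements(ddl)
# 1,000 statements now travel in a handful of round trips instead of 1,000.
```

On a sub-ms LAN the savings may be modest, but it also removes per-statement protocol overhead, and the single implicit transaction avoids 1'100 individual commits.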
One idea I had, similar to the template-DB feature, would be to create 1 largish read-only schema at installation time
(i.e. create not just the 2 fixed schemas, but all 3 kinds), and have the ability to clone this 3rd (largish) kind entirely server-side.
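As far as I know PostgreSQL has no built-in schema-level clone, but a partial server-side approximation is CREATE TABLE ... (LIKE template.t INCLUDING ALL), which copies columns, defaults, CHECK constraints and indexes, though not foreign keys, triggers or functions, which would still need to be re-issued per clone. A sketch of generating those statements, with a hypothetical table list (the real one would come from a catalog query such as information_schema.tables):

```python
# Sketch: partial server-side "clone" of a template schema using
# CREATE TABLE ... (LIKE ... INCLUDING ALL). Copies columns, defaults,
# CHECK constraints and indexes -- but NOT foreign keys or triggers,
# which must still be re-created for each clone.
def clone_tables_ddl(template: str, target: str, table_names) -> list[str]:
    stmts = [f'CREATE SCHEMA "{target}";']
    for t in table_names:
        stmts.append(
            f'CREATE TABLE "{target}"."{t}" '
            f'(LIKE "{template}"."{t}" INCLUDING ALL);'
        )
    return stmts

# Hypothetical table names, standing in for a catalog lookup.
stmts = clone_tables_ddl("tmpl", "clone_1", ["parent", "child"])
```

Whether re-issuing only the FKs and triggers is meaningfully cheaper than the full 1'100 statements would need measuring.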
So is there a way to clone a schema easily in PostgreSQL?
Do note our trigger functions use SET search_path, to avoid relying on the session's search_path and to make sure we call "our" functions (in the schema).
(We have different client code-bases; some set the search_path, others don't, and that's the only way we found to have the triggers function correctly.)
This means the template-schema name is part of the DDL for the schema, and a clone would need to use its own search_path, not the original's.
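To make the cloning wrinkle concrete, here is the kind of DDL our generator emits, sketched in Python with a simplified stand-in body (the real functions are more involved). Because SET search_path is baked into each function's DDL, a server-side copy of the template would carry the template's name, and each clone needs this DDL re-emitted with its own:

```python
# Sketch of generating a trigger function whose search_path is pinned
# to its own schema, so it resolves "our" functions regardless of the
# session's search_path. The function body is a simplified stand-in.
def trigger_function_ddl(schema: str) -> str:
    return f"""\
CREATE FUNCTION "{schema}".on_row_change() RETURNS trigger
LANGUAGE plpgsql
SET search_path = "{schema}"
AS $$
BEGIN
  -- resolves to "{schema}".audit() thanks to the pinned search_path
  PERFORM audit(TG_TABLE_NAME, NEW.id);
  RETURN NEW;
END;
$$;"""
```

So each clone would call trigger_function_ddl with its own name; a byte-for-byte server-side copy of the template's functions would keep pointing at the template.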
At this point, the schema is "standalone": it has no foreign keys outside itself, nor does it use functions/procedures from the other 2 schemas.
But in the future it could. And right now, the schema does use functions from extensions (e.g. uuid-ossp, lo, ...). In all cases, the clone should continue
referencing external objects as-is.
If the "SCHEMA template" idea is difficult to pull off, does anyone have experience with the gains from issuing the CREATE TABLE DDLs as part of the CREATE SCHEMA statement itself?
That would result in fewer round trips, but I somehow doubt it would gain much (again, because we're on a LAN with sub-ms latencies).
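For reference, CREATE SCHEMA does accept schema_element sub-commands (CREATE TABLE, CREATE VIEW, CREATE INDEX, CREATE SEQUENCE, CREATE TRIGGER, GRANT), with unqualified names resolving to the new schema, so at least the tables can collapse into one statement. A sketch of composing such a statement, with placeholder table definitions:

```python
# Compose a single CREATE SCHEMA statement carrying CREATE TABLE
# sub-commands (PostgreSQL's schema_element clauses). Note the
# sub-commands are separated by whitespace, not semicolons, and
# unqualified table names are created in the new schema.
def create_schema_ddl(schema: str, tables: dict[str, str]) -> str:
    elements = "\n  ".join(
        f"CREATE TABLE {name} ({cols})" for name, cols in tables.items()
    )
    return f'CREATE SCHEMA "{schema}"\n  {elements};'

# Placeholder table definitions for illustration.
ddl = create_schema_ddl("clone_1", {
    "parent": "id integer PRIMARY KEY",
    "child": "id integer PRIMARY KEY, parent_id integer REFERENCES parent",
})
```

The whole thing then runs as one statement in one transaction, though I'd expect the win over plain statement batching to be small.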
Any other tips on how to clone a schema? Or speed up creation of a schema in general?
Please do note this needs to be done from a libpq client, which has no direct access to the server (i.e. cannot use dump/restore solutions).
Thanks for any insights on the above. --DD