On 2024-02-14 11:25:48 +0100, Daniel Gustafsson wrote:
> > On 14 Feb 2024, at 10:59, Simon Connah <simon.n.connah@xxxxxxxxxxxxxx> wrote:
>
> > This is probably a stupid question so I apologise in advance.
>
> There is no such thing.
>
> > What I think is the best way to do this is to do a pg_dump of the
> > database (using the --schema-only flag) and then load it into a
> > test-only database that gets created at the start of the unit tests
> > and destroyed at the end. The automated tests will insert, update,
> > delete and select data to test if it all still works.
>
> If the source of truth for your schema is the database, then sure. If
> the source of truth is a .sql file in your source code repository then
> you should use that.

I sort of do both for one of my projects: I originally created the SQL
script by running pg_dump on a manually constructed test database (I do
have code to create the empty schema, but I had to insert the test data
manually). That script went into the git repo.

The test files all contain a fixture which drops and recreates the test
database using that SQL file.

When I add a new test case I try to make do with the existing test data.
When I need additional data for a new test case, I create a new pristine
test database from the SQL file, add the new data, and then create a new
SQL file using pg_dump, which is then committed together with the test
cases. Same for migrations: if I need to migrate the schema, I run the
migration on the test database, then dump and commit it.

This project is small enough (86 tests in 10 files) that all test cases
can use the same test data. However, I could easily use different test
data for different tests.
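In case it's useful, the fixture looks roughly like this. This is a
minimal sketch, assuming pytest and psycopg 3; the database name, admin
DSN and dump path are made up, and the real code differs in details:

    import subprocess

    import psycopg
    import pytest

    ADMIN_DSN = "dbname=postgres"     # hypothetical maintenance connection
    TEST_DB = "myproject_test"        # hypothetical test database name
    DUMP_FILE = "tests/testdata.sql"  # the pg_dump output committed to git

    @pytest.fixture
    def db():
        # Start from a known state: throw away whatever the last run
        # left behind and recreate the database from scratch. DROP/
        # CREATE DATABASE can't run inside a transaction, hence
        # autocommit.
        with psycopg.connect(ADMIN_DSN, autocommit=True) as conn:
            conn.execute(f'DROP DATABASE IF EXISTS "{TEST_DB}"')
            conn.execute(f'CREATE DATABASE "{TEST_DB}"')
        # Restore the committed dump with psql; ON_ERROR_STOP makes a
        # broken dump fail the test run instead of passing silently.
        subprocess.run(
            ["psql", "-X", "-q", "-v", "ON_ERROR_STOP=1",
             "-d", TEST_DB, "-f", DUMP_FILE],
            check=True,
        )
        # Hand the test a connection to the freshly loaded database.
        with psycopg.connect(f"dbname={TEST_DB}") as conn:
            yield conn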
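The re-dump step after adding data (or running a migration) is just
pg_dump pointed at that database. I usually run it by hand, but as a
sketch with the same made-up names (--no-owner is my habit so the dump
restores under any role, not something you need):

    import subprocess

    TEST_DB = "myproject_test"
    DUMP_FILE = "tests/testdata.sql"

    def refresh_dump():
        # Write schema plus data back to the file in the repository.
        # Plain format is the pg_dump default, which is what psql
        # reads back in the fixture above.
        subprocess.run(
            ["pg_dump", "--no-owner", "-f", DUMP_FILE, TEST_DB],
            check=True,
        )

After that it's just a matter of committing the refreshed file together
with the new test cases.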
hp

--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp@xxxxxx         | -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |    challenge!"