-----Original Message-----
>From: Mark Morgan Lloyd
>Sent: Thursday, June 30, 2016 2:41 AM
>
>Neil Anderson wrote:
>> On 2016-06-29 12:37 PM, Mark Morgan Lloyd wrote:
>>> Elsewhere, somebody was asking how people implemented version control
>>> for stored procedures on (MS) SQL Server.
>>>
>>> The consensus was that this is probably best managed by using scripts
>>> or command files to generate stored procedures etc., but does anybody
>>> have any comment on that from the POV of PostgreSQL?
>
>[etc.] Thanks everybody, summary passed on.
>
>Mark Morgan Lloyd
>markMLl .AT. telemetry.co .DOT. uk

A bit late to the thread, but here are some specific details on how I've implemented version control in PG 9.5, in a small team environment deploying to single database servers in each tier (dev, qa, stage, prod). It's working well so far, and it allows my stored proc versions to be aligned with the middle- and upper-tier code releases. I'm the lead database architect-engineer for a brand new genomics application (lots of data).

Details:

- We're using git for version control, with a base name for each repo that holds a single micro-service (like "JobManager") and a suffix for the data tier code ("JobManagerDBMS"), making it simple for devops to find the related code for a micro-service deployment by repo.

- Within a DBMS repo, I've got subfolders like "scripts", "sprocs", and "documentation": "scripts" holds ad hoc scripts that need to be run during a deployment (adjusting DDL, seeding, or cleaning up data), "sprocs" holds the stored function files that must be compiled into PG, and "documentation" holds notes, data models, etc. We have a simple python script that compiles/recompiles all stored proc files within a named folder - deployment done with one call (a minimal sketch of that idea is at the end of this message).

- I only code using source code files, by cloning an existing suitable base stored proc (akin to a template) to a new file name (like a "get" or "set" stored proc) and then revising it to match the requirement. In a detailed comment block within each stored proc, I list a version number (just for reference, not used programmatically at this point), change history, author, comments, and one or more sample calls that form the basis of my unit tests (an illustration of that header block is also at the end of this message).

- After I've finished the requested work and the stored procs are working as expected, I update a Version file in the folder, push it into git, and merge it into the development branch.

- Let's say a new enhancement request comes in. I create a new branch (like "RequestForNewThingy"), version that, do the work, and merge it back in just like the above. So we've got isolation and persistence of changes.

- I happen to be using the new DataGrip code editor, which supports this beautifully, since my git tree appears on the right side of the editor window, allowing me to edit/clone directly without leaving the editor.

My coding efficiency using this model is quite high...the overhead of using git is trivial.

For rollbacks, we can simply point to the prior stored proc versions and recompile those (there's a sketch of that at the end of this message too). For DDL rollbacks, I have to code those scripts and supply them...this is the one place I have to spend a bit more time creating a more automated solution. I'd love to hear how other folks have solved programmatic rollbacks.

Mike Sofen (San Diego, CA USA)
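
Here is a minimal sketch of the compile-everything-in-a-folder idea. It is not our actual deployment script: the folder name, the connection string, the use of psycopg2, and the assumption that every file is written as CREATE OR REPLACE FUNCTION are all illustrative.

#!/usr/bin/env python
# compile_sprocs.py - run every .sql file in a folder against a target database.
# Illustrative sketch only: folder layout, DSN, and error handling are assumptions.
import glob
import os
import sys

import psycopg2


def compile_folder(folder, dsn):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # each CREATE OR REPLACE FUNCTION stands on its own
    cur = conn.cursor()
    for path in sorted(glob.glob(os.path.join(folder, "*.sql"))):
        with open(path) as f:
            sql = f.read()
        print("compiling %s" % path)
        cur.execute(sql)  # file is assumed to contain CREATE OR REPLACE FUNCTION ...
    cur.close()
    conn.close()


if __name__ == "__main__":
    # e.g. python compile_sprocs.py sprocs "host=devdb dbname=jobmanager user=deploy"
    compile_folder(sys.argv[1], sys.argv[2])

Writing every stored proc file as CREATE OR REPLACE FUNCTION is what makes re-running the whole folder safe, so a deployment really is one call.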
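
The comment block is just text at the top of each stored proc file, and we clone and edit by hand in DataGrip rather than scripting it. Purely to illustrate the kind of fields that header carries (and how the cloning step could be scripted if someone wanted to), here is a hypothetical helper - the field names, template path, and layout are made up:

# new_sproc.py - clone a template stored proc file and stamp a fresh header block.
# Hypothetical helper: we actually clone and edit by hand in the editor; this only
# illustrates the kind of header each stored proc file carries.
import datetime
import shutil
import sys

HEADER = """\
/*---------------------------------------------------------------
  Function : {name}
  Version  : 0.1   (reference only, not used programmatically)
  Author   : {author}
  Created  : {created}
  Comments : <what this function does and why>
  History  : 0.1  {created}  initial version
  Sample call (basis of the unit test):
    select * from {name}( <args> );
---------------------------------------------------------------*/
"""


def new_sproc(template_path, new_path, name, author):
    shutil.copyfile(template_path, new_path)  # clone the base/template proc
    with open(new_path) as f:
        body = f.read()
    with open(new_path, "w") as f:
        f.write(HEADER.format(name=name, author=author,
                              created=datetime.date.today().isoformat()))
        f.write(body)


if __name__ == "__main__":
    # e.g. python new_sproc.py sprocs/template_get.sql sprocs/get_job_status.sql get_job_status msofen
    new_sproc(*sys.argv[1:5])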
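
And on rollbacks: since every stored proc lives in git, "point to the prior version and recompile" can be driven from a release tag or commit without checking out the old tree. Again a rough sketch rather than production tooling - the tag name, sprocs folder, and DSN are placeholders:

# rollback_sprocs.py - recompile every stored proc file as it existed at a prior git ref.
# Sketch only: assumes releases are tagged, each file is a self-contained
# CREATE OR REPLACE FUNCTION, and the script is run from the repo root.
import subprocess
import sys

import psycopg2


def sproc_files_at(ref, folder="sprocs"):
    # list the .sql files that existed under sprocs/ at the given ref
    out = subprocess.check_output(
        ["git", "ls-tree", "-r", "--name-only", ref, folder],
        universal_newlines=True)
    return [p for p in out.splitlines() if p.endswith(".sql")]


def rollback(ref, dsn):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True
    cur = conn.cursor()
    for path in sproc_files_at(ref):
        # pull the file content as of the prior release, without checking it out
        sql = subprocess.check_output(["git", "show", "%s:%s" % (ref, path)],
                                      universal_newlines=True)
        print("recompiling %s from %s" % (path, ref))
        cur.execute(sql)
    cur.close()
    conn.close()


if __name__ == "__main__":
    # e.g. python rollback_sprocs.py v1.4.2 "host=devdb dbname=jobmanager user=deploy"
    rollback(sys.argv[1], sys.argv[2])

This only covers the stored procs; the DDL side still needs hand-written rollback scripts, which is the gap I mentioned above.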