No. Just no. It is fine to make stored procedure changes in a development environment and deploy them as part of a release. Typically you would want some sort of change tracking software like alembic or sqitch or something like that. Production databases typically contain a huge amount of data, or data that you really shouldn't share.
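For illustration only, a minimal sketch of that kind of tracked change with sqitch (alembic has an analogous revision/upgrade workflow; the project, host and database names here are made up):

    sqitch init appdb --engine pg        # one-time project setup
    sqitch add add_customer_fn -n 'Add customer lookup function'
    # edit the generated deploy/add_customer_fn.sql, revert/add_customer_fn.sql
    # and verify/add_customer_fn.sql scripts, then:
    sqitch deploy db:pg://deployer@prod-host/appdb
    sqitch verify db:pg://deployer@prod-host/appdb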
Part of the release is to deploy changes to production systems. Often you will want to back up those systems before a release in case you have to roll back, or just make small changes that you can revert. I would say that release procedures for database deployment are well beyond the scope of this list, and every company I have worked for has different procedures.
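As a rough illustration (the host, user and database name appdb are placeholders), the pre-release safety copy can be as simple as:

    # custom-format dump taken just before the release, kept for rollback
    pg_dump -Fc -h prod-host -U deployer -f appdb_pre_release.dump appdb
    # if the release has to be rolled back:
    # pg_restore -h prod-host -U deployer -d appdb --clean appdb_pre_release.dump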
Basically, the OP wanted to replicate a process that sort of works in MSSQL, kind of, maybe. The set of steps outlined will dump all of the tables and restore them. If this is their process, I highly question that process, but those steps are correct. I would point out that eventually that system will break down, is highly dependent on individuals knowing a lot of steps, possibly exposes data to people who shouldn't have it, is overly complicated, probably isn't best practice for releases, and is error prone (someone makes a change that no one else knows about and it breaks a webpage).
So is this the best? In my opinion, probably not. Will it work in the way that the OP wanted it to work? Yes. I simply wouldn't manage a process like this, but if that is the process that the OP is comfortable with and that a lot of people have agreed to, it will work. Sometimes it is easier to simply replicate the existing bad process that a team agrees to rather than making a better one.
Thanks,
Ben
On Fri, Jan 22, 2021, 9:21 AM Rory Campbell-Lange <rory@xxxxxxxxxxxxxxxxxx> wrote:
On 22/01/21, Benedict Holland (benedict.m.holland@xxxxxxxxx) wrote:
> I mean... the best? You just laid out a bunch of steps to define a process.
>
> > On Thu, Jan 21, 2021 at 11:12 PM sivapostgres@xxxxxxxxx <
> > sivapostgres@xxxxxxxxx> wrote:
> >
> >> So the solution to the issue will be
> >>
> >> 1. Backup a DB using PGDUMP from 1st server.
> >> 2. Restore the DB in 2nd server.
> >> 3. Make required changes in the 2nd server.
> >> 4. Backup that DB using PGDUMP from 2nd server.
> >> 5. Delete / Rename that DB in the 1st server
> >> 6. Restore that DB in the 1st server.
> >> 7. Work again in the 1st server.
> >>
> >> Is this the best way to carry out this process?
Rather late to the party, and I expect this has been mentioned already,
but presumably changes to any database are either to do with the data or
to do with aspects such as the pl functions.
Data transformations can be tested inside transactions on both the
production and testing databases, so that the logic of the
transformation is captured in a set of SQL statements which can be
applied to either environment through a (possibly automated) revision
control system.
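For example, a transformation can be rehearsed inside a transaction and
rolled back (a sketch only; the table and column names are invented):

    psql -d appdb <<'SQL'
    BEGIN;
    UPDATE customers SET status = 'archived'
      WHERE last_order < now() - interval '2 years';
    SELECT count(*) FROM customers WHERE status = 'archived';  -- sanity check
    ROLLBACK;  -- or COMMIT once the statements are captured for revision control
    SQL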
Data insertions are most conveniently done only on the production
database, as converging data between different databases can be tricky.
However, if your data is added in bulk without the risk of duplication,
a revision control approach could also work.
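If the inserts can be made idempotent, for instance with ON CONFLICT,
the same script can safely be replayed in either environment
(illustrative names; assumes a unique constraint on code):

    psql -d appdb <<'SQL'
    INSERT INTO country (code, name)
    VALUES ('NZ', 'New Zealand'), ('IE', 'Ireland')
    ON CONFLICT (code) DO NOTHING;  -- replaying does not duplicate rows
    SQL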
Finally, we make changes to pl functions and the like out of revision
control. In other words, "if it works in testing we apply it to
production". Of the three, only this is idempotent.
In any event, perhaps the following could work (a rough sketch of the
commands follows the list)?
1. Backup the DB on the 1st server using pg_dump
2. Restore the DB on the 2nd server
3. Record changes on the 2nd server as revision controlled statements
4. Replay changes on the 1st server using the revision controlled
statements
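In command terms that might look something like this (the host names,
the database name appdb and the change file name are placeholders):

    # 1. dump on the 1st server
    pg_dump -Fc -h server1 -U deployer -f appdb.dump appdb
    # 2. restore on the 2nd server
    createdb -h server2 -U deployer appdb
    pg_restore -h server2 -U deployer -d appdb appdb.dump
    # 3. develop on server2, recording changes as e.g. changes/0001_add_fn.sql
    # 4. replay those statements on the 1st server
    psql -h server1 -U deployer -d appdb --single-transaction -f changes/0001_add_fn.sql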
Regards
Rory