
Re: Moving several databases into one database with several schemas

2012/9/7 Merlin Moncure <mmoncure@xxxxxxxxx>
On Thu, Sep 6, 2012 at 10:12 PM, Edson Richter <edsonrichter@xxxxxxxxxxx> wrote:
> On 06/09/2012 15:40, John R Pierce wrote:
>
>> On 09/06/12 5:30 AM, Edson Richter wrote:
>>>>
>>>> You could change the default setting for the user with
>>>>
>>>> ALTER ROLE someuser SET search_path=...
>>>
>>> That is perfect! I can have separate users for each application, and then
>>> they will have the correct search path.
>>> You saved my day,
>>
>>
>> the default search_path is "$user", public, so by naming your schemas
>> after the usernames, you don't even need to alter the role...
>>
> Wonderful, this has exactly the effect I expect: the connection itself
> defines the path. I'll use the connecting user to select the specific
> schema, and keep the "public" schema as the main schema.
>
> Thanks to you all, I think I have everything I need to put my migration
> project into practice.
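
A minimal sketch of the per-user setup described above; the role name,
schema name, and database name here are only examples:

  # one role and one schema per application, named identically; the
  # default search_path of "$user", public then resolves unqualified
  # names in that application's schema first and falls back to public,
  # so no ALTER ROLE ... SET search_path is needed
  psql -c "create role app1 login password 'secret'" master_db
  psql -c "create schema app1 authorization app1" master_db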

I do this exact thing frequently.  I route everything through dumps.
Here's a roughed-out bash script for you.  The basic MO is to hack the
restore script with sed so that it restores into a scratch schema; the
drop/swap of the client's private schema can then be deferred until the
data has already been loaded.

function load_client {

  client=$1
  database=master_db

  echo "[`date`] Loading $client"

  # mark the load as started
  psql -c "update client set load_started = now(), LoadedPO = NULL where name = '$client';" $database

  <get backup database and place into $client.current.gz>

  # (re)create the scratch schema the dump will be restored into
  psql -c "drop schema if exists ${client}_work cascade" $database 2>&1 | grep ERROR
  psql -c "create schema ${client}_work" $database 2>&1 | grep ERROR

  # rewrite the dump's search_path so everything loads into the scratch
  # schema, surfacing errors but ignoring the usual plpgsql noise
  gzip -cd $client.current.gz | sed "s/^SET search_path = public/SET search_path=${client}_work/" | psql -XAq $database 2>&1 | grep ERROR | grep -v "plpgsql"

  # swap the freshly loaded schema in for the live one in one transaction
  psql -c "begin; drop schema if exists ${client} cascade; alter schema ${client}_work rename to $client; commit;" $database

  psql -c "update client set load_finished = now() where name = '$client';" $database

  rm -f $client.current.gz
}
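
For the <get backup database ...> step above, one possible approach
(assuming each client still lives in its own source database that
pg_dump can reach; the host variable here is just an example) is a
plain-format dump piped through gzip:

  # hypothetical: plain-format dump of the per-client source database
  # into the file load_client expects; plain (-Fp) output matters here
  # because the sed above rewrites the SQL text of the dump
  pg_dump -h $source_host -Fp $client | gzip > $client.current.gz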

To cut restore time down I run them in parallel:

NUM_FORKS=4

function do_parallel {
  # throttle: wait until fewer than NUM_FORKS background jobs are running
  while [ `jobs | wc -l` -ge $NUM_FORKS ]
  do
    sleep 1
  done

  # kick off the given command in the background
  "$@" &
}

Then it's just a matter of:
<get $clients somehow>
for client in $clients
do
  do_parallel load_client $client
done
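
One thing worth adding at the end: the loop only throttles how many jobs
start, so the last few load_client runs are still going in the background
when it exits. A trailing wait blocks until they finish:

# wait for all remaining background loads to complete
wait
echo "[`date`] all client loads finished"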

Great stuff, Merlin! ;-)

--
// Dmitriy.


