
Re: Backup PostgreSQL from RDS straight to S3

s3fs, available on Linux, allows mounting an S3 bucket directly as a
local filesystem.
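A minimal mount sketch, assuming s3fs-fuse is installed and using a
placeholder bucket name and placeholder credentials:

    # s3fs wants the key pair in a 600-mode password file.
    echo 'ACCESS_KEY_ID:SECRET_ACCESS_KEY' > ${HOME}/.passwd-s3fs;
    chmod 600 ${HOME}/.passwd-s3fs;

    # mount the bucket on the local path used below.
    s3fs your-backup-bucket /mnt/s3-mount-point \
        -o passwd_file=${HOME}/.passwd-s3fs;

At that point something like: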

  pg_dump ... | gzip -9 -c > /mnt/s3-mount-point/$basename.pg_dump.gz;

will do the deed nicely. If your S3 volume is something like
your_name_here.com/pg_dump then you could parallelize the dumps by
writing separate databases into paths based on the date and database
name:

    tstamp=$(date +%Y.%m.%d-%H.%M.%S);

    gzip='/bin/gzip -9 -v';
    dump='/opt/postgres/bin/pg_dump -blah -blah -blah';

    # the target directory has to exist before the redirects fire.
    mkdir -p "/mnt/pg-backups/$tstamp";

    for i in your database list
    do
        echo "Dump: '$i'";
        $dump "$i" | $gzip > "/mnt/pg-backups/$tstamp/$i.dump.gz" &
    done

    # at this point however many databases are dumping...

    wait;

    echo "Goodnight.";

If you prefer to keep only a few database backups (e.g., a rolling
weekly history) then use the day-of-week for the tstamp; if you
want to keep fewer than seven then $(( $(date +%s) / 86400 % $num_backups ))
will do (leap seconds notwithstanding).
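
For example, either of these in place of the timestamped tstamp
above (num_backups=4 is just a placeholder):

    # rolling weekly history: tstamp cycles Mon, Tue, ... Sun.
    tstamp=$(date +%a);

    # rolling window of $num_backups days.
    num_backups=4;
    tstamp=$(( $(date +%s) / 86400 % num_backups ));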

Check rates to see which AWS region is cheapest for the storage
and the processing to gzip the content. Also check the CPU charges
for zipping vs. storing the data: with smaller, more repetitive
content it may be cheaper in the long run to use "gzip --fast" than
to pay the extra CPU charges for "gzip --best".
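
A quick way to put numbers on that: time both levels against one
representative database and compare CPU seconds to compressed bytes
(a sketch; GNU time and the database name are placeholders):

    for level in --fast --best
    do
        echo "gzip $level:";
        # time's report lands on stderr, the byte count on stdout.
        $dump your_database | /usr/bin/time -f '%U user sec' gzip "$level" | wc -c;
    done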

-- 
Steven Lembark                                        3646 Flora Place
Workhorse Computing                                St. Louis, MO 63110
lembark@xxxxxxxxxxx                                    +1 888 359 3508




