
Re: archiving question

> On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ) <Markus.Zwettler@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: Michael Paquier <michael@xxxxxxxxxxx>
>> Sent: Friday, December 6, 2019 02:43
>> To: Zwettler Markus (OIZ) <Markus.Zwettler@xxxxxxxxxx>
>> Cc: Stephen Frost <sfrost@xxxxxxxxxxx>; pgsql-general@xxxxxxxxxxxxxxxxxxxx
>> Subject: Re: archiving question
>> 
>> On Thu, Dec 05, 2019 at 03:04:55PM +0000, Zwettler Markus (OIZ) wrote:
>> > What do you mean here?
>> >
>> > AFAIK, Postgres runs the archive_command once per WAL segment, i.e. log by log by log.
>> >
>> > How should we parallelize this?
>> 
>> You can, in theory, skip the archiving for a couple of segments and then do the
>> operation at once without the need to patch Postgres.
>> --
>> Michael
>
>
>Sorry, I am still confused.
>
>Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory away and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
>
>No, *absolutely* not.
>
>What you can do is have archive_command copy things one by one to a local directory (still sequentially), and then you can have a separate process that sends these to the archive -- and *this* process can be parallelized. 
>
>//Magnus
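For concreteness, the first (local, still sequential) stage Magnus describes can be as simple as an archive_command that stages each segment in a local directory, e.g.

    archive_command = 'test ! -f /var/lib/pgarchive/staging/%f && cp %p /var/lib/pgarchive/staging/%f'

where /var/lib/pgarchive/staging is an assumed staging directory, and %p and %f are the standard placeholders for the segment's path and file name. A sketch of the parallel second stage that drains this directory follows below.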
 


That was my initial question.

Is there a way to speed up this sequential, log-by-log archive_command copy when there are tons of logs in the pg_xlog directory? Something along the lines of the sketch below is what I have in mind:
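(A rough sketch only; the directory names, the worker count, and the script name are assumptions for illustration, not anything agreed in this thread.)

    #!/usr/bin/env python3
    # ship_wal.py -- hypothetical second-stage shipper: pushes the WAL
    # segments that archive_command staged locally to the archive, in parallel.
    import os
    import shutil
    from concurrent.futures import ThreadPoolExecutor

    STAGING = "/var/lib/pgarchive/staging"  # filled by archive_command (assumed path)
    ARCHIVE = "/backup/wal"                 # final archive location (assumed path)
    WORKERS = 8                             # degree of parallelism (assumed)

    def ship(name):
        src = os.path.join(STAGING, name)
        dst = os.path.join(ARCHIVE, name)
        tmp = dst + ".part"
        shutil.copy2(src, tmp)  # copy under a temporary name first ...
        os.rename(tmp, dst)     # ... then rename, so no torn files reach the archive
        os.remove(src)          # drop the staged copy only after success

    os.makedirs(ARCHIVE, exist_ok=True)
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        list(pool.map(ship, sorted(os.listdir(STAGING))))  # list() surfaces errors

Copying to a temporary name and renaming keeps half-written segments out of the archive; fsync and retry handling are left out for brevity.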

Markus




