On Fri, Dec 6, 2019 at 12:06 PM Zwettler Markus (OIZ) <Markus.Zwettler@xxxxxxxxxx> wrote:
> On Fri, Dec 6, 2019 at 10:50 AM Zwettler Markus (OIZ) <Markus.Zwettler@xxxxxxxxxx> wrote:
>> -----Original Message-----
>> From: Michael Paquier <michael@xxxxxxxxxxx>
>> Sent: Friday, December 6, 2019 02:43
>> To: Zwettler Markus (OIZ) <Markus.Zwettler@xxxxxxxxxx>
>> Cc: Stephen Frost <sfrost@xxxxxxxxxxx>; pgsql-general@xxxxxxxxxxxxxxxxxxxx
>> Subject: Re: archiving question
>>
>> On Thu, Dec 05, 2019 at 03:04:55PM +0000, Zwettler Markus (OIZ) wrote:
>> > What do you mean here?
>> >
>> > AFAIK, Postgres runs archive_command once per log segment, i.e., one log after another.
>> >
>> > How should we parallelize this?
>>
>> You can, in theory, skip the archiving for a couple of segments and then do the
>> operation at once without the need to patch Postgres.
>> --
>> Michael
>
>
> Sorry, I am still confused.
>
> Do you mean I should move (mv * /backup_dir) the whole pg_xlog directory away and move it back (mv /backup_dir/* /pg_xlog) in case of recovery?
>
> No, *absolutely* not.
>
> What you can do is have archive_command copy things one by one to a local directory (still sequentially), and then you can have a separate process that sends these to the archive -- and *this* process can be parallelized.
>
> //Magnus
> That has been my initial question.
> Is there a way to tune this sequential, log-by-log archive_command copy in case I have tons of logs within the pg_xlog directory?
It will be called one by one; there is no changing that. What you *do* with that command is up to you, so you can certainly tune that. But as soon as your command has returned, PostgreSQL will have the "right" to remove the file if it thinks it's time.

You could, for example, have a daemon that opens a file handle on the segment in response to your archive_command, thereby keeping the data from actually going away, and then archives it in private. In that case the archiving only has to wait for the daemon to acknowledge that the process has started, not that it has finished.
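A purely hypothetical Python sketch of that daemon idea; the socket path, archive directory, and one-line wire protocol are all made up for illustration, not an existing tool. archive_command would connect to the socket, send %p, and exit 0 only after reading "OK":

#!/usr/bin/env python3
# Hypothetical daemon that holds WAL segments open so their data stays
# readable (on Unix, an open fd keeps the inode alive) even if PostgreSQL
# recycles the file name after archive_command returns.
import os
import shutil
import socket
import threading

SOCKET_PATH = "/tmp/wal-archiver.sock"   # assumed rendezvous point
ARCHIVE_DIR = "/backup/wal_archive"      # assumed archive destination

def archive_later(fd, name):
    # Copy the still-open segment to the archive, then release the handle.
    try:
        with os.fdopen(fd, "rb") as src, \
             open(os.path.join(ARCHIVE_DIR, name), "wb") as dst:
            shutil.copyfileobj(src, dst)
            dst.flush()
            os.fsync(dst.fileno())       # don't trust the page cache
    except Exception:
        pass  # a real tool must alert here: the segment is now at risk

def main():
    os.makedirs(ARCHIVE_DIR, exist_ok=True)
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCKET_PATH)
    srv.listen(16)
    while True:
        conn, _ = srv.accept()
        with conn:
            # assumes the path fits in one recv, for brevity
            path = conn.recv(4096).decode().strip()
            try:
                fd = os.open(path, os.O_RDONLY)  # handle keeps the data alive
            except OSError:
                conn.sendall(b"FAIL")   # archive_command must exit non-zero
                continue
            conn.sendall(b"OK")         # archive_command can return now
            threading.Thread(target=archive_later,
                             args=(fd, os.path.basename(path))).start()

if __name__ == "__main__":
    main()

Note that once the daemon has said "OK", the segment exists on that box only as an open file handle until archive_later() finishes, which is exactly the trade-off below.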
There's always a risk involved in returning from archive_command before the file is safely stored on a different machine/storage somewhere. The more async you make it, the bigger that risk gets, but it also increases your ability to parallelize.
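For the less risky variant mentioned further up in the thread (archive_command only does a cheap local copy, e.g. something like archive_command = 'test ! -f /var/lib/pgsql/wal_staging/%f && cp %p /var/lib/pgsql/wal_staging/%f', and a separate job ships the staged segments in parallel), an equally hypothetical sketch; the staging directory and rsync destination are assumptions:

#!/usr/bin/env python3
# Hypothetical parallel shipper for locally staged WAL segments.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

STAGING_DIR = "/var/lib/pgsql/wal_staging"  # assumed staging directory
REMOTE = "backup-host:/wal_archive/"        # assumed rsync target

def ship(name):
    src = os.path.join(STAGING_DIR, name)
    # check=True raises on a failed transfer, so the staged copy is kept
    subprocess.run(["rsync", "-a", src, REMOTE], check=True)
    os.unlink(src)  # drop the staged copy only after a successful transfer

def main():
    segments = sorted(os.listdir(STAGING_DIR))
    # ship several segments at once; tune max_workers to your network
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(ship, s) for s in segments]
        for f in futures:
            f.result()  # surface any transfer failure

if __name__ == "__main__":
    main()

Run something like this from cron or a loop; since the staged copy is only removed after a successful transfer, a crash just means those segments get shipped again on the next run.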