On 11/20/20 4:32 PM, Mark Johnson wrote:
This all sounds like a previous discussion on pgsql-hackers about a progress meter for pg_dump; search the archives for it. A web search for something like "pg_dump progress meter" also turns up a few suggestions, such as piping through pv, although that does not appear to work with all of the output formats pg_dump supports.
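A minimal sketch of the pv approach, assuming a database named "mydb" (hypothetical) and pv installed; it only works for formats that can stream to stdout, such as plain or custom, not the directory format:

    # Stream a custom-format dump through pv to show throughput and bytes written.
    pg_dump -Fc mydb | pv > mydb.dump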
What do you see in pg_stat_activity? It's been a while since I tried to monitor a running pg_dump.
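One way to look, assuming psql access to the same server; pg_dump reports its application_name, so its backend is easy to spot:

    # Show what the pg_dump backend is currently doing.
    psql -d mydb -c "SELECT pid, state, wait_event, query
                     FROM pg_stat_activity
                     WHERE application_name = 'pg_dump';"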
Also, if you redirect the output to a file, doesn't the file's timestamp get updated each time something is written? (That's what I used to do with Oracle before they added timestamps.)
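A rough sketch of that idea using GNU stat (the OP is on Windows, where dir in a loop or PowerShell's Get-Item would serve the same purpose):

    # Poll the dump file's size and modification time every 10 seconds.
    while true; do stat -c '%y  %s bytes' mydb.dump; sleep 10; done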
On Fri, Nov 20, 2020 at 5:00 PM Ron <ronljohnsonjr@xxxxxxxxx> wrote:
On 11/20/20 3:39 PM, Adrian Klaver wrote:
> On 11/20/20 1:00 PM, Ron wrote:
>> On 11/20/20 2:56 PM, Adrian Klaver wrote:
>>> On 11/20/20 10:01 AM, Durumdara wrote:
>>>> Hello!
>>>>
>>>> We need to log pg_dump's progress: which objects are being copied,
>>>> and what the start and end times are.
>>>>
>>>> But when I try to redirect the output, the result doesn't have timestamps.
>>>>
>>>> PG 11, on Windows.
>>>>
>>>> As far as I can see, the -v option isn't enough to show the start times.
>>>>
>>>> For example:
>>>>
>>>> 2020-11-19 12:00:01.084 Dump table content table1
>>>> 2020-11-19 12:03:12.932 Dump table content table2
>>>> ...
>>>> etc.
>>>
>>>
>>> If you are redirecting to a file, it will have a creation time that you
>>> can use. Internally, times don't really matter for the objects, as the
>>> dump is based on a snapshot. That snapshot is based on visible
>>> transactions, not time. So for practical purposes they all occur at the
>>> same 'time'.
>>
>> It makes all the difference when monitoring the progress of a backup.
>>
>
> With -v you will get a running list of objects dumped, just not the times.
> The time is only of value relative to what remains: progress is only
> measurable by determining what is left to run and the time for each
> object. I'm not sure that is feasible, as you would have to pre-run the
> dump to get the number of objects, an estimate of the data quantity
> involved, and the effect of each on the other. I could see that estimate
> getting worse the bigger the data set (and hence the more you cared) got,
> because at some point the load on the machine would affect the output
> speed of the dump.
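The missing timestamps can be bolted on by piping the verbose output (which pg_dump writes to stderr) through a small timestamping loop. A sketch for a Unix-like shell, again assuming a database named "mydb"; on the OP's Windows box, PowerShell could do the same:

    # Prefix each verbose message with a timestamp; the dump itself goes to mydb.dump.
    pg_dump -v -f mydb.dump mydb 2>&1 | while IFS= read -r line; do
        echo "$(date '+%Y-%m-%d %H:%M:%S') $line"
    done > pg_dump.log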
By knowing the sizes of the tables, and how long it takes to dump the first
few tables, one can forecast how long it will take to back up the whole
database.
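A starting point for that forecast, assuming psql access; this lists tables by size so elapsed dump time can be mapped to data volume:

    # List user tables largest-first, to pair against per-table dump times.
    psql -d mydb -c "SELECT relname,
                            pg_size_pretty(pg_total_relation_size(relid)) AS total_size
                     FROM pg_stat_user_tables
                     ORDER BY pg_total_relation_size(relid) DESC;"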
--
Angular momentum makes the world go 'round.