Hi Balaji Jayaraman,
Please find our responses inline.
On Fri, Nov 3, 2017 at 8:11 AM, bala jayaram <balajayaram22@xxxxxxxxx> wrote:
Thank you for the response. I ran vacuumdb with analyze in stages, and it took 15 hours to complete. I also noticed that autovacuum was running on one of the huge databases in parallel with vacuumdb. Is that the reason it ran for 15 hours? We cannot afford a 15-hour outage. What is the best way to address this?
Yes. Though the manual vacuum makes progress, autovacuum takes precedence once it kicks off, leaving the manual vacuum behind.
To avoid this, temporarily disable autovacuum until the manual vacuum completes, then re-enable it.
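For example, a rough sketch on the upgraded 9.4 cluster (ALTER SYSTEM is available from 9.4 onwards; this assumes superuser access, on 9.3 you would edit postgresql.conf and reload instead):

psql -c "ALTER SYSTEM SET autovacuum = off;"   # disable autovacuum cluster-wide
psql -c "SELECT pg_reload_conf();"             # reload config, no restart needed
vacuumdb --all --analyze-in-stages             # run the manual vacuum/analyze
psql -c "ALTER SYSTEM SET autovacuum = on;"    # re-enable once finished
psql -c "SELECT pg_reload_conf();"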
And if we do a pg_dump from the 9.3 native Linux Postgres and restore into AWS RDS 9.4, is an ANALYZE or vacuumdb required? We observed that pg_dump and restore with the -j parallel option also took more than 6 hours in total.
pg_dump & pg_restore are logical operations, so they do not require a VACUUM/ANALYZE afterwards; however, downtime is required for the dump and restore itself.
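As a rough illustration of the parallel form (the directory format -Fd is required for a parallel dump; the database name, RDS endpoint, user, and job count below are all hypothetical):

pg_dump -Fd -j 4 -f /backup/mydb.dump mydb
pg_restore -j 4 -h mydb.abc123.us-east-1.rds.amazonaws.com -U admin -d mydb /backup/mydb.dump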
What is the quickest way to move from the 9.3 Linux-based instance to 9.4 on RDS? Please suggest.
Given the large DB size, the best way is to go with the Slony option, which also doesn't need a VACUUM/ANALYZE and has the benefit of lower downtime.
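The shape of such a migration, as a heavily simplified slonik sketch (the cluster name, hosts, user, and the single table are all hypothetical; a real setup replicates every table and sequence):

slonik <<'EOF'
cluster name = migration;
node 1 admin conninfo = 'dbname=mydb host=old-93-host user=slony';
node 2 admin conninfo = 'dbname=mydb host=new-94-host user=slony';
init cluster (id = 1, comment = 'source 9.3');
store node (id = 2, comment = 'target 9.4', event node = 1);
store path (server = 1, client = 2, conninfo = 'dbname=mydb host=old-93-host user=slony');
store path (server = 2, client = 1, conninfo = 'dbname=mydb host=new-94-host user=slony');
create set (id = 1, origin = 1, comment = 'tables to copy');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.accounts');
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF

Once the subscription has caught up, the cutover is only a brief window to stop writes and repoint the application at the new node.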
Thanks,
Balaji Jayaraman

On Nov 2, 2017 5:27 PM, "Vasilis Ventirozos" <v.ventirozos@xxxxxxxxx> wrote:
> On 2 Nov 2017, at 23:03, bala jayaram <balajayaram22@xxxxxxxxx> wrote:
>
> Hi Team,
>
>
> We tried pg_upgrade in production and it works well. But running vacuumdb resulted in a huge CPU spike and the system halted. Is there a way to speed up or parallelize the vacuum for faster recovery after pg_upgrade?
>
> Our database size is around 500GB, spread across multiple databases with a huge number of records. What is the minimum vacuuming required after pg_upgrade? This is for a migration from 9.3 to 9.4.
All you need to do right after the upgrade is gather fresh statistics by running ANALYZE, or by doing something like vacuumdb -a -v -z.
That should take a while, but it shouldn't "halt" anything. I believe 9.4 doesn't have -j in vacuumdb (it was added in 9.5), so you can script
something that gets all tables, splits them up, and runs each part in X parallel psql sessions; see the sketch below.
When you are done with the statistics, scheduling a vacuum would be a good idea. This can be done during any convenient
time, or you can again split the work using a script.
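A rough sketch of that approach (the database name "mydb" and job count are hypothetical; connection settings are assumed to come from the usual PG* environment variables, and xargs here is GNU xargs):

#!/bin/bash
# List every user table, then ANALYZE them in parallel psql sessions.
DB=mydb
JOBS=4

psql -At -d "$DB" -c \
  "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)
     FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema');" |
xargs -d '\n' -P"$JOBS" -I{} psql -d "$DB" -c 'ANALYZE {};'

The same loop works for the later vacuum pass by swapping ANALYZE for VACUUM.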
Regards,
Vasilis Ventirozos
--
Sent via pgsql-admin mailing list (pgsql-admin@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin