On 10/9/21 1:20 PM, Wells Oliver wrote:
Thanks all. Reducing maintenance_work_mem to 1GB while keeping shared_buffers at 4GB did allow the restore to complete. It took ~770 minutes with 16 processes under that configuration, but only ~500 minutes with maintenance_work_mem at 2GB, shared_buffers at 4GB, and 8 processes. Of course, with maintenance_work_mem at 2GB and 16 processes, we ran out of memory and kaboom.
If anyone has any more parameter values I should try to improve on that restore time, I'd love to hear them.
Thanks for the help here.
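For reference, the fastest combination reported above (2GB maintenance_work_mem, 8 parallel jobs) can be sketched roughly like this. The dump filename and database name are placeholders; maintenance_work_mem can be set per session via PGOPTIONS, while shared_buffers requires a server restart (or, on RDS, a parameter-group change), so it is not shown on the command line.

```shell
# Sketch only: mydb and mydb.dump are placeholder names.
# -j 8 runs eight parallel restore workers; each index build among
# them can use up to the session's maintenance_work_mem.
PGOPTIONS="-c maintenance_work_mem=2GB" \
  pg_restore -j 8 -d mydb mydb.dump
```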
On Fri, Oct 8, 2021 at 3:48 PM Alvaro Herrera <alvherre@xxxxxxxxxxxxxx> wrote:
On 2021-Oct-08, Alvaro Herrera wrote:
> On 2021-Oct-08, Wells Oliver wrote:
>
> > Dug out some more logging:
>
> > 2021-10-08 20:35:08 UTC::@:[12682]:LOG: server process (PID 3970) was terminated by signal 9: Killed
> > 2021-10-08 20:35:08 UTC::@:[12682]:DETAIL: Failed process was running: CREATE INDEX ...
>
> So what's happening here is that the instance is running out of RAM
> while creating some index, and the kernel is killing the process. I
> would probably blame the combination of shared_buffers=4GB with
> maintenance_work_mem=2GB, together with the instance's total RAM.
Also, maybe RDS could be smarter about this situation.
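A back-of-envelope way to see why 16 processes at 2GB blew up while 1GB survived: shared_buffers is allocated once, but each parallel worker building an index can use up to maintenance_work_mem of its own. This is only a rough upper bound (real usage varies), but the totals track the observed behavior:

```shell
# Upper-bound memory estimate, in GB: shared_buffers + jobs * maintenance_work_mem
shared_buffers=4
jobs=16

# 16 workers at 2GB each, plus 4GB of shared_buffers:
echo "$(( shared_buffers + jobs * 2 ))GB"   # 36GB
# Dropping maintenance_work_mem to 1GB roughly halves the per-worker term:
echo "$(( shared_buffers + jobs * 1 ))GB"   # 20GB
```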
--
Álvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/
"The first law of live demonstrations is: don't try to use the system.
Write a script that touches nothing, so as not to cause damage." (Jakob Nielsen)
--
Wells Oliver
wells.oliver@xxxxxxxxx
--
Angular momentum makes the world go 'round.