Performance issue. Can I restrict memory usage at session level in the script where I run vacuumdb? (A sketch of one approach follows the quoted thread below.)

> On Apr 27, 2018, at 2:06 AM, Laurenz Albe <laurenz.albe@xxxxxxxxxxx> wrote:
>
> Sushil Shirodkar wrote:
>
>>>> On Thu, Apr 26, 2018 at 10:31 AM, Sushil Shirodkar <sushilps@xxxxxxxxxxx> wrote:
>>>> Running "vacuumdb -a -z -v" from cron on one of our test environments, I
>>>> noticed that free memory on the server drops from 3.4GB to 150MB. Once the
>>>> process is over, the memory is not released. Is this normal, or does something
>>>> need to be changed? Other processes also start running slowly afterwards
>>>> because of the low memory.
>>>
>>> How are you measuring free memory? Memory might be listed under cached/buffers
>>> instead of free but still be available. Although that wouldn't explain the
>>> other processes being slow.
>>
>> I have a small script that runs "free -h" in a loop while "vacuumdb" is running.
>> Once I clear the memory with "sync" or bounce PostgreSQL, everything runs
>> normally afterwards.
>
> Then I would say everything is fine.
> It is normal for a Linux system to have almost no free memory; the memory is used
> for the file system cache.
>
> Do you experience any problems, like reduced performance or high I/O?
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
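
Regarding the question at the top: the memory that VACUUM itself is allowed to use is bounded by maintenance_work_mem, and that can be set per session. A minimal sketch for the cron script, assuming vacuumdb honors the libpq PGOPTIONS environment variable (it connects through libpq) and using an arbitrary 128MB cap:

    # Cap VACUUM's own working memory for just this run.
    # 128MB is only an example value, not a recommendation.
    PGOPTIONS="-c maintenance_work_mem=128MB" vacuumdb -a -z -v

Keep in mind that this only limits VACUUM's working memory. The "missing" memory that free reports is mostly the Linux file system cache, which the kernel reclaims automatically when other processes need it, so lowering maintenance_work_mem will not change what free shows after the run.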