Hello:
Sorry to disturb you again.
Some of my friends told me about cgroups, so I tried that first.
I found that cgroups works for tasks such as wget,
but it does not seem to work for my postgres process.
[root@cent6 Desktop]# cat /etc/cgconfig.conf
#
# Copyright IBM Corporation. 2007
#
# Authors: Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
# This program is free software; you can redistribute it and/or modify it
# under the terms of version 2.1 of the GNU Lesser General Public License
# as published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# See man cgconfig.conf for further details.
#
# By default, mount all controllers to /cgroup/<controller>
mount {
    cpuset  = /cgroup/cpuset;
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    blkio   = /cgroup/blkio;
}
group test1 {
    perm {
        task {
            uid = postgres;
            gid = postgres;
        }
        admin {
            uid = root;
            gid = root;
        }
    }
    memory {
        memory.limit_in_bytes = 500M;
    }
}
[root@cent6 Desktop]#
[root@cent6 Desktop]# service cgconfig status
Running
[root@cent6 Desktop]#
When I start postgres and run the SQL statement above, it still consumes too much memory, as if the cgroup limit is not being applied.
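
(For context, a minimal sketch of how a process normally ends up inside
such a group: cgconfig.conf only defines the group and its limit, so
postgres still has to be placed into it by a rule or by hand. The data
directory path and the $POSTMASTER_PID placeholder below are
assumptions for illustration, not taken from this system.)

# Option 1: rule-based, via /etc/cgrules.conf and the cgred service.
# Add a line like the following to /etc/cgrules.conf, then restart cgred:
#   postgres    memory    test1
service cgred restart

# Option 2: start the server inside the group with cgexec:
cgexec -g memory:test1 su - postgres -c "pg_ctl -D /var/lib/pgsql/data start"

# Option 3: move an already-running process into the group by PID
# (the tasks file path follows the mount section above):
echo $POSTMASTER_PID > /cgroup/memory/test1/tasks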
Best Regards
2013/9/3 高健 <luckyjackgao@xxxxxxxxx>

Thanks, I'll consider it carefully.

Best Regards

2013/9/3 Jeff Janes <jeff.janes@xxxxxxxxx>

On Sun, Sep 1, 2013 at 6:25 PM, 高健 <luckyjackgao@xxxxxxxxx> wrote:
>>To spare memory, you would want to use something like:
>
>>insert into test01 select generate_series,
>>repeat(chr(int4(random()*26)+65),1024) from
>>generate_series(1,2457600);
>
> Thanks a lot!
>
> What I am worrying about is that:
> If data grows rapidly, maybe our customer will use too much memory,
The size of the data has little to do with it. Take your example as
an example. The database could have been nearly empty before you
started running that query. A hostile or adventurous user can craft
queries that will exhaust the server's memory without ever needing any
particular amount of data in data_directory, except maybe in the temp
tablespace.
So it is a matter of what kind of users you have, not how much data
you anticipate having on disk.
The parts of PostgreSQL that might blow up memory based on ordinary
disk-based tables are pretty well protected by shared_buffers,
temp_buffers, work_mem, maintenance_work_mem, etc. already. It is the
things that don't directly map to data already on disk which are
probably more vulnerable.
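
(For concreteness, a sketch of where those knobs live in
postgresql.conf; the values here are illustrative only, not
recommendations:)

shared_buffers = 512MB          # shared data-page cache
temp_buffers = 8MB              # per-session temporary-table buffers
work_mem = 4MB                  # per sort/hash operation, per backend
maintenance_work_mem = 64MB     # VACUUM, CREATE INDEX, etc.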
> Is
> ulimit command a good idea for PG?

I've used ulimit -v on a test server (which was intentionally used to
test things to limits of destruction), and was happy with the results.
It seemed like it would error out the offending process, or just the
offending statement, in a graceful way; rather than having random
processes other than the culprit be brutally killed by OOM, or having
the machine just swap itself into uselessness. I'd be reluctant to
use it on production just on spec that something bad *might* happen
without it, but if I started experiencing problems caused by a single
rogue process using outrageous amounts of memory, that would be one of
my first stops.
Experimentally, shared memory does count against the -v limit, and the
limit has to be set rather higher than shared_buffers, or else your
database won't even start.
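
(A sketch of how that might look when starting the server by hand, run
as the postgres user; the 4 GB figure and the data directory path are
assumptions, and per the paragraph above the cap must comfortably
exceed shared_buffers:)

ulimit -v 4194304                      # address-space cap in kB (~4 GB)
pg_ctl -D /var/lib/pgsql/data start    # the server inherits the limit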
Cheers,
Jeff