dforums wrote:
vmstat gives:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff   cache   si   so    bi    bo   in   cs us sy id wa
 0  2   1540  47388  41684 7578976    0    0   131   259    0    1  9  3 82  7
This system is practically idle. Either you're not measuring it at a
useful time, or there isn't a performance problem.
>> But if I use a second machine to replicate the database, I escape
>> this problem, don't I?
> You reduce the chance of a single failure causing disaster.
This reply isn't clear to me. It scares me...
If server A fails, you still have server B. If server A fails so that
replication stops working and you don't notice, server B won't help any
more.
> What do you mean by "take 6GB per week"? You update/delete that much
> data? It's growing by that amount each week?
YES
That wasn't a yes/no question. Please choose one of:
Are you updating 6GB per week?
Are you adding 6GB per week?
> I'm not sure what "15000 request per 2 minutes and empty it into 10 min"
> means.
I insert 15000 rows every 2 minutes and delete 15000 rows every 10
minutes in those tables.
>
> Do you have 7500 requests per minute?
It should be about that, but in fact I'm not processing the data in
real time; I buffer the data and push it into the database every 2 min.
> Are these updates?
During the delete, the data are aggregated into other tables, which
causes updates.
OK, so every 2 minutes you run one big query that adds 15000 rows.
Every 10 minutes you run one big query that deletes 15000 rows.
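A minimal sketch of that load pattern, with each round of writes batched
into a single transaction. It uses Python's stdlib sqlite3 module so it
is self-contained and runnable; the table name, column names, and purge
rule are hypothetical, and with PostgreSQL the equivalent driver calls
(for example psycopg2's executemany) would be analogous.

```python
# Sketch (assumed schema) of the batched write pattern described above:
# buffer rows, insert them in one batch every 2 minutes, delete the
# processed ones in one batch every 10 minutes.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurements (id INTEGER, payload TEXT)")

def flush_buffer(buf):
    """Every 2 minutes: push the buffered rows in one batch."""
    with conn:  # one transaction per batch, not one commit per row
        conn.executemany(
            "INSERT INTO measurements (id, payload) VALUES (?, ?)", buf)

def purge_old(max_id):
    """Every 10 minutes: delete the processed rows in one statement."""
    with conn:
        conn.execute("DELETE FROM measurements WHERE id <= ?", (max_id,))

flush_buffer([(i, "data") for i in range(15000)])
print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])
purge_old(14999)
print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])
```

One transaction per batch keeps the per-row overhead far lower than
15000 separate commits would be.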
> To the "temporary storage"?
> What is this "temporary storage" - an ordinary table?
Yes. I tried to use temporary tables, but I was never able to access
those tables from 2 different sessions/connections; it seems that is
how PostgreSQL works, or a misunderstanding on my part.
That's correct - temporary tables are private to a backend (connection).
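That behaviour can be demonstrated with a short, self-contained sketch.
It uses Python's stdlib sqlite3 module, whose temporary tables are
likewise private to the creating connection (the file name is
hypothetical); PostgreSQL shows the same thing per backend, which is why
a shared "temporary storage" has to be an ordinary table.

```python
# Demonstration that a TEMP table is visible only to the connection
# that created it. sqlite3 (stdlib) is used so this runs anywhere;
# PostgreSQL temporary tables behave the same way per backend.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")  # hypothetical file
conn_a = sqlite3.connect(path)
conn_b = sqlite3.connect(path)

conn_a.execute("CREATE TEMP TABLE scratch (v INTEGER)")
conn_a.execute("INSERT INTO scratch VALUES (42)")

# The creating connection sees the table...
print(conn_a.execute("SELECT v FROM scratch").fetchone())  # (42,)

# ...but a second connection to the same database does not.
try:
    conn_b.execute("SELECT v FROM scratch")
except sqlite3.OperationalError as exc:
    print("second connection:", exc)
```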
> > I'm running updates or selects on tables containing more than 20
> > million rows.
>
> Again, I'm not sure what this means.
To aggregate the data, I have to check for the presence of other
information stored in 2 tables which contain 24 million rows.
OK. I assume you're happy with the plans you are getting on these
queries, since you've not provided any information about them.
> Oh - *important* - which version of PostgreSQL are you running?
8.1.11
> Is an upgrade practical?
We are working on upgrading to 8.3.3, but we are not yet ready for such
a migration.
OK
> Looking at your postgresql.conf settings:
>
> max_connections = 624
>
> That's an odd number.
We could decrease this number; it's not that useful now. We could
decrease it to 350.
I don't believe you've got 350 active connections either. It will be
easier to help if you can provide some useful information.
> effective_cache_size = 625000
>
> That's around 5GB - is that roughly the amount of memory used for
> caching (what does free -m say for buffers/cache)?
             total       used       free     shared    buffers     cached
Mem:          7984       7828        156          0         38       7349
-/+ buffers/cache:        440       7544
Swap:          509          1        508
Not far off - free is showing 7349MB cached. You're not running 350
clients there though - you're only using 440MB of RAM.
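For reference, effective_cache_size in 8.1 counts 8kB disk pages, so the
arithmetic behind "around 5GB" and "not far off" looks like this
(figures taken from the settings and free output quoted above):

```
effective_cache_size = 625000   # 625000 x 8kB = ~4.8GB configured
# free -m shows 7349MB cached, i.e. roughly 940000 pages actually
# available, so the setting could arguably be raised toward that figure.
```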
I don't see anything to show a performance problem from these emails.
--
Richard Huxton
Archonet Ltd