Re: Plz Heeeelp! performance settings




The performance problem is really only with the insertions, and even more with the processing for the aggregation.

Processing the 3000 entries and inserting or updating the tables takes 10 minutes.

As I told you, I inject 14000 queries every 2 minutes, and it takes 10 minutes to process 3000 of those queries.

As you can easily understand, it's a serious bottleneck.

I'm not doing the processing in one go, because I can't; it is all handled by a stored procedure.
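To give an idea, each buffered entry is pushed with one call to the stored procedure, roughly like this (the values are just the ones from the example plan shown further down, so this is only an illustration):

-- one call per buffered entry, repeated several thousand times per batch
SELECT "insertUpdateTracks"(137, 2605, 852,
       ('2008-08-06 19:28:54'::text)::date,
       3, 'dailydisplay', 2, NULL);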

> That wasn't a yes/no question. Please choose one of:
> Are you updating 6Go per week? mostly updates
> Are you adding 6Go per week? a smaller part is new insertions

Which one it is depends on whether the data are already present in the database.


>
> OK. I assume you're happy with the plans you are getting on these
> queries, since you've not provided any information about them.

The plan seems OK, as it does use the indexes.
Here is the plan:

explain analyse SELECT "insertUpdateTracks"(137,2605, 852, ('2008-08-06 19:28:54'::text)::date,3,'dailydisplay',2,NULL);
INFO:  method 1
                                     QUERY PLAN
------------------------------------------------------------------------------------
Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.151..1.151 rows=1 loops=1)
 Total runtime: 1.160 ms
(2 rows)



As you can see, this is the processing runtime for one update on this table.

Multiplying this by 10000, it is too long.
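Note that this plan only shows the call to the stored procedure itself; EXPLAIN does not show the statements executed inside "insertUpdateTracks". To see the real plan of the update, I would have to run EXPLAIN directly on the inner statement, something like this (the table and column names here are only placeholders, not the real schema):

explain analyse
UPDATE tracks
   SET display_count = display_count + 1
 WHERE site_id = 137
   AND page_id = 2605
   AND track_date = '2008-08-06'::date;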

regards

david


Richard Huxton wrote:
dforums wrote:
vmstat is giving:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  2   1540  47388  41684 7578976    0    0   131   259    0    1  9  3 82  7

This system is practically idle. Either you're not measuring it at a useful time, or there isn't a performance problem.

 >> But
 >> if I use a second machine to replicate the database, I escape this
 >> problem isn't it?
 > You reduce the chance of a single failure causing disaster.
This reply is not clear to me. It scares me....

If server A fails, you still have server B. If server A fails so that replication stops working and you don't notice, server B won't help any more.

 > What do you mean by "take 6Go per week"? You update/delete that much
 > data? It's growing by that amount each week?
YES

That wasn't a yes/no question. Please choose one of:
Are you updating 6Go per week?
Are you adding 6Go per week?

 > I'm not sure what "15000 request per 2 minutes and empty it into 10 min"
 > means.
I insert 15000 rows every 2 minutes and delete 15000 every 10 minutes in those tables.
 >
 > Do you have 7500 requests per minute?
It should be about that, but in fact I'm not processing the data in real time; I buffer the data and push them into the database every 2 minutes.
 > Are these updates?
During the delete, the data are aggregated into other tables, which results in updates.
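Roughly, that 10-minute step does something like the following; the table and column names (raw_hits, daily_stats, batch_id) are invented here for illustration only, not the real schema:

BEGIN;
-- roll the finished batch up into the aggregate table
UPDATE daily_stats d
   SET hits = d.hits + s.hits
  FROM (SELECT site_id, page_id, count(*) AS hits
          FROM raw_hits
         WHERE batch_id = 42
         GROUP BY site_id, page_id) s
 WHERE d.site_id = s.site_id
   AND d.page_id = s.page_id;
-- then remove the detail rows that were just aggregated
DELETE FROM raw_hits WHERE batch_id = 42;
COMMIT;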

OK, so every 2 minutes you run one big query that adds 15000 rows.
Every 10 minutes you run one big query that deletes 15000 rows.

 > To the "temporary storage"?

 > What is this "temporary storage" - an ordinary table?
Yes. I tried to use temporary tables, but I have never been able to access those tables across 2 different sessions/connections; it seems that is how PostgreSQL works, or a misunderstanding on my part.

That's correct - temporary tables are private to a backend (connection).
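For example (the table name here is just made up):

-- in connection 1
CREATE TEMPORARY TABLE staging_hits (id int);
-- in connection 2, the same table is simply not visible
SELECT * FROM staging_hits;
ERROR:  relation "staging_hits" does not exist

An ordinary (permanent) table used as a staging area is visible from every connection, which is why I asked whether that is what you are using.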

 >  > I'm doing some updates or selects on tables containing more than 20
 >  > million rows.
 >
 > Again, I'm not sure what this means.

To aggregate the data, I have to check for the presence of other information stored in 2 tables which contain 24 million rows.

OK. I assume you're happy with the plans you are getting on these queries, since you've not provided any information about them.

 > Oh - *important* - which version of PostgreSQL are you running?
8.1.11
 > Is an upgrade practical?
We are working on an upgrade to 8.3.3, but we are not yet ready for such a migration.

OK

 > Looking at your postgresql.conf settings:
 >
 >   max_connections = 624
 >
 > That's an odd number.
We could decrease this number now; it is not really needed. We could decrease it to 350.

I don't believe you've got 350 active connections either. It will be easier to help if you can provide some useful information.

 >   effective_cache_size = 625000
 >
 > That's around 5GB - is that roughly the amount of memory used for
 > caching (what does free -m say for buffers/cache)?
            total       used       free     shared    buffers     cached
Mem:          7984       7828        156          0         38       7349
-/+ buffers/cache:        440       7544
Swap:          509          1        508

Not far off - free is showing 7349MB cached. You're not running 350 clients there though - you're only using 440MB of RAM.
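(For reference, effective_cache_size is measured in 8kB blocks, so 625000 × 8 kB ≈ 5 GB; that is where the "around 5GB" figure above comes from.)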


I don't see anything to show a performance problem from these emails.


--
<http://www.1st-affiliation.fr>

David Bigand
Président Directeur Général
51 chemin des moulins
73000 CHAMBERY - FRANCE

Web : http://www.1st-affiliation.fr
Email : david@xxxxxxxxxxxxxxxxxxx
Tel. : +33 479 696 685
Mob. : +33 666 583 836
Skype : firstaffiliation_support


