Richard Huxton wrote:
> dforum wrote:
>> Tx for your reply.
>>
>> You mean that RAID uses the fsync method to keep a copy of the data?
>
> No, Merlin means PostgreSQL will issue a sync to force WAL to actual
> disk.
>
>> So you suggest I deactivate fsync to increase performance?
>
> He means you might have to if you can't afford new hardware. Is disk
> activity the problem? Have you looked at the output of "vmstat" to check?
vmstat gives:

procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
 r  b   swpd   free   buff    cache   si   so    bi    bo   in   cs  us sy id wa
 0  2   1540  47388  41684  7578976    0    0   131   259    0    1   9  3 82  7
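(That is from a single run; as far as I know the first vmstat line shows
averages since boot, so I can also sample with "vmstat 5" while the server
is under load if that's more useful.)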
>
>> Deactivating fsync, my second disk will not be up to date,
>
> No - the RAID stuff is happening in the operating-system.
>
>> so if the
>> machine crashes, I won't be able to get the server working quickly?
>
> Not "quickly", perhaps not "at all".
Oops.
>
>> But
>> if I use a second machine to replicate the database, I escape this
>> problem, don't I?
> You reduce the chance of a single failure causing disaster.
This reply isn't clear to me. It scares me...
>
>> If I understand right, could you tell me how to deactivate fsync,
>> please?
>
> There's an "fsync = on" setting in your postgresql.conf, but don't
> change it yet.
OK
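For now I only checked the current value from psql:

  SHOW fsync;   -- returns "on" at the moment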
>
>> I have a database of 38Go and take 6Go per week.
>
> What do you mean by "take 6Go per week"? You update/delete that much
> data? It's growing by that amount each week?
Yes, it's growing by that amount each week.
>
>> I have a lot of updates and inserts, especially in 8 tables. 2 tables are
>> used for temporary storage, so I write something like 15000 request per
>> 2 minutes and empty it into 10 min.
>
> I'm not sure what "15000 request per 2 minutes and empty it into 10 min"
> means.
I insert 15000 rows into those tables every 2 minutes and delete 15000
every 10 minutes.
>
> Do you have 7500 requests per minute?
It should be about that, but in fact I'm not treating the data in real
time: I buffer the data and push it into the database every 2 minutes.
> Are these updates?
During the delete, the data are aggregated into other tables, which
makes the updates.
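Schematically the cycle looks like this (table and column names are
invented for the example):

  -- every 2 minutes: flush the buffered rows (~15000) into the staging
  -- table; COPY is much cheaper than 15000 individual INSERTs on 8.1
  COPY staging (ts, item_key, item_value) FROM STDIN;

  -- every 10 minutes: aggregate into the permanent tables (these are
  -- the updates), then empty the staging table
  UPDATE totals
     SET total = totals.total + s.sum_value
    FROM (SELECT item_key, sum(item_value) AS sum_value
            FROM staging
           GROUP BY item_key) s
   WHERE totals.item_key = s.item_key;

  DELETE FROM staging;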
> To the "temporary storage"?
> What is this "temporary storage" - an ordinary table?
Yes, an ordinary table. I tried to use temporary tables, but I was never
able to access them from two different sessions/connections; it seems
that is how PostgreSQL works, or a misunderstanding on my part.
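For example, this is what I observed (schematically):

  -- session 1:
  CREATE TEMPORARY TABLE staging_tmp (id int, payload text);

  -- session 2, a different connection:
  SELECT count(*) FROM staging_tmp;
  -- ERROR:  relation "staging_tmp" does not exist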
>
>> I'm making some update or select on tables including more than 20
>> millions of entrance.
>
> Again, I'm not sure what this means.
To aggregate the data, I have to check for the presence of other
information stored in 2 tables which contain 24 million rows.
>
>
> Oh - *important* - which version of PostgreSQL are you running?
8.1.11
> Is an upgrade practical?
We are working on upgrading to 8.3.3, but we are not yet ready for such
a migration.
>
>
> Looking at your postgresql.conf settings:
>
> max_connections = 624
>
> That's an odd number.
We could decrease this number now; it's not all needed at the moment.
We could decrease it to 350.
> Do you usually have that many connections?
> What are they doing? They can't all be active, the machine you've got
> wouldn't cope.
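To answer that, I suppose I can look at pg_stat_activity (assuming
stats_command_string is enabled on our 8.1):

  SELECT datname, usename, query_start, current_query
    FROM pg_stat_activity
   WHERE current_query <> '<IDLE>';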
>
> shared_buffers = 250000
> work_mem = 9000
> temp_buffers = 500
>
> These three are important. The shared_buffers are workspace shared
> between all backends, and you've allocated about 2GB. You've also set
> work_mem=9MB, which is how much each backend can use for a single sort.
> That means it can use double or triple that in a complex query.
(I know about that.)
> If you're using temporary tables, then you'll want to make sure the
> temp_buffers setting is correct.
I need help with that; I don't know how to size it.
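From what I read in the docs, temp_buffers is per session, is expressed
in 8kB pages on 8.1, and can only be raised before the session first
touches a temporary table, for example:

  -- hypothetical sizing: 4000 x 8kB pages = ~32MB for this session;
  -- must run before the first use of a temporary table in the session
  SET temp_buffers = 4000;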
>
> I can't say whether these figures are good or bad without knowing how
> the database is being used.
>
> effective_cache_size = 625000
>
> That's around 5GB - is that roughly the amount of memory used for
> caching (what does free -m say for buffers/cache)?
             total   used   free  shared  buffers  cached
Mem:          7984   7828    156       0       38    7349
-/+ buffers/cache:    440   7544
Swap:          509      1    508
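If I compute correctly, the OS is caching about 7349MB there, while
effective_cache_size = 625000 x 8kB is only about 4.9GB, so the setting
looks conservative; 7349MB would correspond to roughly 7349 * 1024 / 8
= 940000 pages.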
>
> max_prepared_transactions = 200
>
> Do you use a lot of prepared transactions in two-phase commit?
> I'm guessing that you don't.
I don't
>
>> I'm sure that it could be more optimised. I don't know anything about
>> WAL, autovacuum, fsm, bgwriter, kernel process, geqo or planner cost
>> settings.
>
> If you run a "vacuum verbose" it will recommend fsm settings at the end
> of its output. I think you probably need to make your autovacuum more
> aggressive, but that's something you'll be able to tell by monitoring
> your database.
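I will run it. If I understand the docs, on 8.1 the end of the output
reports the fsm usage, something like (values invented):

  VACUUM VERBOSE;
  -- ... per-table output ...
  -- INFO:  free space map contains 114567 pages in 82 relations
  -- DETAIL:  A total of 120000 page slots are in use (including overhead).
  --          180000 page slots are required to track all free space.
  --          Current limits are:  150000 page slots, 1000 relations.

and if the "required" number exceeds the max_fsm_pages limit, that is
the value to raise.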
>
> It's quite likely that Merlin's right, and you need better hardware to
> cope with the number of updates you're making - that's something where
> you need fast disks. However, he's just guessing because you've not told
> us enough to tell where the problem really lies.
>
I hope this new information gives you enough to help me.
Regards
david
--
<http://www.1st-affiliation.fr>
David Bigand
Président Directeur Général
51 chemin des moulins
73000 CHAMBERY - FRANCE
Web : http://www.1st-affiliation.fr
Email : david@xxxxxxxxxxxxxxxxxxx
Tel. : +33 479 696 685
Mob. : +33 666 583 836
Skype : firstaffiliation_support