> -----Original Message-----
> From: pgsql-performance-owner@xxxxxxxxxxxxxx
> [mailto:pgsql-performance-owner@xxxxxxxxxxxxxx] On behalf of dforums
> Sent: Monday, August 11, 2008 11:27
> To: Scott Marlowe; pgsql-performance@xxxxxxxxxxxxxx
> Subject: Re: [PERFORM] Distant mirroring
>
> Houlala
>
> I've got a headache!!!
>
> So please help...
>
> "Assuming they all happen from 9 to 5 and during business
> days only, that's about 86 transactions per second. Well
> within the realm of a single mirror set to keep up if you
> don't make your db work real fat."
>
> OK, I like that, but my reality is that an insert into a table
> holding 27 million rows takes 200 ms, so it takes between 2 and
> 10 minutes to process 3000 records and dispatch/aggregate them
> into other tables. And for now I receive 20000 records every
> 3 minutes.

You should try partitioning that table. It should considerably speed up
your inserts (a rough sketch is appended at the end of this message).

> So I need a solution that, first, supports more transactions,
> secondly secures the data, and finally lets me balance the load.
>
> Please give me any advice or suggestions that can help me.

Have you considered programming a solution on BerkeleyDB? It is an API
that provides a high-performance non-SQL database. With such a solution
you could achieve several thousand tps on much smaller hardware. You
could then use non-work hours to dump your data into Postgres for SQL
support, reporting and such; a sketch of that flow is also appended
below.

Regards,
Fernando
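
Here is a minimal, hypothetical sketch of the partitioning idea, roughly
following the inheritance-plus-trigger pattern from the PostgreSQL 8.x
documentation. The table name (records), its columns (id, created,
payload), the monthly ranges and the connection string are all invented
for illustration; adapt them to your real schema. The DDL is wrapped in
a small psycopg2 script only so it can be run end to end.

# Hypothetical sketch (not tested against your schema): month-based
# partitioning of the big insert target, using the inheritance + trigger
# pattern available on PostgreSQL 8.x.  Table, column and DSN names are
# placeholders.
import psycopg2

DDL = """
CREATE TABLE records (
    id       bigint    NOT NULL,
    created  timestamp NOT NULL,
    payload  text
);

-- One child table per month; the CHECK constraints let the planner skip
-- irrelevant partitions when constraint_exclusion is enabled.
CREATE TABLE records_2008_08 (
    CHECK (created >= DATE '2008-08-01' AND created < DATE '2008-09-01')
) INHERITS (records);

CREATE TABLE records_2008_09 (
    CHECK (created >= DATE '2008-09-01' AND created < DATE '2008-10-01')
) INHERITS (records);

-- Route inserts on the parent into the proper child, so the application
-- keeps inserting into "records" as before.
CREATE OR REPLACE FUNCTION records_insert_trigger() RETURNS trigger AS $$
BEGIN
    IF NEW.created >= DATE '2008-08-01' AND NEW.created < DATE '2008-09-01' THEN
        INSERT INTO records_2008_08 VALUES (NEW.*);
    ELSIF NEW.created >= DATE '2008-09-01' AND NEW.created < DATE '2008-10-01' THEN
        INSERT INTO records_2008_09 VALUES (NEW.*);
    ELSE
        RAISE EXCEPTION 'records_insert_trigger: date % out of range', NEW.created;
    END IF;
    RETURN NULL;  -- the row went into a child, skip the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER records_insert
    BEFORE INSERT ON records
    FOR EACH ROW EXECUTE PROCEDURE records_insert_trigger();
"""

conn = psycopg2.connect("dbname=mydb user=postgres")  # placeholder DSN
cur = conn.cursor()
cur.execute(DDL)
conn.commit()
cur.close()
conn.close()

The other half of the job would be moving the existing 27 million rows
into the children, indexing each child, and creating new monthly
children ahead of time (a cron job works well for that).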
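
A rough sketch of the Berkeley DB flow, with the same caveats about
invented names: it assumes the bsddb3 and psycopg2 Python packages, and
the file name, DSN and "records_staging" table are placeholders. The
point is only to show the shape of it: plain key/value puts during the
day, one bulk COPY into Postgres off-hours.

# A rough sketch of the Berkeley DB idea (assumptions: the bsddb3 and
# psycopg2 packages; the file name, DSN and "records_staging" table are
# placeholders).  Plain key/value puts during the day, one bulk COPY into
# Postgres off-hours.
import io

import psycopg2
from bsddb3 import btopen


def ingest(store, record_id, payload):
    # Fast path used during business hours: a single key/value put,
    # no SQL parsing, no index maintenance on the Postgres side.
    store[str(record_id).encode()] = payload.encode()


def nightly_dump(store, dsn="dbname=mydb user=postgres"):
    # Off-hours path: stream everything collected so far into Postgres
    # with COPY, which is far cheaper than row-by-row INSERTs.
    # (Real payloads would need tab/newline escaping for COPY's text format.)
    buf = io.StringIO()
    for key in store.keys():
        buf.write("%s\t%s\n" % (key.decode(), store[key].decode()))
    buf.seek(0)

    conn = psycopg2.connect(dsn)
    cur = conn.cursor()
    cur.copy_from(buf, "records_staging", columns=("id", "payload"))
    conn.commit()
    cur.close()
    conn.close()


if __name__ == "__main__":
    # One Berkeley DB file per day keeps the hand-over simple: dump it,
    # then archive or delete it.
    store = btopen("ingest-2008-08-11.db", "c")   # 'c' = create if missing
    ingest(store, 1, "example payload")
    nightly_dump(store)
    store.close()

The trade-off is that reporting and anything else that needs SQL only
sees the data after the nightly load into Postgres.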