Hi Artur,

I am the owner of a database about war, war crimes and terrorism with
more than 1.6 TB, and I am already fscked...

On 2009-06-15 14:00:05, Artur wrote:
> Hi!
>
> We are thinking of creating a stocks-related search engine.
> It is an experimental project, just for fun.
>
> The problem is that we expect to have more than 250 GB of data every month.

I have only 500 MB per month...

> This data would be in two tables, about 50,000,000 new rows every month.

...and around 123,000 new rows per month here.

> We want to have access to all the data, mostly for generating
> user-requested reports (aggregating).
> We would have about 10 TB of data in three years.
>
> Do you think it is possible to build this with PostgreSQL, and do you
> have any idea how to start? :)

You would have to use a physical cluster, like I do.

Searches in a database of more than 1 TB run into performance issues,
even when using tablespaces and table partitioning...

I have now split my database into chunks of 250 GB using a cluster of
1U servers from Sun Microsystems. Currently I run 8 servers behind one
proxy. Each server cost me 2,300 Euro.

Note: On Friday I have a meeting with a Sun partner in Germany about a
bigger project, where I have to increase the performance of my database
servers. I have to plan for 150,000 customers.

Thanks, greetings and a nice day/evening,
    Michelle Konzack
    Systemadministrator
    Tamay Dogan Network
    Debian GNU/Linux Consultant

--
Linux-User #280138 with the Linux Counter, http://counter.li.org/

##################### Debian GNU/Linux Consultant #####################
<http://www.tamay-dogan.net/>           Michelle Konzack
<http://www.can4linux.org/>             c/o Vertriebsp. KabelBW
<http://www.flexray4linux.org/>         Blumenstrasse 2
Jabber linux4michelle@xxxxxxxxxxxxx     77694 Kehl/Germany
IRC #Debian (irc.icq.com)               Tel. DE: +49 177 9351947
ICQ #328449886                          Tel. FR: +33 6 61925193
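For context, the "table partitioning" mentioned above is, in PostgreSQL of
that era, done with table inheritance plus CHECK constraints so that
constraint exclusion can skip partitions that a report does not touch.
Below is a minimal sketch of monthly range partitioning; the table name
"ticks", its columns and the two example months are purely illustrative
and not from either poster's actual schema:

  -- Parent table; the application only ever references "ticks".
  CREATE TABLE ticks (
      id        bigserial,
      symbol    text        NOT NULL,
      traded_at timestamptz NOT NULL,
      price     numeric     NOT NULL
  );

  -- One child table per month, each with a CHECK constraint describing
  -- its date range; older months can be placed on a separate tablespace.
  CREATE TABLE ticks_2009_06 (
      CHECK (traded_at >= '2009-06-01' AND traded_at < '2009-07-01')
  ) INHERITS (ticks);

  CREATE TABLE ticks_2009_07 (
      CHECK (traded_at >= '2009-07-01' AND traded_at < '2009-08-01')
  ) INHERITS (ticks);

  -- Index each partition separately.
  CREATE INDEX ticks_2009_06_traded_at ON ticks_2009_06 (traded_at);
  CREATE INDEX ticks_2009_07_traded_at ON ticks_2009_07 (traded_at);

  -- Route inserts on the parent table to the matching child.
  CREATE OR REPLACE FUNCTION ticks_insert_trigger() RETURNS trigger AS $$
  BEGIN
      IF NEW.traded_at >= '2009-06-01' AND NEW.traded_at < '2009-07-01' THEN
          INSERT INTO ticks_2009_06 VALUES (NEW.*);
      ELSIF NEW.traded_at >= '2009-07-01' AND NEW.traded_at < '2009-08-01' THEN
          INSERT INTO ticks_2009_07 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'ticks: no partition for %', NEW.traded_at;
      END IF;
      RETURN NULL;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER ticks_partition_insert
      BEFORE INSERT ON ticks
      FOR EACH ROW EXECUTE PROCEDURE ticks_insert_trigger();

  -- Lets queries against the parent prune partitions via the CHECKs.
  SET constraint_exclusion = on;

With this layout a monthly aggregation report only scans the child tables
whose CHECK ranges overlap the query; it does not address the multi-server
split behind a proxy described above, which has to be handled outside the
database.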