On 24/02/2011 10:59 AM, Kalai R wrote:
> We are going to design a database for a large company with multiple
> branches. I am using PostgreSQL.
> For example, suppose I create a single database for all branches. After
> 10 years, the database size is too large.
What is "too large?"
If you keep PostgreSQL up to date, upgrade your hardware periodically, and
properly maintain and index the database, it won't necessarily become
"too" large at all.
If it does, your best option is usually going to be partitioning tables
into live and history sections, or partitioning by some
query-appropriate key like date. It is a good idea to think about these
issues when designing the app and database, rather than trying to hack
them in afterwards. You can get particularly significant benefits for
little effort by consistently using partial indexes to keep index sizes
down for queries that only touch "live" data.
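As a rough sketch, a partial index and an inheritance-based date
partition might look like this (the "orders" table and its columns are
invented for illustration; adjust to your own schema):

    -- Partial index covering only the rows that "live" queries actually touch:
    CREATE INDEX orders_live_idx ON orders (customer_id)
        WHERE status <> 'archived';

    -- Date-based child table via inheritance; with constraint_exclusion
    -- enabled, queries filtering on order_date skip irrelevant children:
    CREATE TABLE orders_2010 (
        CHECK (order_date >= DATE '2010-01-01' AND order_date < DATE '2011-01-01')
    ) INHERITS (orders);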
> I want to know whether, at that time, the performance of the database
> will be reduced or not.
If you add more data, then all other things being equal the database
will get slower. How much? Hard to know; it depends so much on your
query patterns, your maintenance, etc. In any case, you would be crazy
to keep running on 10-year-old hardware with a 10-year-old release of
PostgreSQL. In reality you must plan periodic upgrades, in which case
it's quite likely that the growth of hardware and software performance
will keep up with the data growth.
> Is it possible to maintain all years of data in a single database?
How much data do you expect to have? How fast will it grow?
--
Craig Ringer
Tech-related writing at http://soapyfrogs.blogspot.com/