Re: Architecting a database

On Fri, Jun 25, 2010 at 3:36 PM,  <tony@xxxxxxxxxxxxxxxxxxx> wrote:
> I am in the process of moving a system that has been built around FoxPro
> tables for the last 18 years into a PostgreSQL based system.
>
> Over time I came up with decent strategies for making the FoxPro tables
> work well with the workload placed on them, but we are getting to the
> point where the locking mechanisms cause problems when some of the more
> heavily used tables are being written to.
>
> With the FoxPro tables I had one directory that contained the tables that
> had global data that was common to all clients. Things like documents that
> had been received and logged, checks that had been cut, etc. Then each
> client had his own directory which housed tables that had information
> relating to that specific client. Setting things up like this kept me from
> having any tables that were too terribly large so record addition and
> index creation were not very time consuming.
>
> I am wondering how I should architect this in PostgreSQL. Should I follow
> a similar strategy and have a separate database for each client and one
> database that contains the global data? With the dBase and ISAM tables I
> have a good idea of how to handle them since I have been working with them
> since dBASE originally came out. With the PostgreSQL type tables I am not
> so certain how the data is arranged within the one file. Does having the
> data all in one database allow PostgreSQL to better utilize indexes and
> caches or does having a number of smaller databases provide performance
> increases? In case it is important, there are 2000 clients involved, so
> that would be 2000 databases if I followed my current FoxPro related
> structure. Of course, I suppose it is always possible to combine a number
> of groups into a database if the number of databases is an issue.
>
> Tables within the client specific databases are generally name and address
> information as well as tables for 10 different types of accounts which
> require different structures; those tables hold anywhere from 10,000
> transactions apiece for some smaller groups to 1 million for larger
> groups. I believe we have a read-to-write ratio of about 1 to 15.
>
> Thanks for any input.

Congratulations.  I developed on foxpro for years, and I can tell you
you've come to the right place: your porting process should be
relatively pain free.  Foxpro had a couple of nice features that
aren't found in many other places: expression indexes (which we
have) and first-class queries (which we have, if you count pl/pgsql).
Foxpro was also an enormous headache on so many levels, which is why I
assume you are here.  I've long harbored a suspicion that Microsoft
enjoyed adding to those headaches rather than subtracting from them.

Others have answered the data organization question.  You definitely
want to use schemas to logically separate each client's data inside a
single database... that is basically what schemas are for.
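
For example, here's a rough sketch of that layout -- the schema and
table names are made up for illustration, not taken from your system:

    -- one schema per client, plus a shared schema for the global data
    CREATE SCHEMA shared;
    CREATE SCHEMA client_0001;

    CREATE TABLE shared.documents (
        doc_id    bigserial PRIMARY KEY,
        received  timestamptz NOT NULL DEFAULT now(),
        body      text
    );

    CREATE TABLE client_0001.accounts (
        account_id  bigserial PRIMARY KEY,
        name        text NOT NULL,
        balance     numeric(12,2) NOT NULL DEFAULT 0
    );

    -- reference either schema explicitly, or set a search_path per client
    SET search_path = client_0001, shared;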

Data in SQL tables is considered unordered (we have no concept of
recno) unless an explicit ordering is given.  Direct access to the
tables (BROWSE) has no analog in SQL: a query is sent to the database,
results are gathered, buffered, and sent back.  This is the #1 thing
you will have to get used to coming from dbase-style coding.
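
As a rough illustration (the table and column names are invented):

    -- there is no "go to record n"; you ask for the rows you want,
    -- in the order you want them
    SELECT account_id, name
      FROM client_0001.accounts
     ORDER BY name
     LIMIT 20 OFFSET 100;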

The locking model in postgres is completely different (and better).
Records are implicitly locked by writing to them, and the locks are
released at transaction end (optimistic locking plus).  As a bonus,
data doesn't get corrupted when you break the rules =).
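
A quick sketch of what that looks like in practice (again, a
hypothetical table):

    BEGIN;
    -- this UPDATE locks the row implicitly; other writers to the same
    -- row wait until COMMIT, but readers are never blocked
    UPDATE client_0001.accounts
       SET balance = balance - 100.00
     WHERE account_id = 42;

    -- if you need the lock before deciding what to write:
    SELECT balance
      FROM client_0001.accounts
     WHERE account_id = 42
       FOR UPDATE;

    COMMIT;  -- all locks released here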

For backend data processing tasks I advise you to use pl/pgsql.
Coming from foxpro you should have no problems.  You are going to have
to replace your GUI and report generator.  The first question is
whether or not to go web... and following that, which technologies to
use.  You may have already figured all this out, but perhaps you
haven't.  Foxpro does have odbc connectivity, so you may have
entertained ideas of simply moving your application without porting
the code.  This may or may not work (just a heads up) -- the
performance of foxpro odbc translation is not so great and some of
your code won't translate well.  If you didn't use foxpro for the
front end, it's going to depend on what you're using.
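
If it helps, a minimal pl/pgsql sketch (the function and table names
are invented for illustration) -- it plays roughly the same role a
foxpro procedure looping over a table would:

    CREATE OR REPLACE FUNCTION post_interest(p_rate numeric)
    RETURNS integer AS $$
    DECLARE
        n integer := 0;
    BEGIN
        -- set-based update instead of a record-by-record loop
        UPDATE client_0001.accounts
           SET balance = balance * (1 + p_rate);
        GET DIAGNOSTICS n = ROW_COUNT;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;

    -- call it from any client:
    SELECT post_interest(0.002);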

Once you get used to postgres and how it reads and writes data, don't
worry so much about performance.  As long as you avoid the handful of
patterns postgres doesn't handle well, the performance of your new
database should absolutely nuke what you're used to, especially in the
multi-user case.  You will have no problems on the backend -- it's the
front end where your main concerns should be.  Good luck.

merlin


