Re: Large Database Design Help

Hi, Henry,

Orion Henry wrote:

> 1) The database is very large, the largest table has 40 million tuples.

I'm afraid this doesn't qualify as '_very_ large' yet, but it
definitely is large enough to warrant some deep thought. :-)

> 1) The data is easily partitionable by client ID.  In an attempt to keep
> the indexes small and the inserts fast one table was made per client
> ID.  Thus the primary table in the database (the one with 40 million
> tuples) is really 133 tables each ending with a three digit suffix.  
> The largest of these client tables has 8 million of the 40 million
> tuples.   The system started with around a half dozen clients and is now
> a huge pain to manage with so many tables.  I was hoping new hardware
> and new postgres features would allow for this data to be merged safely
> into a single table.

It may well be a good idea to merge them.

If you decide to keep them separated for whatever reason, you might want
to use schemas instead of three-digit suffixes. Together with
appropriately named users or 'SET search_path', this may help you
simplify your software.
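
As a rough sketch, with made-up schema/table/column names:

    CREATE SCHEMA client_001;
    CREATE TABLE client_001.orders (
        order_id integer PRIMARY KEY,
        item     text,
        amount   numeric
    );

    -- per session, or persistently via ALTER USER ... SET search_path:
    SET search_path TO client_001, public;
    SELECT count(*) FROM orders;  -- now resolves to client_001.orders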

In case you want to keep separate tables, but need some reports touching
all tables from time to time, table inheritance may help you: create a
base table, then inherit all the client-specific tables from it. Of
course, this can be combined with the schema approach by keeping the
child tables in their appropriate schemas.
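
Reusing the made-up names from above (the per-client table would then be
created as a child rather than a standalone table):

    CREATE TABLE orders_base (
        order_id integer,
        item     text,
        amount   numeric
    );

    -- child table lives in its client's schema, adds no columns
    CREATE TABLE client_001.orders () INHERITS (orders_base);

    -- a report touching all clients at once:
    SELECT count(*) FROM orders_base;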

> 2) The imports are not done inside of transactions.  I'm assuming the
> system designers excluded this for a reason.  Will I run into problems
> performing tens of thousands of inserts and updates inside a single
> transaction?

On the contrary, it should give you a huge boost. Every COMMIT has to
flush the WAL out to disk, which takes at least one disk rotation. A
simple 7200 RPM disk does 120 rotations per second, so you cannot get
more than 120 such transactions per second out of it.

It may make sense to split such a bulk load into transactions of some
tens of thousands of rows each, but that depends on how easily your
application can resume in the middle of the bulk load if the connection
aborts, and on how much concurrent access the backend sees.
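
In the simplest form that looks like this (batch size made up):

    BEGIN;
    -- first batch: some tens of thousands of INSERTs, or one COPY
    COMMIT;
    -- note the last key committed somewhere persistent, so an aborted
    -- load can resume with the next batch instead of starting over
    BEGIN;
    -- next batch
    COMMIT;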

> 3) The current code that bulk loads data into the database is a loop
> that looks like this:
> 
>                $result = exe("INSERT INTO $table ($name_str) SELECT
> $val_str WHERE NOT EXISTS (SELECT 1 FROM $table WHERE $keys)");
>                if ($result == 0)
>                {
>                        $result = exe("UPDATE $table SET $non_keys WHERE
> $keys");
>                } 
> Is there a faster way to bulk load data when it's not known ahead of
> time if it's a new record or an updated record?

Perhaps the easiest way is to issue the UPDATE first. UPDATE reports the
number of rows it touched; if that count is 0, you insert the row
instead.

This can even be encapsulated in a BEFORE INSERT trigger on the table,
which tries the UPDATE and skips the INSERT when the UPDATE succeeded.
That way, you can even use COPY on the client side.
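
A minimal sketch of such a trigger, again with made-up names (the $$
dollar quoting needs 8.0 or later, and plpgsql has to be installed):

    CREATE FUNCTION orders_merge() RETURNS trigger AS $$
    BEGIN
        UPDATE orders SET item = NEW.item, amount = NEW.amount
            WHERE order_id = NEW.order_id;
        IF FOUND THEN
            RETURN NULL;  -- row existed and was updated: skip the INSERT
        END IF;
        RETURN NEW;       -- no such row yet: let the INSERT proceed
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_merge BEFORE INSERT ON orders
        FOR EACH ROW EXECUTE PROCEDURE orders_merge();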

We're using this approach for one of our databases, where a client-side
crash can result in occasional duplicates being COPYed into the table.

> Dual Opteron 246, 4 disk SCSI RAID5, 4GB of RAM

For most non-read-only database workloads, RAID 5 is a performance
killer. RAID 1+0 might be better, or two mirrors of two disks each: the
first mirror holding the system, swap, and the PostgreSQL WAL files, the
second one holding the data. Don't forget to tune the PostgreSQL
settings appropriately. :-)
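
As a rough starting point for a dedicated 4GB machine on 8.1 (guesses to
be benchmarked, not measured values; 8.1 still takes most of these as
raw 8kB-page or kB counts):

    # postgresql.conf
    shared_buffers = 50000          # 8kB pages, ~400MB
    work_mem = 16384                # kB, per sort/hash
    maintenance_work_mem = 262144   # kB, speeds index builds and VACUUM
    checkpoint_segments = 32        # fewer checkpoints during bulk loads
    wal_buffers = 64                # 8kB pages
    effective_cache_size = 350000   # 8kB pages, ~2.7GB of OS cache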

> # du -sh /var/lib/postgres/data/
> 16G     /var/lib/postgres/data/ 

Your database seems to be small enough to fit on a single disk, so the
two-mirror approach I described above would be best, IMHO.

> ( the current database is PG 7.4 - I intend to upgrade it to 8.1 if and
> when I do this refactoring )

That is a very good idea; 8.1 is miles ahead of 7.4 in many respects.

> ( the current OS is Debian Unstable but I intend to be running RHEL 4.0
> if and when I do this refactoring )

This should not make too much difference.

HTH,
Markus
-- 
Markus Schaber | Logical Tracking&Tracing International AG
Dipl. Inf.     | Software Development GIS

Fight against software patents in EU! www.ffii.org www.nosoftwarepatents.org

