
Re: general questions postgresql performance config

On Mon, Jan 25, 2010 at 9:15 AM, Dino Vliet <dino_vliet@xxxxxxxxx> wrote:
>
> Introduction
> Today I've been given the task to proceed with my plan to use postgresql and other open source techniques to demonstrate to the management of my department the usefulness and the "cost savings" potential that lies ahead. You can guess how excited I am right now. However, I should plan and execute at the highest level because I really want to show results. I'm employed in financial services.
>
> Context of the problem
> Given 25 million input records, transform and load 10 million records into a single-table DB2 database already containing 120 million records (the whole history).

Are these rows pretty wide, or are they narrow?  It matters a lot.
120 million records of ~100 bytes each are gonna load a lot quicker
than 120 million records of 2,000 bytes each (roughly 12 GB of raw
data versus 240 GB), which will in turn be faster than rows of 20,000
bytes, and so on.
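Whatever the width, do the bulk load with COPY rather than
row-at-a-time INSERTs; it's dramatically faster.  A minimal sketch,
with a made-up table layout and a tab-separated extract file (both
are assumptions, not details from your setup):

  -- Table layout and file path here are hypothetical stand-ins.
  CREATE TABLE history (
      account_id  integer,
      tx_date     date,
      amount      numeric(12,2)
  );

  -- Load in one pass, then index afterwards; building the index once
  -- at the end beats maintaining it for every row of the load.
  COPY history FROM '/tmp/history.tsv' WITH DELIMITER E'\t';

  CREATE INDEX history_tx_date_idx ON history (tx_date);
  ANALYZE history;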

> The current process is done on the MVS mainframe while the SAS system is used to process the records (ETL-like operations). The records of the last two months (so 20 million records) are also stored in a single SAS dataset, where users can access them through SAS running on their Windows PCs. With SAS on their PCs they can also analyse the historical records in the DB2 table on the mainframe.

This sounds like you're gonna want to look into partitioning your
postgresql database.  Follow the manual's advice to use triggers, not
rules, to implement it; something like the sketch below.
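A minimal sketch of the manual's trigger approach, continuing the
hypothetical history table from the load example above and splitting
it by month (the child names and date ranges are illustrative):

  -- Monthly children carry CHECK constraints so constraint exclusion
  -- can skip irrelevant partitions at query time.
  CREATE TABLE history_2010_01 (
      CHECK (tx_date >= DATE '2010-01-01' AND tx_date < DATE '2010-02-01')
  ) INHERITS (history);

  CREATE TABLE history_2010_02 (
      CHECK (tx_date >= DATE '2010-02-01' AND tx_date < DATE '2010-03-01')
  ) INHERITS (history);

  -- Route rows inserted into the parent to the right child.
  CREATE OR REPLACE FUNCTION history_insert_trigger() RETURNS trigger AS $$
  BEGIN
      IF NEW.tx_date >= DATE '2010-01-01'
         AND NEW.tx_date < DATE '2010-02-01' THEN
          INSERT INTO history_2010_01 VALUES (NEW.*);
      ELSIF NEW.tx_date >= DATE '2010-02-01'
            AND NEW.tx_date < DATE '2010-03-01' THEN
          INSERT INTO history_2010_02 VALUES (NEW.*);
      ELSE
          RAISE EXCEPTION 'tx_date out of range: %', NEW.tx_date;
      END IF;
      RETURN NULL;  -- the row has already gone into a child
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER history_partition
      BEFORE INSERT ON history
      FOR EACH ROW EXECUTE PROCEDURE history_insert_trigger();

Remember to set constraint_exclusion = on in postgresql.conf, or the
planner won't use those CHECK constraints to prune partitions.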

> These users are not tech savvy so this access method is not very productive for them but because the data is highly valued, they use it without complaining too much.
>
> Currently it takes 5 to 6 hours before everything is finished.

The import or the user reports?  I assume the import process.

> Proof of concept
> I want to showcase that a solution process like:
>
> input --> Talend/Pentaho Kettle for ETL --> postgresql --> Pentaho Report Designer, is feasible while staying within the 5-6 hour processing and loading window.

Keep in mind that if a simple desktop PC can run this in 24 hours or
so, you can expect a server-class machine with a decent RAID array to
run it in some fraction of that.

> Input: flat files, position based
> ETL: Pentaho Kettle or Talend to process these files
> DBMS: postgresql 8 (on debian, opensuse, or freebsd)
> Reporting: Pentaho report wizard

Make sure to step up to at LEAST postgresql 8.3.latest.  8.4 doesn't
have tons of performance improvements over 8.3, but it does have tons
of functional improvements that may make it worth your while to go to
it as well.

> Hardware
>
> AMD AM2 singlecore CPU with 4GB RAM
> Two mirrored SATA II disks (raid-0)

So, definitely a proof of concept on a workstation-type machine.  Be
careful: if the workstation runs the import or reports in a fraction
of the time the big machines take, it may become a server on the spot
(it's happened to me before).  So consider making that mirrored pair
up there an actual RAID-1, not RAID-0, in case it does.

> Questions
> 1) Although this is not exactly rocket science, the sheer volume of the data makes it a hard task. Do you think my "solution" is viable/achievable?

Yes.  I've done similar on small workstation machines before and
gotten acceptable performance for reports that can run overnight.

> 2) What kind of OS would you choose for the setup I have proposed? I prefer FreeBSD with UFS2 as a filesystem, but I guess Debian with ext3, openSUSE with ext3, or Ubuntu server with ext3 would all be very good candidates too?

You should use the flavor of Linux whose pitfalls you're most
familiar with.  They've all got warts; there's no need to learn new
ones just because some other flavor is more popular.  OTOH, if you've
got a row of cabinets running RHEL and a few RHEL sysadmins around,
you can appeal to their vanity to get them to help tune the machine
you're running.

> 3) Would you opt for the ETL tools mentioned by me (pentaho and talend) or just rely on the unix/linux apps like gawk, sed, perl? I'm familiar with gawk. The ETL tools require java, so I would have to configure postgresql to not use all the available RAM otherwise risking the java out of memory error message. With that said, it would be best if I first configure my server to do the ETL processing and then afterwards configure it for database usage.

I'd use unix tools myself.  gawk, sed, and p(erl/ython/hp) are all
great for tossing together something that works quickly.  If you need
more flexibility, look at ETL tools later, unless you're already
familiar enough with one that it's worth the time to get it set up
and running.
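If you'd rather keep the slicing in the database instead of gawk, one
option is to COPY the raw position-based lines into a one-column
staging table and carve the fields out with substr().  A sketch that
reuses the hypothetical history table from above; the offsets and
types are invented and have to come from your real record layout:

  -- One-column staging table for the raw fixed-width lines.  Assumes
  -- the input contains no tabs or backslashes, which COPY's text
  -- format treats specially; pick an unused DELIMITER otherwise.
  CREATE TABLE staging_raw (line text);

  COPY staging_raw FROM '/tmp/input.dat';

  -- Carve out fields by position; offsets here are made up.
  INSERT INTO history (account_id, tx_date, amount)
  SELECT substr(line,  1, 10)::integer,
         to_date(substr(line, 11, 8), 'YYYYMMDD'),
         substr(line, 19, 12)::numeric / 100   -- implied two decimals
    FROM staging_raw;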

> 4) What values would you advise for the various postgresql.conf settings which can impact performance, like shared_buffers, temp_buffers, sort_mem, etc.? Or is this more of an "art" where I change and restart the db server, analyze the queries and iterate until I find optimal values?

Go here: http://www.westnet.com/~gsmith/content/postgresql/
Greg Smith's tuning papers there cover exactly this.  (Also note that
sort_mem has been called work_mem since 8.0.)
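As very rough starting points to iterate from on a 4 GB box (these
numbers are illustrative assumptions, not a recommendation tuned to
your workload):

  # postgresql.conf: illustrative starting points for a 4 GB machine;
  # measure and iterate rather than treating these as gospel.
  shared_buffers = 512MB         # 1/4 of RAM is the usual rule of thumb;
                                 # go lower if the Java ETL shares the box
  work_mem = 32MB                # per sort/hash, per backend, so go easy
  maintenance_work_mem = 256MB   # speeds up CREATE INDEX and VACUUM
  checkpoint_segments = 32       # smooths the write bursts of a bulk load
  effective_cache_size = 3GB     # planner hint, roughly the OS cache size
  wal_buffers = 8MB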

> 5) Other considerations?

Get some idea of what hardware you can afford to throw at this if it
goes live.  An 8-core, 16-disk SAS server with 8 or 16 GB of RAM is
now in the $7000 range or below.  For that kind of money you can get
a serious performer.  Don't forget to get the best hardware RAID
controller you can afford, one that gets good performance on your OS.

Use monitoring tools like iotop, iostat, vmstat, top, and so on to
get an idea of what is working your system the hardest, so you'll
know how to tune it.

When you get it working and need it to go faster, post a new message
to pgsql-performance.


