Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows

On 12/17/2010 11:08 AM, Tom Polak wrote:
So, I am back on this topic again.
I have a related question, though this may not be the correct thread (please
let me know if so).  The boss is pressing the issue because of the
cost of MSSQL.

What kind of performance can I expect out of Postgres compared to MSSQL?
Let's assume that Postgres is running on CentOS x64 and MSSQL is running
on Windows 2008 x64, both on identical hardware running RAID 5 (for
data redundancy/security), SAS drives 15k RPM, dual quad-core Xeon CPUs,
24 GB of RAM.  I have searched around and I do not see anyone ever really
comparing the two in terms of performance.  I have learned from this thread
that Postgres needs a lot of configuration to perform its best.

We provide the MLS service to our members.  Our data goes back to 1997 and
nothing is ever deleted.  Here is a general overview of our current MSSQL
setup.  We have over 10GB of data in a couple of tables (no pictures are
stored in SQL server).  Our searches do a lot of joins to combine data to
display a listing, history, comparables, etc.  We probably do 3 or 4 reads
for every write in the database.

Any comparisons in terms of performance would be great.  If not, how can I
quickly and fairly compare the two systems myself without coding everything
to work for both?  Thoughts?  Opinions?

Thanks,
Tom Polak
Rockford Area Association of Realtors
815-395-6776 x203


Most of the time, the database is not the bottleneck. So find the spot where your current database IS the bottleneck. Then write a test that kinda matches that situation.

Let's say it's 20 people doing an MLS lookup at the exact same time, while an update is running in the background to copy in new data.

Then write a simple test (I use perl for my simple tests) for both databases. If PG can hold up under your worst-case situation, then maybe you'll be alright.
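A minimal sketch of that kind of test (Python here rather than perl, and with the actual database call stubbed out — in a real run you'd swap the stub for a psycopg2 query against PG and a pyodbc query against MSSQL, hitting your real MLS lookup):

```python
import time
import threading
import statistics

def run_query():
    # Stand-in for a real database call; replace with e.g. a psycopg2
    # cursor.execute() of your heaviest MLS join for an actual benchmark.
    time.sleep(0.01)

def benchmark(n_clients, queries_per_client):
    """Fire n_clients threads at run_query() and collect per-query latencies."""
    latencies = []
    lock = threading.Lock()

    def client():
        for _ in range(queries_per_client):
            start = time.perf_counter()
            run_query()
            elapsed = time.perf_counter() - start
            with lock:
                latencies.append(elapsed)

    threads = [threading.Thread(target=client) for _ in range(n_clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return latencies

if __name__ == "__main__":
    # 20 simultaneous clients, matching the worst-case scenario above.
    lats = benchmark(n_clients=20, queries_per_client=5)
    print(f"queries: {len(lats)}")
    print(f"median latency: {statistics.median(lats) * 1000:.1f} ms")
```

Run the same harness against both databases with the same queries and compare the latency distributions, not just the averages — the tail is what your users notice.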

Also: are you pegged right now? Do you have slowness problems? Even if PG is a tad slower, will anybody even notice? Maybe it's not worth worrying about. If your database isn't pegging the box, I'd bet you won't even notice a switch.

The others who have answered have sound advice... but I thought I'd say: I'm using RAID 5! Gasp!

It's true. I'm hosting maps with PostGIS, and the slowest part of the process is the aerial imagery, which is HUGE. The database queries sit around 1% of my CPU. I needed the disk space for the imagery. The imagery code uses more CPU than PG does. The database is 98% read, though, so my setup is different from yours.

My maps get 100K hits a day. The CPUs never go above 20%. I'm running on a $350 computer, an AMD dual core, with 4 IDE disks in software RAID 5. On Slackware Linux, of course!
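If you do end up trying PG, the "lot of configuration" people mention mostly comes down to a handful of postgresql.conf settings. Just as an illustrative starting point for a 24 GB box running 9.0 (not tuned for your workload — benchmark before and after):

```ini
shared_buffers = 6GB            # roughly 25% of RAM is a common rule of thumb
effective_cache_size = 16GB     # tells the planner how much the OS will cache
work_mem = 32MB                 # per-sort/per-hash memory; raise for big joins
maintenance_work_mem = 512MB    # speeds up VACUUM and index builds
checkpoint_segments = 32        # fewer, bigger checkpoints on write bursts
wal_buffers = 16MB
```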

-Andy

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance

