> The real answer here is that anything could be true for your workload,
> and asking people on a mailing list to guess is a recipe for
> disappointment.  You probably need to do some real benchmarking, and
> PostgreSQL will be slower at first, and you'll tune it, and it's LIKELY
> that you'll be able to achieve parity, or close enough that it's worth
> it to save the $$$.  But you won't really know until you try it, I think.

That is what I am really after. I know that it will be a lot of work, but
at $15,000 for MS SQL Server that is a lot of man-hours. Before I invest
the time to do real benchmarking, I need to be sure it would be worth the
effort. I realize going into this that we will need to change almost
everything except maybe the simplest SELECT statements.

> How big is your data set and how big is your working set?
> Do you have a RAID card?  Is it properly configured?

The data set can get large. Just think of a real estate listing: when we
display a Full View, EVERYTHING must be pulled from the database,
sometimes 75-100 fields if not more. We can have up to 300 members logged
in to the system at one time doing various tasks (we usually peak at about
30-50 requests per second). The servers would be running on a hardware
RAID solution, so the RAID work would all be offloaded from the CPU. I
will have to check out RAID 10 for the next server.

Thanks for all your help and opinions.

Thanks,
Tom Polak
Rockford Area Association of Realtors

-----Original Message-----
From: Robert Haas [mailto:robertmhaas@xxxxxxxxx]
Sent: Friday, December 17, 2010 11:38 AM
To: Tom Polak
Cc: pgsql-performance@xxxxxxxxxxxxxx
Subject: Re: Compared MS SQL 2000 to Postgresql 9.0 on Windows

On Fri, Dec 17, 2010 at 12:08 PM, Tom Polak
<tom@xxxxxxxxxxxxxxxxxxxxxxxx> wrote:
> What kind of performance can I expect out of Postgres compared to MSSQL?
> Let's assume that Postgres is running on CentOS x64 and MSSQL is running
> on Windows 2008 x64, both on identical hardware running RAID 5 (for
> data redundancy/security), SAS drives 15k RPM, dual Xeon quad-core CPUs,
> 24 GB of RAM.  I have searched around and I do not see anyone ever really
> compare the two in terms of performance.  I have learned from this thread
> that Postgres needs a lot of configuration to perform the best.

I think this is a pretty difficult question to answer.  There are
certainly people running databases on hardware like that - even databases
much bigger than yours - on PostgreSQL, and getting acceptable
performance.  But it does take some work.
In all fairness, I think that if you started on PostgreSQL and moved to
MS SQL (or any other product), you'd probably need to make some
adjustments going the other direction to get good performance, too.
You're not going to compare two major database systems and find that one
of them is just twice as fast across the board.  They have different
advantages and disadvantages.  When you're using one product, you
naturally do things in a way that works well for that product, and moving
to a different product means starting over: putting this in a stored
procedure was faster on MS SQL, but it's slower on PostgreSQL; using a
view here was terrible on MS SQL, but much faster under PostgreSQL.

The real answer here is that anything could be true for your workload,
and asking people on a mailing list to guess is a recipe for
disappointment.  You probably need to do some real benchmarking, and
PostgreSQL will be slower at first, and you'll tune it, and it's LIKELY
that you'll be able to achieve parity, or close enough that it's worth it
to save the $$$.  But you won't really know until you try it, I think.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
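
As a rough illustration of the tuning pass discussed above, here is a
minimal postgresql.conf sketch for a dedicated PostgreSQL 9.0 server with
24 GB of RAM like the one described in this thread. The values are
illustrative starting points only, not measured recommendations, and
would need to be validated against the real workload:

    # postgresql.conf -- illustrative starting points for a dedicated
    # 24 GB RAM server; verify every value with your own benchmarks.
    shared_buffers = 4GB              # often ~1/4 of RAM on a dedicated box
    effective_cache_size = 16GB       # approx. RAM left for the OS file cache
    work_mem = 32MB                   # per sort/hash, per backend; raise with care
    maintenance_work_mem = 512MB      # speeds up VACUUM and index builds
    checkpoint_segments = 32          # 9.0-era setting; spreads checkpoint I/O
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB

A first benchmark could then replay the "Full View" query against a copy
of the data using pgbench's custom-script mode at roughly the stated
concurrency, for example:

    pgbench -c 50 -j 4 -T 300 -f full_view.sql listings

Here full_view.sql (a file containing the wide SELECT) and the listings
database are placeholder names, and 50 clients is only a stand-in for the
30-50 requests-per-second peak mentioned above; the real test would use
the application's own queries and data.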