Hi,

If you have a big table you could also think about Hadoop/HBase or Cassandra, but do not put a large data set in MySQL. I agree with Bill that "despite the fact that lots of people have been able to make it (MySQL) work" (me too, another example), there are issues with it. I have been using MySQL for a number of years to handle large databases with a large number of users, and MySQL was the bottleneck, especially when running table joins on large data sets: CPU and I/O load went up.

If you are switching to PostgreSQL, PostgreSQL 9.1.x is a very good choice for production deployment.

Thanks
Tony

P.S. Today I did some stress tests on my PostgreSQL staging server:
a) insert 2 billion records into the test table,
b) full scan the table.
(A rough sketch of the kind of SQL involved is at the end of this message.)

Here are some test results:

Facts:
Number of records: 2 billion records inserted today
Full table scan: about 16.76 minutes to scan 2 billion rows, really AMAZING!
Database size: 109GB
PostgreSQL: 9.2.1
Physical RAM: 8GB
CPU: i5

########
EXPLAIN ANALYZE SELECT COUNT(*) FROM test;

                                                             QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=33849559.60..33849559.61 rows=1 width=0) (actual time=1006476.308..1006476.309 rows=1 loops=1)
   ->  Seq Scan on test  (cost=0.00..28849559.28 rows=2000000128 width=0) (actual time=47.147..903264.427 rows=2000000000 loops=1)
 Total runtime: 1006507.963 ms

On 11 Dec 2012, at 8:27 PM, Bill Moran wrote:
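
A minimal sketch of the kind of SQL such a stress test could use. The table
definition, the md5 payload, and the generate_series batching below are
assumptions; the message above only reports the results, not the setup.

CREATE TABLE test (id bigint, payload text);

-- Populate 2 billion rows in 20 batches of 100 million each,
-- so each INSERT stays a manageable size.
DO $$
BEGIN
  FOR i IN 0..19 LOOP
    INSERT INTO test
    SELECT g, md5(g::text)
    FROM generate_series(i * 100000000 + 1, (i + 1) * 100000000) AS g;
  END LOOP;
END $$;

-- Full table scan, as in the EXPLAIN ANALYZE output above.
EXPLAIN ANALYZE SELECT COUNT(*) FROM test;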