
Re: Why facebook used mysql ?


 



From: pgsql-general-owner@xxxxxxxxxxxxxx [mailto:pgsql-general-owner@xxxxxxxxxxxxxx] On Behalf Of Sandeep Srinivasa
Sent: Tuesday, November 09, 2010 10:10 AM
To: Lincoln Yeoh
Cc: pgsql-general@xxxxxxxxxxxxxx
Subject: Re: Why facebook used mysql ?

 

hi,

   I am the OP.

 

With due respect to everyone (and sincere apologies to Richard Broersma), my intention was not to create a thread about MySQL/Oracle's business practices.

 

It was about the technical discussion on Highscalability - I have been trying to wrap my head around the concept of multi-core scaling for Postgres, especially beyond 8 cores (like Scott's Magny-Cours example). My question is whether Postgres depends on the kernel scheduler for multi-CPU/core utilization.

 

If that is the case, then does using FreeBSD vs Linux give rise to any differences in scaling?

 

Taking the question one step further, do different Linux kernels (and schedulers) impact Postgres scalability? The Phoronix Test Suite already tests Linux kernel releases for regressions in PostgreSQL performance (e.g. http://www.phoronix.com/scan.php?page=article&item=linux_perf_regressions&num=1), but doesn't particularly focus on multiple cores.

 

Is it something that should be benchmarked?
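
(A rough sketch of what such a benchmark could look like - assuming pgbench against a database initialized with "pgbench -i"; the client counts, run length, and database name below are placeholders, not anything from the thread:)

=============================================================
# Sweep pgbench client/thread counts to see where read-only throughput
# stops scaling with cores. Assumes pgbench is on the PATH and a "pgbench"
# database has already been initialized with "pgbench -i".
import re
import subprocess

DBNAME = "pgbench"                         # placeholder database name
DURATION = 60                              # seconds per run, arbitrary
CLIENT_COUNTS = [1, 2, 4, 8, 16, 32, 48]   # sweep well past 8 cores

for clients in CLIENT_COUNTS:
    result = subprocess.run(
        ["pgbench",
         "-S",                       # select-only, CPU-bound workload
         "-c", str(clients),         # concurrent client connections
         "-j", str(min(clients, 8)), # pgbench worker threads (9.0+)
         "-T", str(DURATION),
         DBNAME],
        capture_output=True, text=True, check=True)
    match = re.search(r"tps = ([\d.]+)", result.stdout)
    tps = float(match.group(1)) if match else float("nan")
    print(f"{clients:3d} clients: {tps:10.1f} tps")
=============================================================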

 

thanks

-Sandeep

 

P.S. On the topic of scalability, here is another article - http://yoshinorimatsunobu.blogspot.com/2010/10/using-mysql-as-nosql-story-for.html - where people have asked whether a similar thing can be done using a Postgres UDF or a marshalling ODBA (http://scm.ywesee.com/?p=odba/.git;a=summary).
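
(For comparison, the PostgreSQL counterpart usually brought up for that HandlerSocket-style access is a plain primary-key lookup through a prepared statement; the sketch below is only an illustration - the users table, its columns, and the connection settings are made up:)

=============================================================
# Key/value-style lookups against PostgreSQL via a prepared statement,
# so repeated fetches skip re-parsing and re-planning the SQL.
# The "users" table and connection settings are illustrative only.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

# Prepare once per session...
cur.execute("PREPARE get_user (bigint) AS "
            "SELECT name, email FROM users WHERE id = $1")

# ...then execute many times with only the key changing.
for user_id in (1, 2, 3):
    cur.execute("EXECUTE get_user (%s)", (user_id,))
    print(user_id, cur.fetchone())

conn.close()
=============================================================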

>> 

Regarding scaling, there is an interesting NoSQL engine called Kyoto Cabinet, whose documentation includes tests of high-volume transactions under different loads and conditions.

The Kyoto Cabinet data engine is written by Mr. Hirabayashi of FAL Labs (fallabs.com), the author of Tokyo Cabinet. In this document:

http://fallabs.com/kyotocabinet/spex.html

We find something interesting.  In the section called Transaction, we have this:

=============================================================
default
  risk on process crash: Some records may be missing.
  risk on system crash:  Some records may be missing.
  performance penalty:   none
  remark: Auto recovery after crash will take time in proportion of the database size.

transaction
  implicit usage: open(..., BasicDB::OAUTOTRAN);
  explicit usage: begin_transaction(false); ...; end_transaction(true);
  risk on process crash: none
  risk on system crash:  Some records may be missing.
  performance penalty:   Throughput will be down to about 30% or less.

transaction + synchronize
  implicit usage: open(..., BasicDB::OAUTOTRAN | BasicDB::OAUTOSYNC);
  explicit usage: begin_transaction(true); ...; end_transaction(true);
  risk on process crash: none
  risk on system crash:  none
  performance penalty:   Throughput will be down to about 1% or less.
=============================================================

 

Notice that there is roughly a 3:1 throughput penalty for flushing writes from the program to the operating system (the "transaction" mode), and roughly a 100:1 penalty for hard-flushing from the operating system to the disk (the "transaction + synchronize" mode).

So one simple way to scale to huge volumes is to allow data loss.  That is a major way in which NoSQL data systems can achieve absurd transaction rates.
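
To make the quoted modes concrete, a minimal sketch of the three trade-offs might look like this (assuming the kyotocabinet Python binding, whose constants are taken to mirror the C++ ones quoted above; the file name is arbitrary):

=============================================================
# The three durability/throughput trade-offs quoted above, sketched with
# the kyotocabinet Python binding (constants assumed to mirror the C++ API).
from kyotocabinet import DB

# 1) default: fastest, but records may be lost on a process or system crash.
db = DB()
db.open("casket.kch", DB.OWRITER | DB.OCREATE)
db.set("key", "value")
db.close()

# 2) transaction: survives a process crash (~3:1 throughput penalty);
#    every update becomes an implicit soft transaction.
db = DB()
db.open("casket.kch", DB.OWRITER | DB.OCREATE | DB.OAUTOTRAN)
db.set("key", "value")
db.close()

# 3) transaction + synchronize: survives a system crash (~100:1 penalty),
#    because each commit is hard-flushed through the OS to the disk.
db = DB()
db.open("casket.kch", DB.OWRITER | DB.OCREATE | DB.OAUTOTRAN | DB.OAUTOSYNC)
db.set("key", "value")
db.close()
=============================================================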

 

There are also distributed hash tables (these are also called NoSQL engines, but they are an entirely different technology).  With a distributed hash table, you can get enormous scaling and huge transaction volumes.

http://en.wikipedia.org/wiki/Distributed_hash_table

Distributed hash tables are another kind of key/value store, but they partition the data across many nodes rather than keeping it in a single local file the way a traditional key/value store like DBM does.
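
As a toy illustration of the partitioning idea (only the key-to-node mapping; real DHTs add routing, replication, and membership handling, and the node names below are made up):

=============================================================
# Toy consistent-hash ring: shows how a DHT-style store decides which node
# owns a given key. Real DHTs layer routing, replication, and membership
# handling on top of this.
import hashlib
from bisect import bisect

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes, vnodes=100):
        # Give each physical node many virtual points for an even spread.
        self._ring = sorted(
            (_hash("%s#%d" % (node, i)), node)
            for node in nodes for i in range(vnodes))
        self._points = [h for h, _ in self._ring]

    def node_for(self, key):
        # Walk clockwise from the key's hash to the next virtual point.
        idx = bisect(self._points, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["node-a", "node-b", "node-c"])   # illustrative node names
for k in ("user:1", "user:2", "photo:99"):
    print(k, "->", ring.node_for(k))
=============================================================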

 

When we design a data system, we should examine the project requirements and choose appropriate tools to solve the problems facing the project. 

 

For something like Facebook, a key/value store is a very good solution.  You do not have a big collection of related tables, and there are no billion-dollar bank transactions taking place where someone will get a bit bent out of shape if one goes missing.

 

For an analytic project where we plan to do cube operations, a column store like MonetDB is a good idea.

 

For a transactional system like “Point Of Sale” or Accounting, a traditional RDBMS like PostgreSQL is the best solution.

 

I think that an interesting path for growth would be to expand PostgreSQL to allow different table types.  For instance, all leaf tables (those tables without any children) could easily be Key/Value stores.  For analytics, create column store tables.  For ultra-high access, have a distributed hash table.
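
With what exists today, the "leaf table as a key/value store" idea can at least be approximated with a plain two-column table; a hypothetical sketch (the table name, connection settings, and the ON CONFLICT upsert, which needs PostgreSQL 9.5 or later, are all illustrative):

=============================================================
# Hypothetical sketch: treating a plain two-column PostgreSQL table as a
# key/value "leaf table". Table name and connection details are made up,
# and ON CONFLICT requires PostgreSQL 9.5+.
import psycopg2

conn = psycopg2.connect("dbname=test")
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS kv_leaf ("
            "  k text PRIMARY KEY,"
            "  v text NOT NULL)")

def put(key, value):
    cur.execute("INSERT INTO kv_leaf (k, v) VALUES (%s, %s) "
                "ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v",
                (key, value))

def get(key):
    cur.execute("SELECT v FROM kv_leaf WHERE k = %s", (key,))
    row = cur.fetchone()
    return row[0] if row else None

put("greeting", "hello")
print(get("greeting"))
conn.commit()
conn.close()
=============================================================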

 

But right now, most RDBMSs do not have these extra, special table types.  So if you want tools that do those things, then use those tools.

 

IMO-YMMV

<< 

