
Re: Does anyone use in ram postgres database?

Thanks very, very much for this reply.  It is extremely useful.

So far, I have not run into anything remotely resembling a performance barrier in Postgres. I'm still looking :-)



On Mar 26, 2010, at 4:43 PM, Ozz Nixon wrote:

On 3/26/10 11:12 AM, John Gage wrote:
As a kind of [very?] dumb question, is this where SQLite has been used? I am just curious.
All questions are good ones, as that is how we all learn. ;-)

SQLite is useful for small-footprint environments, along with simpler solutions like XBase (DBase) files. They tend to be quick and easy to implement and use, but not robust enough for enterprise multi-user systems. (Not trying to start a flame war, just the facts.)

Enterprise engines are great for day-to-day transactional data flow: a few thousand reads with fewer writes. When your writes start to exceed your reads, you need to decide -- are those writes for audit and archive, or are they compounding the results of the reads?
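If you are not sure which side of that line you are on, Postgres' statistics collector can give you real numbers. A minimal sketch, assuming the stats collector is enabled (it is by default):

    -- Rough read vs. write volume per table since the stats were last reset.
    SELECT relname,
           seq_tup_read + COALESCE(idx_tup_fetch, 0) AS rows_read,
           n_tup_ins + n_tup_upd + n_tup_del         AS rows_written
      FROM pg_stat_user_tables
     ORDER BY rows_written DESC;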

If they are archive/history rows, audited as needed, this is where partitioned tables come to mind -- or even simple text files (encrypted if needed).
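As an illustration of the partitioning idea, here is a minimal sketch using table inheritance and constraint exclusion; the audit_log table and its date range are hypothetical, and routing inserts to the right child (via a trigger or application logic) is not shown:

    -- Hypothetical parent table for audit/history rows.
    CREATE TABLE audit_log (
        logged_at  timestamptz NOT NULL,
        detail     text
    );

    -- One child per month; the CHECK constraint lets the planner
    -- skip partitions that cannot match a query's WHERE clause.
    CREATE TABLE audit_log_2010_03 (
        CHECK (logged_at >= '2010-03-01' AND logged_at < '2010-04-01')
    ) INHERITS (audit_log);

    SET constraint_exclusion = on;

    -- Touches only the March partition:
    SELECT count(*) FROM audit_log
     WHERE logged_at >= '2010-03-15' AND logged_at < '2010-03-16';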

If they are compounding your reads, then the fork in the road appears... there are questions you have to ask yourself about the 'now' and the 'three years from now' of your data. For example, the original statement was that running the SQL engine in RAM only handled three times more data requests, and that is not enough (I assume). There are probably database designs and query techniques that could improve your performance -- but do they answer the 'now' need or the 'three years from now' need? We spend hours on each of our database designs and queries -- and sometimes the queries force us to redesign the schema so we can milk a few hundred more queries out of our unit of measurement (seconds, minutes, or hours).
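On the query-technique side, EXPLAIN ANALYZE is the usual starting point for seeing where the time actually goes; a small sketch (the orders table and customer_id column are made up):

    -- Shows the actual plan and timings, so you can see whether the
    -- planner used an index or fell back to a sequential scan.
    EXPLAIN ANALYZE
    SELECT * FROM orders WHERE customer_id = 42;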

We had an existing solution in place which was capable of processing 10,000 queries a minute. At the time it was designed, that was more than our customer thought they would ever need. Eight months later, they were starting to see their processes wait on our solution. I spent the next two days redesigning it as a simple socket listener with the data in RAM, using linked lists and hashes and returning results as XML. I introduced five additional queries to improve the quality of the results, and delivered a system that now handles over 100,000 queries a second.

   So with that said, the questions become:

What does your schema look like now?

What are you writing into the database?

How often are you writing?

What are you searching for?

How often are you searching? (One way to measure this is sketched after the list.)

How large is the result set that is flowing across the ether?
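For the two 'how often' questions, one option is to let the server count for you. This assumes you can install the pg_stat_statements contrib module and load it via shared_preload_libraries:

    -- Top statements by call count; 'calls' answers "how often",
    -- and 'rows' hints at the result-set size per call.
    SELECT calls, rows, query
      FROM pg_stat_statements
     ORDER BY calls DESC
     LIMIT 10;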

There are times when, once you answer these questions, it is easier to see that the problem is not the technology you are trying to leverage but how you are using it. Then there are times when you really are using the wrong technology. Answering the questions above will allow me and the PostgreSQL gurus here to help you out.

* I use a wide range of SQL engines, depending upon budget, needs, etc., along with developing custom solutions when an off-the-shelf database is not tailored enough for the need. Hope that helps, and shows that your needs for now and for 36 months from now play a big role in designs and redesigns.

O.


--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
