Re: database tuning

On Tue, 11 Dec 2007, kelvan wrote:

> I am going to blow up that Mac and burn Postgres, as I need a more powerful DBMS, one that can handle multithreading.

Someone pointed this out already, but I'll repeat it: PostgreSQL has a multi-process architecture that's fully capable of taking advantage of multiple CPUs. Whether a multi-process model is better or worse than a multi-threaded one is a popular subject to debate, but it's certainly not true that switching to threads always gives a performance benefit, and you shouldn't expect a large one--processes and threads are not that different. As a simple example, benchmarks usually show that the multi-process PostgreSQL scales better under high client loads than the multi-threaded MySQL.
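If you want to see the process model for yourself, each connection is served by its own operating system process, which the scheduler can put on any CPU. A quick check from the shell (assuming a standard install; older releases name the processes postmaster rather than postgres):

    ps aux | grep postgres

You'll see one backend process per open connection plus a few background helpers, so a many-user workload already spreads across all of your processors.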

The only spot where PostgreSQL has a clear performance limitation is that no single query can usefully be split across multiple processors. Since you seem to be serving many users doing small tasks rather than running a single large one, I wouldn't expect the scalability of the core database code to be your problem.

> As it stands we cannot configure Postgres on a Mac to go over 200 connections, for God knows what reason, but we have found ways around that using the Mac.

In a web application environment, there is no good reason to have that many individual database connections. You should treat the fact that you had trouble getting more than 200 going as a warning sign. The right way to deal with this is not to work around it, but to use some sort of connection pooling software instead. You might use something that does PostgreSQL-level pooling, like PgBouncer (https://developer.skype.com/SkypeGarage/DbProjects/PgBouncer), or do higher-level caching with something like memcached (http://www.danga.com/memcached/).
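To make the pooling idea concrete, here's a minimal pgbouncer.ini sketch; the database name, addresses, and pool sizes are placeholders to adapt, not recommendations:

    [databases]
    ; hypothetical database "webdb" on the local machine
    webdb = host=127.0.0.1 port=5432 dbname=webdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; each transaction borrows a real server connection, then returns it
    pool_mode = transaction
    ; up to 1000 web clients may connect to the pooler...
    max_client_conn = 1000
    ; ...while sharing a pool of only 20 backend connections
    default_pool_size = 20

With transaction-level pooling, a thousand web clients can share a couple of dozen real server connections, which keeps you far away from whatever limit the Mac is hitting at 200.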

> So as you can see I'll need as fast a read time for every query as possible. I am using a lot of codes, using smallint and bit in my database, and de-normalising everything to keep the connections down and the amount of data read down, but that can only do so much.

What you should be worried about here is how much of the database you can cram into memory at once. Have you looked into using large amounts of memory for shared_buffers? In your situation you should consider putting multiple GB worth of memory there to hold data. Particularly with a single disk, if you even get to the point where you need to read from disk regularly, you're not going to get anywhere close to your performance goals.
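As a starting point only--the right numbers depend on how much RAM the machine has and what else it runs--the relevant postgresql.conf lines might look like:

    # postgresql.conf -- illustrative values, not a recommendation
    shared_buffers = 2GB            # PostgreSQL's own buffer cache
    effective_cache_size = 6GB      # planner hint: OS cache available on top

Note that on OS X you generally have to raise the kernel's SysV shared memory limits (kern.sysv.shmmax and friends, via sysctl) before a shared_buffers setting that large will even let the server start; the tiny default limits in that same family are a common cause of connection ceilings like the one you're seeing.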

--
* Greg Smith gsmith@xxxxxxxxxxxxx http://www.gregsmith.com Baltimore, MD
