Hi all,
I am working on a project to migrate PostgreSQL from 8.2 to 9.0 and to optimise the new database.
Has anyone done something like this before?
My main task is the optimisation part, so please share your thoughts.
Regards
Hany
On Wed, Sep 7, 2011 at 1:03 PM, Craig Ringer <ringerc@xxxxxxxxxxxxx> wrote:
On 7/09/2011 2:31 AM, Anibal David Acosta wrote:
Hi everyone,
My question is: if I have a table with 500,000 rows and a SELECT of one row returns in 10 milliseconds, then, if the table instead has 6,000,000 rows and everything is otherwise OK (statistics, vacuum, etc.),
can I assume the elapsed time will still be near 10 ms?
It's not that simple. In addition to the performance scaling Craig James mentioned, there are cache effects.
Your 500,000 row index might fit entirely in RAM. This means that no disk access is required to query and search it, making it extremely fast. If the index on the larger table does NOT fit entirely in RAM, or competes for cache space with other things so it isn't always cached in RAM, then it might be vastly slower.
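As a rough first check you can compare the on-disk size of the index against the RAM you have available. A minimal sketch, using hypothetical names ('accounts' and 'accounts_pkey' are placeholders; substitute your own table and index):

    -- Report the on-disk size of one index and of the whole table.
    SELECT pg_size_pretty(pg_relation_size('accounts_pkey'))   AS index_size,
           pg_size_pretty(pg_total_relation_size('accounts'))  AS table_size;

If the index size is a small fraction of RAM it has a good chance of staying cached; if it's comparable to or larger than RAM, expect disk access.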
This is hard to test, because it's not easy to empty the caches. On Linux you can use the VM's drop_caches feature, but that drops *all* caches, including cached disk data for running programs, the PostgreSQL system catalogs, etc. That makes it a rather unrealistic test when the only thing you really want to remove from cache is your index and the table associated with it.
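For reference, the usual drop_caches incantation looks like this (a sketch; it needs root, and note it only empties the OS page cache, not PostgreSQL's own shared_buffers, so you'd also restart the server for a truly cold test):

    # Flush dirty pages first, then drop the page cache, dentries and inodes.
    sync
    echo 3 > /proc/sys/vm/drop_caches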
The best way to test whether data of a certain size will perform well is to create dummy data of that size and test with it. Anything else is guesswork.
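For example, a minimal sketch of that approach; the table, column, and row count here are made up, so adjust them to resemble your real schema:

    -- Build a 6,000,000-row dummy table roughly shaped like the real one.
    CREATE TABLE dummy_accounts AS
        SELECT g AS id,
               md5(g::text) AS payload
        FROM generate_series(1, 6000000) AS g;

    -- Index it the same way the real table is indexed, then refresh stats.
    CREATE INDEX dummy_accounts_id_idx ON dummy_accounts (id);
    ANALYZE dummy_accounts;

    -- Time a single-row lookup and inspect the plan.
    EXPLAIN ANALYZE SELECT * FROM dummy_accounts WHERE id = 123456;

Run the lookup both freshly after a cache drop and again once warmed up, and you'll see both ends of the range you can expect.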
--
Craig Ringer