Re: Performance Problem with PostgreSQL 9.0.3, 8GB RAM, Quad-core Processor Server -- Need help!


 



How about your hard disks?

You could get a real boost from a RAID 10 array of 15k SAS disks. If you don't even have RAID, it would help a lot!

Lucas.

2011/11/8 Sam Gendler <sgendler@xxxxxxxxxxxxxxxx>


Sent from my iPhone

On Nov 7, 2011, at 7:21 PM, Mohamed Hashim <nmdhashim@xxxxxxxxx> wrote:

Hi all,

Thanks for all your responses.

Sorry for late response

Earlier we used Postgres 8.3.10 on a desktop computer acting as the server (2 cores and 4GB RAM), and the application was slow there as well; I didn't change any Postgres config settings.

We thought the application was slow because of that low-end configuration, so we opted for the higher-configuration server (with RAID 1) that I mentioned earlier.

I expected the application to run faster, but unfortunately there is no improvement, so I tried changing the Postgres config settings and tuning my queries wherever possible, but I still was not able to improve the performance.


So would it be helpful if we tried a GiST or GIN index on the integer array[] column (source_detail), together with enable_seqscan=off and default_statistics_target=1000?
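
For reference, a minimal sketch of the GIN option being asked about, assuming a table named "sales" (only the column source_detail is named in the thread):

    -- GIN index on the integer[] column; the built-in array_ops opclass supports @>, <@, && and =
    CREATE INDEX idx_sales_source_detail ON sales USING GIN (source_detail);

    -- The planner can only use it when the filter is an array operator, e.g.:
    SELECT * FROM sales WHERE source_detail @> ARRAY[3];

Whether such an index actually gets used still depends on the query shape and the plans, which is what the reply below gets at.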

Oh dear! Where to even begin? There is no way to suggest possible solutions without knowing a lot more about how things are currently configured and what, exactly, about your application is slow. To address your particular suggestions: increasing the default statistics target would only help if EXPLAIN ANALYZE for a slow query indicates that the query planner is using inaccurate row-count estimates for one or more steps in a query plan. Depending upon the frequency of this problem, it may be better to increase the statistics target just for individual columns rather than across the entire db cluster. Setting enable_seqscan to off is almost never a good solution to a problem, especially db-wide. If the planner is selecting a sequential scan when an alternative strategy would perform much better, it is doing so either because your configuration is not telling the planner accurate costs for sequential versus random access, or because the statistics are inaccurate and it thinks it will traverse more rows than it actually will.
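
A sketch of the per-column alternative described above, plus the cost-model knob; the table name "sales" is an assumption, not something given in the thread:

    -- First confirm the misestimate: compare estimated vs. actual rows in the plan
    EXPLAIN ANALYZE SELECT count(*) FROM sales WHERE source_detail @> ARRAY[3];

    -- Raise statistics only for the problem column instead of the whole cluster
    ALTER TABLE sales ALTER COLUMN source_detail SET STATISTICS 1000;
    ANALYZE sales;

    -- If seq scans win only because random access is costed too high for the hardware,
    -- adjust the cost model rather than disabling seq scans outright
    SET random_page_cost = 2.0;  -- test per session first; persist in postgresql.conf if it helps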

In short, you need to read a lot more about performance tuning Postgres rather than taking stab-in-the-dark guesses at solutions. I believe it was pointed out that at least one query that is problematic for you filters on the values of individual elements of an array column, which means you actually need to break those values out into separate, indexed columns, or create an index on column[x] so that the planner can use it. But if the problem is general slowness across your whole app, it is possible that the way your app uses the db access API is inefficient, or you may have a misconfiguration that makes all db access slow. Depending on your hardware and platform, the default configuration will give db performance that is far from optimal. The default config is pretty much a minimal config.
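
A sketch of the column[x] index mentioned above (PostgreSQL arrays are 1-based; the table name is again assumed):

    -- Expression index on a single array element
    CREATE INDEX idx_sales_source_detail_1 ON sales ((source_detail[1]));

    -- Usable only when the query filters on exactly the same expression
    SELECT * FROM sales WHERE source_detail[1] = 3;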

I'd suggest you spend at least a day or two reading up on Postgres performance tuning and investigating your particular problems. You may make quite a bit of improvement without our help, and you'll be much more knowledgeable about your db installation when you are done. At the very least, please look at the mailing list page on the Postgres website and read the links about how to ask performance questions, so that you at least provide the list with enough information about your problems that others can offer useful feedback. I'd provide a link, but I'm on a phone.
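
For orientation while reading up: the usual starting points for a dedicated 8GB server look something like the following. These are generic rules of thumb for 9.0, not values taken from this thread, and they only illustrate how conservative the stock config is:

    # postgresql.conf -- common starting points for a dedicated 8GB machine
    shared_buffers = 2GB            # roughly 25% of RAM
    effective_cache_size = 6GB      # estimate of the OS cache, roughly 75% of RAM
    work_mem = 16MB                 # per sort/hash per backend; raise with care
    maintenance_work_mem = 512MB    # helps VACUUM and index builds
    checkpoint_segments = 16        # pre-9.5 setting; spreads out checkpoint I/O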

--sam



Regards
Hashim



On Fri, Nov 4, 2011 at 1:37 AM, Mario Weilguni <roadrunner6@xxxxxx> wrote:
On 03.11.2011 17:08, Tomas Vondra wrote:
On 3 November 2011, 16:02, Mario Weilguni wrote:
<snip>

No doubt about that, querying tables using conditions on array columns is
not the best direction in most cases, especially when those tables are
huge.

Still, the interesting part here is that the OP claims this worked just
fine in the older version and after an upgrade the performance suddenly
dropped. This could be caused by many things, and we're just guessing
because we don't have any plans from the old version.

Tomas



Not really; Mohamed always said he has 9.0.3. It was Marcus Engene who wrote about problems after the migration from 8.x to 9.x. Or did I miss something here?

Regards,
Mario






--
Regards
Mohamed Hashim.N
Mobile: 09894587678

