Re: caching table/query

Tsirkin Evgeny wrote:


Well, here is something that might be interesting:
I am trying to reduce query time, and since this is a web application, I am now doing application-level caching with Perl's MLDBM::Sync::SDBM_File. (Basically that involves querying the whole table, putting it into MLDBM::Sync::SDBM_File, and then querying it back.) What is interesting is that while querying the tables from PostgreSQL takes 0.3 sec, querying the SDBM file
takes 0.1 sec.
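The caching scheme described above can be sketched as follows. This is a hedged illustration, not the poster's actual code: it uses Python's built-in dbm module as a stand-in for Perl's MLDBM::Sync::SDBM_File, and the rows are made-up example data.

```python
import dbm
import json
import os
import tempfile

# Rows as they might come back from a full-table query (hypothetical data).
rows = [
    {"id": 1, "name": "alice"},
    {"id": 2, "name": "bob"},
]

cache_path = os.path.join(tempfile.mkdtemp(), "table_cache")

# Cache the whole table: serialize each row under its primary key.
with dbm.open(cache_path, "c") as cache:
    for row in rows:
        cache[str(row["id"])] = json.dumps(row)

# Later requests read from the local file instead of hitting PostgreSQL.
with dbm.open(cache_path, "r") as cache:
    cached = [json.loads(cache[key]) for key in sorted(cache.keys())]
```

As in the original post, this only pays off when the table is small and changes rarely; any write to the underlying table must invalidate or rebuild the cache, or the application will serve stale data.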

Note too that you have authentication and network latency overhead when querying a remote server. Also, PostgreSQL does a lot besides just storing your data: permissions enforcement, among a wide variety of other things. So it is unreasonable to expect PostgreSQL to be as fast as a simple storage mechanism for simple data storage.

Actually, I am no longer sure that PostgreSQL does NOT cache the table in memory; maybe it does and I don't know about it? Maybe the time is spent transmitting the data
from PostgreSQL to the application? How can I test this?

Normally, PostgreSQL will try to cache information in memory intelligently. The OS will also generally cache disk reads, so even if a page expires from PostgreSQL's cache, it may still be in the OS disk cache. So yes, it caches material in memory automatically.
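The OS disk-cache effect mentioned above can be seen with a small experiment: read the same file twice and time each read; the second read is typically served from the OS page cache rather than the disk. This is a hedged sketch with a made-up file and size, and the timing difference is not guaranteed on every system (the first read may already be cached, since the file was just written).

```python
import os
import tempfile
import time

# Create a moderately sized file (a hypothetical stand-in for table data).
path = os.path.join(tempfile.mkdtemp(), "data.bin")
with open(path, "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))  # 4 MB of random bytes

def timed_read(p):
    """Read the whole file, returning (elapsed seconds, contents)."""
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return time.perf_counter() - start, data

first_time, first_data = timed_read(path)    # may touch the disk
second_time, second_data = timed_read(path)  # usually served from the page cache
```

Both reads return identical data; only where the bytes come from (disk vs. memory) differs, which is exactly why a "cold" query and a "warm" query against the same table can show very different timings.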

If you want to force PostgreSQL to always cache such a table, I guess it would be possible, but any solution you come up with will have three characteristics: 1) it will reduce the memory available for other database operations (possibly causing things to page out to disk, including your cached copy), which may reduce overall database performance; 2) it will be messy and bolted on; and 3) it will likely be brittle: I don't want to know how easily you could recover things in the event of a power outage. PLEASE DON'T DO THIS, but I think one could use a tablespace on a RAM disk with some sort of syncing mechanism (rules/triggers) and intelligent copying of files to and from disk before startup and after shutdown of the database server. However, you are on your own in the event of a power outage...

In general, Tom's advice is very good: build your application first, and expect that PostgreSQL's optimizer will do a reasonable job of optimizing your data access. Then, when you have bottlenecks, look for ways around them. Do as *little as possible* to optimize things, and look for the simplest steps that remove your bottlenecks. This will keep your application and data maintainable down the road. The optimizer is actually quite good; use it as your primary tool for performance enhancement.

Remember Occam's Razor: "One should not needlessly multiply entities" (the most common translation I have seen of what William of Occam actually said). It is a fabulous engineering principle.

Best Wishes,
Chris Travers
Metatron Technology Consulting

