
Re: avoiding file system caching of a table

On 02/16/2014 10:33 PM, Gabriel Sánchez Martínez wrote:
Is there a way of asking PostgreSQL to read a table's files directly off the disk, telling the OS not to use the file cache? I am running PostgreSQL 9.1 on Ubuntu Server 64-bit. The server in question has the maximum amount of RAM it supports, but the database has grown much larger. Most of the time this doesn't matter, because only specific tables, or parts of indexed tables, are queried, and all of that fits in the file cache. But we now have a requirement to query a table several times larger than total RAM, and since those queries started, the database has slowed down considerably for the other queries.

I am assuming that with every query to the large table, the OS caches the files containing the table's data, and since the table is larger than total RAM, all the old cached pages get evicted. The pages that were useful for the smaller tables are lost, and the newly cached pages of the large table are useless, because the next query starts caching again from the table's first files. Please point out if there is a problem with this assumption. Note that I am referring to OS file caching, not PostgreSQL's own caching.
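Rather than just assuming this, I suppose one could ask the kernel directly which pages of a relation's files are actually resident, via mmap() and mincore(). A minimal sketch, assuming a Linux/glibc system; the file argument would be one of the relation segment files under the data directory, but nothing about the program is PostgreSQL-specific:

#define _DEFAULT_SOURCE   /* for mincore() on glibc */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { fprintf(stderr, "empty file\n"); return 1; }

    long pagesize = sysconf(_SC_PAGESIZE);
    size_t npages = (st.st_size + pagesize - 1) / pagesize;

    /* Map the file without touching its contents; pages fault in only
       on access, so the mapping itself does not change what is cached. */
    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    unsigned char *vec = malloc(npages);
    if (vec == NULL || mincore(map, st.st_size, vec) < 0) {
        perror("mincore");
        return 1;
    }

    /* Bit 0 of each vector byte says whether that page is resident. */
    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        resident += vec[i] & 1;

    printf("%zu of %zu pages resident in the page cache\n", resident, npages);

    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}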

Is there a way around this? I have read that a file can be opened with a request that the OS not cache it. Is there a way of telling PostgreSQL to use this option when reading the files that belong to a specific table?
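From what I have read, the mechanism is posix_fadvise(), with O_DIRECT being the heavier alternative. A minimal sketch of the kind of hinting I mean, at the level of a plain file on Linux; as far as I can tell PostgreSQL does not expose anything like this per table:

#define _XOPEN_SOURCE 600   /* for posix_fadvise */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Declare sequential access so the kernel reads ahead and can
       recycle pages behind the read position sooner. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

    char buf[65536];
    while (read(fd, buf, sizeof buf) > 0)
        ;   /* the data would be processed here */

    /* Advise the kernel to drop this file's pages from the cache,
       leaving everything else cached. Purely advisory. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    close(fd);
    return 0;
}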

What about putting the table in a tablespace on a separate partition mounted with the sync option? Would that help?

I have read forum postings saying that the sync option affects writes and will not prevent reads from being cached. In one forum posting I came across nocache, a utility for Linux, used by typing "nocache <command>" in a shell (a sketch of how it appears to work follows below). But I can't prefix a backend process that way, because it is postgres itself that spawns the process when a connection opens. Does someone know a work-around, or a different solution to the problem?

Shouldn't PostgreSQL be smart about this, and, based on the statistics collected for a table and on the query plan, recognize the harm that reading all of a very large table's pages would do by flushing the cache?
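As I understand it, nocache works by interposing libc's file I/O calls via LD_PRELOAD and issuing POSIX_FADV_DONTNEED. A minimal sketch of that interposition technique, for illustration only; the hypothetical work-around of setting LD_PRELOAD in the postmaster's startup environment would affect every file the server touches, including WAL, which is why I hesitate to try it:

/* Interpose close(): drop the file's pages from the OS cache before
   handing off to the real close(). This mirrors what nocache appears
   to do; illustration only, not production code. */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <unistd.h>

int close(int fd)
{
    static int (*real_close)(int);
    if (real_close == NULL)
        real_close = (int (*)(int)) dlsym(RTLD_NEXT, "close");

    /* Advisory; fails harmlessly on pipes and sockets. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

    return real_close(fd);
}

It would be built with something like "gcc -shared -fPIC -o dropcache.so dropcache.c -ldl" and loaded with LD_PRELOAD=./dropcache.so (the file names are made up).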


All suggestions will be appreciated.

Thanks,
Gabriel




--
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general



