Re: Millions of tables

Consider the problem, though. Random access to trillions of records, with no guarantee that any given record will be fetched twice in a short time frame, nullifies the effectiveness of a cache unless the cache is enormous. And if I had a cache that big, hundreds of TBs, I wouldn't be looking at on-disk storage options. :)
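A rough back-of-envelope illustration of that point (the record count, row size, and cache size below are assumed for illustration, not figures from this thread): under uniformly random access with no repeats, the steady-state hit rate is just the fraction of the working set that fits in the cache.

    # Back-of-envelope: cache hit rate under uniformly random access.
    # All numbers are illustrative assumptions, not figures from the thread.
    records = 2_000_000_000_000                  # "trillions": assume 2 trillion
    bytes_per_record = 100                       # assumed average row size
    working_set = records * bytes_per_record     # 200 TB of data

    cache_bytes = 256 * 1024**3                  # assume a generous 256 GiB cache

    # With no temporal locality there are no "hot" keys, so the hit rate
    # is simply the fraction of the data that fits in memory.
    hit_rate = min(1.0, cache_bytes / working_set)

    print(f"working set: {working_set / 1024**4:.0f} TiB")  # ~182 TiB
    print(f"hit rate:    {hit_rate:.2%}")                   # ~0.14%

At a ~0.14% hit rate the cache removes essentially no load from the disks; it only starts to matter once it approaches the size of the working set.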

-Greg

On Mon, Sep 26, 2016 at 6:54 AM, Yves Dorfsman <yves@xxxxxxxxx> wrote:
Something that has not been talked about at all in this thread is caching. A bunch
of memcached servers in front of the DB should be able to help with the 30 ms
constraint (it doesn't have to be memcached; any caching technology would do).

--
http://yves.zioup.com
gpg: 4096R/32B0F416
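For concreteness, a minimal cache-aside sketch of the setup Yves describes: check memcached first, fall through to PostgreSQL on a miss. The DSN, key scheme, TTL, and the "records" table (with a bytea "payload" column) are hypothetical placeholders, and psycopg2/pymemcache are just one possible client pairing.

    # Cache-aside: try memcached, fall back to Postgres, populate on miss.
    import psycopg2
    from pymemcache.client.base import Client

    mc = Client(("127.0.0.1", 11211))
    pg = psycopg2.connect("dbname=mydb")         # assumed connection string

    def fetch_record(record_id):
        key = f"rec:{record_id}"
        value = mc.get(key)
        if value is not None:
            return value                         # cache hit: no DB round trip
        with pg.cursor() as cur:
            cur.execute("SELECT payload FROM records WHERE id = %s",
                        (record_id,))
            row = cur.fetchone()
        if row is None:
            return None                          # not cached, not in the DB
        data = bytes(row[0])                     # bytea arrives as a memoryview
        mc.set(key, data, expire=300)            # arbitrary 5-minute TTL
        return data

Note that, per Greg's reply above, this only pays off when the same keys actually recur within the TTL; under uniformly random access the miss path dominates and the extra network hop just adds latency.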




