On Sun, May 15, 2011 at 2:08 PM, Josh Berkus <josh@xxxxxxxxxxxx> wrote:
>> All true.  I suspect that in practice the difference between random and
>> sequential memory page costs is small enough to be ignorable, although
>> of course I might be wrong.
>
> This hasn't been my experience, although I have not measured it
> carefully.  In fact, there's good reason to suppose that, if you were
> selecting 50% or more of a table, sequential access would still be
> faster even for an entirely in-memory table.
>
> As a parallel to our development, Redis used to store all data as
> linked lists, making every object lookup effectively a random lookup.
> They found that even with a database which is pinned in memory,
> creating a paged data structure (they call it "ziplists") and
> supporting sequential scans was up to 10X faster for large lists.
>
> So I would assume that there is still a coefficient difference between
> seeks and scans in memory until proven otherwise.

Well, anything's possible.  But I wonder whether the effects you are
describing might result from a reduction in the *number* of pages
accessed rather than a change in the access pattern.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
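
As a rough illustration of the kind of in-memory microbenchmark the two
explanations would need, here is a minimal C sketch (not from the thread;
the element count, fixed shuffle seed, and node layout are arbitrary
assumptions of mine): it sums the same N values once from a contiguous
array and once by chasing a linked list whose nodes sit in shuffled heap
order.  Everything fits in RAM, so any timing gap comes from the access
pattern plus the extra per-node footprint.

/*
 * Sketch: contiguous scan vs. pointer-chasing over the same values,
 * both entirely in memory.  Compile with e.g. gcc -O2 -std=c99.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L            /* arbitrary element count */

typedef struct node
{
    long        value;
    struct node *next;
} node;

static double
seconds(void)
{
    return (double) clock() / CLOCKS_PER_SEC;
}

int
main(void)
{
    long       *array = malloc(N * sizeof(long));
    node      **slots = malloc(N * sizeof(node *));
    long        sum_a = 0, sum_l = 0;
    double      t0, t1, t2;
    long        i;

    /* Contiguous layout: values packed back to back. */
    for (i = 0; i < N; i++)
        array[i] = i;

    /*
     * Linked layout: one heap allocation per element, then the chain is
     * threaded through the nodes in shuffled order so traversal jumps
     * around the heap instead of walking it in allocation order.
     */
    for (i = 0; i < N; i++)
    {
        slots[i] = malloc(sizeof(node));
        slots[i]->value = i;
    }
    srand(12345);               /* fixed seed, arbitrary */
    for (i = N - 1; i > 0; i--)
    {
        long        j = rand() % (i + 1);
        node       *tmp = slots[i];

        slots[i] = slots[j];
        slots[j] = tmp;
    }
    for (i = 0; i < N - 1; i++)
        slots[i]->next = slots[i + 1];
    slots[N - 1]->next = NULL;

    t0 = seconds();
    for (i = 0; i < N; i++)
        sum_a += array[i];
    t1 = seconds();
    for (node *p = slots[0]; p != NULL; p = p->next)
        sum_l += p->value;
    t2 = seconds();

    printf("sequential array scan: %.3fs (sum %ld)\n", t1 - t0, sum_a);
    printf("linked-list traversal: %.3fs (sum %ld)\n", t2 - t1, sum_l);
    return 0;
}

On typical hardware the contiguous scan comes out far ahead, but as the
reply above points out, part of that is simply footprint: each list node
carries a pointer and allocator overhead, so the list touches more memory
in total.  Padding the array elements up to the node size would be one way
to separate the access-pattern coefficient from the page-count effect.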