Re: Postgres as In-Memory Database?

Hi Jeff and Martin

On 18 November 2013 at 17:44, Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
> I rather doubt that.  All the bottlenecks I know about for well cached read-only workloads are around 
> locking for in-memory concurrency protection, and have little or nothing to do with secondary storage.  

Interesting point. But I think this is only partially true - as Stonebraker asserts [1]. While I don't see how locking (and latching) could be sped up, AFAIK there is still considerable room for improvement in buffer pooling (see also [2]). GIS workloads in particular involve heavy calculations and random access operations, so the buffer pool does play a role.
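
As a rough illustration of what the buffer pool sees, one can check how much of such a table is actually cached using the contrib module pg_buffercache. A minimal sketch (the table name planet_osm_nodes is the one osm2pgsql creates; the query deliberately ignores which database the buffers belong to):

    CREATE EXTENSION pg_buffercache;  -- once per database

    -- size of the node table's blocks currently in shared_buffers,
    -- in MB (count of 8 kB pages * 8 / 1024)
    SELECT count(*) * 8 / 1024 AS cached_mb
    FROM pg_buffercache
    WHERE relfilenode = pg_relation_filenode('planet_osm_nodes');

If cached_mb stays far below the table size even under repeated access, the workload really is fighting the buffer pool rather than locking.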

To Martin: Stonebraker explicitly supports my hypothesis that in-memory databases will become prevalent in the future, and that the "elephants" will be challenged if they don't adapt to new architectures like in-memory and column stores.

The specific use case here is querying OpenStreetMap data of the whole world with PostGIS (see [3]); an illustrative query follows below.
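
For concreteness, a typical query against an osm2pgsql import looks roughly like this (purely illustrative, not taken from [3]; the table and column names and the 900913 projection follow osm2pgsql's default schema):

    -- restaurants within a small lat/lon window around Zurich
    SELECT osm_id, name
    FROM planet_osm_point
    WHERE amenity = 'restaurant'
      AND way && ST_Transform(
            ST_MakeEnvelope(8.5, 47.3, 8.6, 47.4, 4326), 900913);

At world scale such box queries touch pages essentially at random, which is exactly where caching behaviour matters.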

On 2013/11/18 Jeff Janes <jeff.janes@xxxxxxxxx> wrote:
> On Sun, Nov 17, 2013 at 4:02 PM, Stefan Keller <sfkeller@xxxxxxxxx> wrote:
>> BTW: Having said (to Martijn) that using Postgres is probably more efficient than programming an in-memory
>> database in a decent language: OpenStreetMap has a very, very large node table which is heavily
>> used by other tables (like ways) - and becomes rather slow in Postgres.
>
> Do you know why it is slow?  I'd give high odds that it would be a specific implementation detail in
> the code that is suboptimal, or maybe a design decision of PostGIS, rather than some high level
> architectural decision of PostgreSQL.

Blaming the application is something one can always do - but it shouldn't keep us from enhancing Postgres.
The PostGIS extension isn't even involved in this use case: it's about handling a huge table with a bigint id and two numbers representing lat/lon. As I said, an obvious optimization is to access the tuples as fixed-length records (not a universal solution, but one that exploits the fact that the data is in memory) - see the sketch below.
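
For reference, the node table that osm2pgsql creates is essentially fixed length already - roughly this schema (a sketch; details vary between osm2pgsql versions, and the integer scaling of lat/lon is an assumption):

    -- every row carries the same width of user data: 8 + 4 + 4 bytes
    CREATE TABLE planet_osm_nodes (
        id  bigint PRIMARY KEY,
        lat integer NOT NULL,  -- latitude  scaled to an integer
        lon integer NOT NULL   -- longitude scaled to an integer
    );

With rows of identical width, looking up a node could in principle be a single offset computation into a flat in-memory array instead of a B-tree descent plus heap fetch - that is all the "fixed-length record" idea amounts to.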

You can replicate this use case by loading the planet file into Postgres using osm2pgsql (see [3]). The current record is about 20 hours(!), I think on a machine with 32 GB RAM and SSDs.
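
For anyone who wants to try, an invocation along these lines is typical (illustrative, not taken from [3]; check the osm2pgsql documentation of your version for the exact flags):

    osm2pgsql --slim -d gis -C 24000 planet-latest.osm.pbf

where --slim keeps intermediate node data in database tables and -C sets the in-memory node cache size in MB.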

--Stefan


[1] Michael Stonebraker: “The Traditional RDBMS Wisdom is All Wrong”.
    http://blog.jooq.org/2013/08/24/mit-prof-michael-stonebraker-the-traditional-rdbms-wisdom-is-all-wrong/
[2] Oracle Database In-Memory Option - A Preview: In-Memory Acceleration for All Applications.
    http://www.oracle.com/us/corporate/features/database-in-memory-option/index.html
[3] osm2pgsql benchmarks.
    http://wiki.openstreetmap.org/wiki/Osm2pgsql/benchmarks

2013/11/18 Jeff Janes <jeff.janes@xxxxxxxxx>:
> On Sun, Nov 17, 2013 at 4:02 PM, Stefan Keller <sfkeller@xxxxxxxxx> wrote:
>> 2013/11/18 Andreas Brandl <ml@xxxxxxxxxxxxxx>:
>>> What is your use-case?
>>
>> It's geospatial data from OpenStreetMap stored in a schema optimized for the PostGIS extension (produced by osm2pgsql).
>>
>> BTW: Having said (to Martijn) that using Postgres is probably more efficient than programming an in-memory database in a decent language: OpenStreetMap has a very, very large node table which is heavily used by other tables (like ways) - and becomes rather slow in Postgres.
>
> Do you know why it is slow?  I'd give high odds that it would be a specific implementation detail in the code that is suboptimal, or maybe a design decision of PostGIS, rather than some high level architectural decision of PostgreSQL.
>
> Cheers,
>
> Jeff

