
Fwd: Using Postgres to store high volume streams of sensor readings

   (I'm adding the discussion also to the Postgres list.)

On Fri, Nov 21, 2008 at 11:19 PM, Dann Corbit <DCorbit@xxxxxxxxx> wrote:
> What is the schema for your table?
> If you are using copy rather than insert, 1K rows/sec for PostgreSQL seems very bad unless the table is extremely wide.

   The schema is posted at the beginning of the thread. But in short
it is a table with 4 columns: client, sensor, timestamp and value, all
being int4 (integer). There is only one (compound) index, on client
and sensor...
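
   (Something along these lines; the table and column names are just
the illustrative ones from the description above, not necessarily the
exact DDL I posted earlier:)

    CREATE TABLE sensor_readings (
        client    integer NOT NULL,  -- int4
        sensor    integer NOT NULL,  -- int4
        timestamp integer NOT NULL,  -- int4
        value     integer NOT NULL   -- int4
    );

    -- the single compound index, on (client, sensor)
    CREATE INDEX sensor_readings_client_sensor_idx
        ON sensor_readings (client, sensor);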

   I guess the problem comes from the index...
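
   (If the index really is the bottleneck, one quick test would be to
bulk-load with COPY while the index is dropped, and recreate it after
the load. A minimal sketch, reusing the illustrative names above and
an example file path:)

    -- drop the compound index before the bulk load
    DROP INDEX sensor_readings_client_sensor_idx;

    -- load the readings with COPY instead of row-by-row INSERTs
    -- (the file path here is only an example)
    COPY sensor_readings (client, sensor, timestamp, value)
        FROM '/tmp/readings.tsv';

    -- recreate the index once the data is in
    CREATE INDEX sensor_readings_client_sensor_idx
        ON sensor_readings (client, sensor);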


> Memory mapped database systems may be the answer to your need for speed.
> If you have a single inserting process, you can try FastDB, but unless you use a 64 bit operating system and compiler, you will be limited to 2 GB file size.  FastDB is single writer, multiple reader model.  See:
> http://www.garret.ru/databases.html
>
> Here is output from the FastDB test program testperf, when compiled in 64 bit mode (the table is ultra-simple, with only a string key and a string value, plus a btree and a hashed index on the key):
> Elapsed time for inserting 1000000 record: 8 seconds
> Commit time: 1
> Elapsed time for 1000000 hash searches: 1 seconds
> Elapsed time for 1000000 index searches: 4 seconds
> Elapsed time for 10 sequential search through 1000000 records: 2 seconds
> Elapsed time for search with sorting 1000000 records: 3 seconds
> Elapsed time for deleting all 1000000 records: 0 seconds
>
> Here is a bigger set so you can get an idea about scaling:
>
> Elapsed time for inserting 10000000 record: 123 seconds
> Commit time: 13
> Elapsed time for 10000000 hash searches: 10 seconds
> Elapsed time for 10000000 index searches: 82 seconds
> Elapsed time for 10 sequential search through 10000000 records: 8 seconds
> Elapsed time for search with sorting 10000000 records: 41 seconds
> Elapsed time for deleting all 10000000 records: 4 seconds
>
> If you have a huge database, then FastDB may be problematic because you need free memory equal to the size of your database.
> E.g. a 100 GB database needs 100 GB memory to operate at full speed.  In 4GB allotments, at $10-$50/GB 100 GB costs between $1000 and $5000.

   Unfortunately the database will (eventually) be too large to keep
all of it in memory...

   For the moment, I don't think I'll be able to try FastDB... I'll
put it on my reminder list...


> MonetDB is worth a try, but I had trouble getting it to work properly on 64 bit Windows:
> http://monetdb.cwi.nl/

   I've heard of MonetDB -- it's from the same family as
Hypertable... Maybe I'll give it a try after I finish with SQLite...

   Ciprian Craciun.

-- 
Sent via pgsql-general mailing list (pgsql-general@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
