Hi,
I use PostgreSQL 8.3.1-1 to store a large volume of data coming from many sensors. To get good performance when querying by timestamp on each sensor, I partitioned my measures table per sensor, which results in a lot of tables.
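
For reference, the tables are generated with a loop along these lines (a sketch only; the function name is made up, and the real script also creates the six indexes shown in the DDL below for each table):

CREATE OR REPLACE FUNCTION create_measures_tables(n integer) RETURNS void AS $$
BEGIN
    FOR i IN 0 .. n - 1 LOOP
        -- one table per sensor: measures_0, measures_1, ...
        EXECUTE 'CREATE TABLE measures_' || i || ' (
                     "timestamp" timestamp without time zone,
                     storedtime  timestamp with time zone,
                     count       smallint,
                     "value"     smallint[]
                 ) WITH (OIDS=FALSE)';
    END LOOP;
END;
$$ LANGUAGE plpgsql;

SELECT create_measures_tables(3000);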
I simulated a large sensor network with 3000 nodes, so I have ~3000 tables. It appears that each insert (each in its own transaction) takes about 300 ms (3-4 inserts per second), even in tables that contain only a few tuples (< 10). As you can imagine, this is far too slow, because I have to handle a large number of inserts.
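
Each measurement arrives as a single statement, roughly like this (the values are illustrative; autocommit is on, so every statement is its own transaction):

INSERT INTO measures_0 ("timestamp", storedtime, count, "value")
VALUES ('2008-05-26 12:00:00', now(), 3, '{12,34,56}');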
Do you have any idea why it is that slow, and how I can get good insert performance?
My test machine: Intel P4 2.4 GHz, 512 MB RAM, Ubuntu Server (ext3)
iostat shows ~0.5 MB/s of reads and ~6-7 MB/s of writes while inserting continuously.
Here is the DDL of one of the measures tables (every table carries the same six indexes, so ~3000 tables means roughly 18,000 indexes in total):
-------------------------------------------------------
CREATE TABLE measures_0
(
"timestamp" timestamp without time zone,
storedtime timestamp with time zone,
count smallint,
"value" smallint[]
)
WITH (OIDS=FALSE);
-- Index: measures_0_1_idx
CREATE INDEX measures_0_1_idx
ON measures_0
USING btree
((value[1]));
-- Index: measures_0_2_idx
CREATE INDEX measures_0_2_idx
ON measures_0
USING btree
((value[2]));
-- Index: measures_0_3_idx
CREATE INDEX measures_0_3_idx
ON measures_0
USING btree
((value[3]));
-- Index: measures_0_count_idx
CREATE INDEX measures_0_count_idx
ON measures_0
USING btree
(count);
-- Index: measures_0_timestamp_idx
CREATE INDEX measures_0_timestamp_idx
ON measures_0
USING btree
("timestamp");
-- Index: measures_0_value_idx
CREATE INDEX measures_0_value_idx
ON measures_0
USING btree
(value);
-------------------------------------------------------
Regards
Loïc Petit