On 1/30/25 18:45, Adrian Klaver wrote:
On 1/30/25 06:18, Achilleas Mantzios - cloud wrote:
Dear PostgreSQL people
We have a project having to do with capturing sensor data and alarms
from various machinery using various protocols/standards (e.g. NMEA,
MODBUS). We have about 150 sites (vessels of various types), and each
site will generate about 500 rows per minute. We have solved most of
the design issues regarding the DB, data transfer to the central DB,
etc., and now we have started to think about presenting the data.
Ideally and eventually each one of those will be connected to some
machinery item defined in our inventory / live system, so the final
goal is to have something like "mimics": an interactive UI that will
let the user get an overall view of the fleet (all vessels), spot the
red/orange ones, then click on a vessel, spot the red/orange areas,
and keep zooming in until he/she finds the exact item that is in
trouble.
This is the rough idea.
But for now we are more interested in just viewing the incoming data,
which will basically be tag/value pairs.
These data come in two types: normal tags with a plain float value,
and alarm data, which carry two discrete boolean variables (valid and
acknowledged). Those are the two cases we want to cover as far as
viewing is concerned.
The data will have one-minute resolution. So if a vessel has, e.g.,
700 discrete tags, then the system will generate and transfer 700
rows to the central DB every minute.
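A rough back-of-envelope on volume (assuming 150 vessels at ~500 rows
per minute each, as above):

```sql
-- rough ingest volume, assuming 150 vessels at ~500 rows/minute each
SELECT 150 * 500            AS rows_per_minute,  -- 75,000
       150 * 500 * 60 * 24  AS rows_per_day;     -- 108,000,000
```

So on the order of 100M rows/day arriving at the central DB.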
Any ideas? Time-series graphs? We have (Java) graphs that show time
series at daily resolution. What would you recommend for showing such
data, coming from data acquisition systems?
It seems there are two uses in play here:
1) Minute by minute data acquisition to track sensor and alarm output
in general.
2) Troubleshooting emerging problems.
For 2) you seem to already have the idea of drilling down on values
that have moved into the orange --> red area. That does not require
displaying all the data, just the data that enters the zone of interest.
For 1) the question is: do you think folks are going to look at 700
values recorded on a minute-by-minute basis on 150 ships? Where that
information could help is in supplying context for the problems
arising in 2), especially intermittent issues. I could see picking a
problem defined in 2) and allowing the user to select both a time
period and related values over that period, to help determine how the
problem developed.
Thank you Adrian!
Yes, the ultimate goal would be to have a graphical overview where the
user can finally zoom in until he/she spots the exact single problem at
hand, then view trends or other historical data that might have
contributed to or caused this specific event.
We took a look at TimescaleDB, and it seems to fit what we need
completely:
- postgresql
- time series data
- efficient storage, efficient querying
- out of the box partitioning
- etc
Now my problem is the design. We have:
a) tags that have primitive values, float4 let's say - this is the
majority, e.g. 60% of all tags
b) tags that contain alarm data, also with a defined structure, which
have additional fields such as the time the alarm was initially set,
acknowledgement of the alarm, and validity of the alarm. Those
represent something like 35% of all tags
c) tags that are basically polymorphic (about 11 of them all in all);
each one has a different structure, and their fields/columns range
from a few (1) up to many (25)
We have a table for a) and a table for b).
If we followed a strictly normalized approach then we would
additionally create 11 tables, one per tag of type c). And we are not
guaranteed that the same tags will have the same structure across the
whole fleet / across manufacturers. So we are thinking of putting all
semi-structured data for tags of type c) into one table with a single
column of type jsonb. From what I read, TimescaleDB plays nicely with
jsonb (or at least not badly).
Do you people see any gotchas with this approach?
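To make it concrete, here is a minimal sketch of the three table
shapes we have in mind (all names and columns are just illustrative,
not our final schema):

```sql
-- a) tags with primitive float values
CREATE TABLE tag_float (
    vessel_id  int         NOT NULL,
    tag        text        NOT NULL,
    ts         timestamptz NOT NULL,
    value      float4,
    PRIMARY KEY (vessel_id, tag, ts)
);

-- b) alarm tags with a fixed extra structure
CREATE TABLE tag_alarm (
    vessel_id    int         NOT NULL,
    tag          text        NOT NULL,
    ts           timestamptz NOT NULL,
    alarm_set_at timestamptz,   -- time the alarm was initially set
    valid        boolean,
    acknowledged boolean,
    PRIMARY KEY (vessel_id, tag, ts)
);

-- c) polymorphic tags: one row per reading, payload in jsonb
CREATE TABLE tag_misc (
    vessel_id int         NOT NULL,
    tag       text        NOT NULL,
    ts        timestamptz NOT NULL,
    payload   jsonb,
    PRIMARY KEY (vessel_id, tag, ts)
);

-- a GIN index makes containment queries on the payload feasible,
-- e.g.: SELECT ... FROM tag_misc WHERE payload @> '{"valid": true}'
CREATE INDEX ON tag_misc USING gin (payload);
```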
For starters we will not convert to TimescaleDB yet, but will store
and handle these as normal tables, at least until we grasp the ins
and outs of it.
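(If/when we do convert, my understanding from the TimescaleDB docs is
that it is basically a one-liner per table, partitioning on the time
column, e.g. for the float-tag table:)

```sql
-- assumes the timescaledb extension is installed;
-- migrate_data => true is needed if the table already holds rows
SELECT create_hypertable('tag_float', 'ts', migrate_data => true);
```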
Thank you!