
Re: Ideas about presenting data coming from sensors


 




On 2/26/25 18:29, Adrian Klaver wrote:
On 2/26/25 01:27, Achilleas Mantzios - cloud wrote:
Hi Again

So far we have the data acquisition system running for just one ship, and we are writing the code to display the data. In less than 20 days we have accumulated 6M rows.

I gave Timescale a shot, installed locally as an extension, and it seems much nicer than having to do all the partition management by hand or with other tools. However, it is more of a complete engine with its own background workers, so it feels like something new and big, a long-term commitment and investment on top of the 25+ year commitment we already have to PostgreSQL itself.

So this is a serious decision; people, please share your stories with Timescale.


I don't use Timescale, so this will not be about specifics. It seems to me you are well on the way to answering your own question with the choices you presented:

a) '... it seems much prettier than having to do all the partition mgmt by hand or other tools.'

b) 'However this seems more than a complete engine with its own workers, ...'

Either you do the work to build your own solution or you leverage other folks' work. The final answer comes down to what fits your situation: which solution is easier to implement with the resources you have available. Either one is going to end up being a long-term commitment.
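For context, the "by hand" route is PostgreSQL's native declarative partitioning, where new time-range partitions must be created, and old ones detached, by your own scheduled jobs. A minimal sketch, using hypothetical table and column names for a sensor workload like the one described:

```sql
-- Hypothetical sensor readings table, range-partitioned by timestamp.
CREATE TABLE sensor_readings (
    ship_id     integer     NOT NULL,
    sensor_id   integer     NOT NULL,
    recorded_at timestamptz NOT NULL,
    value       double precision
) PARTITION BY RANGE (recorded_at);

-- Each period's partition must exist before rows arrive, typically
-- created ahead of time by a cron or pg_cron job:
CREATE TABLE sensor_readings_2025_03
    PARTITION OF sensor_readings
    FOR VALUES FROM ('2025-03-01') TO ('2025-04-01');

-- Retention is equally manual: detach and drop expired partitions.
ALTER TABLE sensor_readings DETACH PARTITION sensor_readings_2025_03;
DROP TABLE sensor_readings_2025_03;
```

Extensions such as pg_partman automate this partition lifecycle on stock PostgreSQL; Timescale's hypertables do comparable chunk management transparently.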

Thank you Adrian for all your companionship and contributions in this thread!

In my haste I made some typos, and maybe I was not well understood by potential readers. I mean that we have been a traditional PostgreSQL house for 25 years. I started this DB from scratch, and now the whole topology of PostgreSQL servers (soon 200, in all seven seas) holds about 60 TB of data. Since day one I have been compiling from source: the base postgres, the contrib modules, my own functions, extra extensions. We have never relied on a commercial offering, official package, or docker image, you name it.

So now, for the first time, I come across a situation where the package/extension in question is big, has a somewhat different documentation style than core postgres, and I still cannot find my way around it. Plus the concern: I know PostgreSQL will be here well after I retire, but what about Timescale? If they go out of business or no longer support newer PostgreSQL versions, what do we do? Freeze the system for weeks and move 100TB of data? Employ some logical replication from Timescale to native postgres, somehow utilizing the new table "routing" rules that are available, or will be available by then? Hire a known PostgreSQL support company to do the job? Write my own data migration solution?
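On the logical-replication exit strategy: this is only a sketch under assumptions that would need testing, since hypertable chunks are child tables and whether a given TimescaleDB version can publish them cleanly depends on that version. On plain PostgreSQL 13+, the relevant mechanism for partitioned publishers is `publish_via_partition_root`, which presents child-table changes as changes to the parent, with hypothetical names:

```sql
-- On the source side: publish the parent table, routing child/chunk
-- changes through the root so the subscriber sees a single table.
CREATE PUBLICATION sensors_out
    FOR TABLE sensor_readings
    WITH (publish_via_partition_root = true);

-- On the target (stock PostgreSQL) side: create a plain or natively
-- partitioned table with matching columns, then subscribe:
CREATE SUBSCRIPTION sensors_in
    CONNECTION 'host=source-host dbname=shipdb user=repl'
    PUBLICATION sensors_out;
```

Whether this works against a hypertable publisher is an assumption to verify against the actual TimescaleDB release, not a confirmed migration path.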

That's why I am asking for user experiences on timescale.








