
number of tables limited over time (not simultaneous)?


 



We've settled on a method for gathering raw statistics from widely scattered data centers: creating one sequence per event, per minute.

Each process (some LAPP, some shell, some Python, some Perl, etc.) can call a shell script which uses ssh to run psql and execute nextval('event') on the sequence. Periodically (every 2-10 minutes, depending on other factors) another process picks up the value and inserts it into a permanent home.
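The counting call might look roughly like this; the host, database, and sequence-name format below are assumptions for illustration, since the actual script isn't shown in the post:

```shell
#!/bin/sh
# Hypothetical sketch of the per-event counting call.
# "stats-host", "statsdb", and the sequence-name format are assumptions.

# Build a per-event, per-minute sequence name,
# e.g. event_pageview_202401011200
seqname() {
  printf 'event_%s_%s' "$1" "$(date -u +%Y%m%d%H%M)"
}

# Bump the counter remotely, as the post describes (ssh -> psql -> nextval):
# ssh stats-host "psql -qtA -c \"SELECT nextval('$(seqname pageview)')\" statsdb"
```

Since nextval() is non-transactional, each call is a single cheap round trip with no row locking or MVCC bloat, which is presumably where the savings over an UPDATE come from.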

We're only talking 7-10k calls per minute, but going to this from a query that does an update has saved a *huge* amount of overhead.
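The periodic pickup described above could be sketched like this; the table name, sequence name, and the DROP at the end are assumptions about how one might archive and retire a finished per-minute counter:

```shell
#!/bin/sh
# Sketch of the collector side; table/sequence names are illustrative,
# not taken from the original post.

# Emit SQL that archives one per-minute counter into a permanent table,
# then drops the now-finished sequence. In Postgres a sequence can be
# read like a one-row table: SELECT last_value FROM seqname;
collect_sql() {
  seq="$1"
  cat <<SQL
INSERT INTO event_counts (seq_name, collected_at, count)
  SELECT '$seq', now(), last_value FROM $seq;
DROP SEQUENCE $seq;
SQL
}

# As in the post, run it over ssh (host/db names hypothetical):
# collect_sql event_pageview_202401011200 | ssh stats-host psql -q statsdb
```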

If I needed to, a periodic dump and restore would only take a minute. This data is highly transient. More frequently than biweekly or so would be annoying, though.

Aside from security concerns, did we miss something? Should I be worried we're going through ~60,000 sequences per day?

TIA,
dave




