RE: Performance Optimisation - Identifying the correct DB

Hello,

As I understand it, you should not have any issues reading 200K records in Postgres, unless your tables are being heavily written to at the same time, or are over-indexed (or not indexed at all). You can also check how much your tables have 'bloated' to see whether you need to run a VACUUM on them:

-- Dead-tuple ratio and last autovacuum/autoanalyze time for each table
SELECT schemaname, relname, n_live_tup, n_dead_tup,
       trunc(100.0 * n_dead_tup / (n_live_tup + 1))::float AS "ratio%",
       to_char(last_autovacuum, 'YYYY-MM-DD HH24:MI:SS') AS autovacuum_date,
       to_char(last_autoanalyze, 'YYYY-MM-DD HH24:MI:SS') AS autoanalyze_date
FROM pg_stat_all_tables
ORDER BY last_autovacuum;

This shows how many dead tuples are lying around in your tables, along with the last time each table was autoanalyzed and autovacuumed by the autovacuum daemon.
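If that ratio comes back high for the tables you are reading from, a manual vacuum is worth trying, since it reclaims the dead tuples and refreshes the planner statistics. A minimal sketch, assuming a hypothetical table public.orders with an index on customer_id (substitute your own names):

-- Reclaim dead tuples and update planner statistics in one pass
-- (VERBOSE prints per-table details about what was reclaimed)
VACUUM (VERBOSE, ANALYZE) public.orders;

-- Then confirm the read query actually uses an index rather than
-- scanning the whole (possibly bloated) table
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM public.orders WHERE customer_id = 42;

If the plan still shows a sequential scan touching far more pages than the live row count justifies, that points back at bloat or a missing index.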

Hope that helps a bit,

Eshara

From: Renjith Gk <renjithgk@xxxxxxxxx>
Sent: Tuesday, April 23, 2019 9:07 AM
To: pgsql-admin@xxxxxxxxxxxxxxxxxxxx
Subject: Performance Optimisation - Identifying the correct DB

Hello Friends,

I am looking for suggestions on identifying the right DB for READ operations (fetching queries).

What is a reasonable execution time for reading 200K records in Postgres? We had issues reading records in Cassandra, which timed out at around 200K records.

Any solutions or recommendations regarding Postgres would be appreciated.

Best regards,

Renjith

