On 9/7/2005 10:45 PM, Leonid Safronie wrote:
Hi, ppl

Is there any way to do SELECTs with different priorities?

Once a month I need to run some complex reports on a table with over 7 billion rows, which involves several nested SELECTs and grouping (the query runs for over 20 minutes on a P4/2.4GHz). Concurrently, there are over 50 processes updating tables in the same database, including the table being SELECTed for the monthly report.

The issue is that response time for these 50 processes matters much more than report generation time, and the slowdown they suffer while the report is running is unacceptable in my production environment (response time grows from 1-3 seconds to 1-2 minutes).

Is there any way to give different priorities to different queries/transactions, the way it is done for VACUUMing (the vacuum_cost_* options in the config file)?
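For reference, the vacuum cost knobs mentioned above look roughly like this in postgresql.conf (8.0 and later); the values below are only illustrative, check the documentation for your version:

    vacuum_cost_delay = 10        # ms to sleep once the cost limit is reached (0 disables throttling)
    vacuum_cost_page_hit = 1      # cost of a page found in shared buffers
    vacuum_cost_page_miss = 10    # cost of a page read from disk
    vacuum_cost_page_dirty = 20   # cost of dirtying a page
    vacuum_cost_limit = 200       # accumulated cost at which VACUUM sleeps

There is no equivalent cost-based throttle for ordinary queries, though.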
Fiddling with Postgres backend priorities (nice) has been beaten to death on this list before, and was always rejected for good reasons: among other things, a reniced backend can still hold locks that your high-priority backends then have to wait on (priority inversion), so it tends to make the problem worse rather than better.
Which Postgres version is this? Everything pre-8.0 will suffer from complete cache eviction on sequential scans of huge tables.
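If you are not sure which version you are running, either of these will tell you:

    SELECT version();
    SHOW server_version;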
Have you thought about replicating the database to a "reporting slave" and running the monthly report there? (A minimal refresh sketch follows after the signature.)

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@xxxxxxxxx #
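P.S. If a full replication system (Slony-I, etc.) is more than you want to maintain, even a periodic dump/restore into a separate reporting instance keeps the 20-minute query off the production box. A minimal sketch, assuming a second host called "reportdb" and a database named "reporting" (both made-up names here), using only the standard pg_dump/createdb/psql tools:

    # Rebuild the reporting copy shortly before the monthly run.
    dropdb -h reportdb reporting
    createdb -h reportdb reporting
    pg_dump -h proddb production | psql -h reportdb reporting

    # Run the heavy report against the copy, not against production.
    psql -h reportdb reporting -f monthly_report.sql

The production server then only pays the sequential, read-only cost of the dump; the nested SELECTs and grouping all run on the other machine. Whether the dump is cheaper than the report itself depends on your data, so treat this as a starting point, not a recommendation.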