On Mon, Jun 26, 2023 at 5:47 AM Avin Kavish <avin@xxxxxxxxxxxx> wrote:
Sounds like the problem you are having is that the server is running out of temporary resources for the operation the users are trying to do. So, according to Tom, the operation cannot be optimized further on the Postgres side.
I think you have a few choices here:
- See if increasing the server's resources will let them run the operation (a rough sketch of the relevant settings is below)
- Ask users not to do that operation
- Use an extension like Citus to scale horizontally
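On the first point, I'm only guessing at which resource is being exhausted, but if it's per-operation memory or temp-file space, the usual knobs are work_mem and temp_file_limit. The values here are placeholders you'd size to your own hardware:

  -- raise per sort/hash memory before spilling to disk (placeholder value)
  ALTER SYSTEM SET work_mem = '256MB';
  -- allow more temp-file space per session (placeholder value)
  ALTER SYSTEM SET temp_file_limit = '50GB';
  -- reload so new sessions pick up the changes
  SELECT pg_reload_conf();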
But I'm wondering why a massively inefficient join is needed in the first place. Shouldn't joins follow keyed relationships, ideally on a unique indexed column, but at the very least on an indexed column? Why is a join required on a dynamically calculated substring? Can it be made into a static computed value and indexed? Substring sounds like an operation that belongs in the filter stage.
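Just as a sketch (the table and column names below are made up, and the stored generated column needs v12 or later), you could either precompute the substring into its own column or index the expression directly:

  -- precompute the join key once and store it
  ALTER TABLE patient_events
    ADD COLUMN code_prefix text
    GENERATED ALWAYS AS (substring(event_code from 1 for 8)) STORED;
  CREATE INDEX ON patient_events (code_prefix);

  -- or, without changing the schema, index the expression itself
  CREATE INDEX ON patient_events ((substring(event_code from 1 for 8)));

Either way the planner can use the index for the join instead of recomputing the substring row by row, as long as the query uses the same expression.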
Can you describe your data model? Maybe we can give some specific advice.
There is a set of big tables containing fine-grained health data. The DB is accessed by people doing research on various kinds of illnesses. So, by nature, all columns (I mean ALL) can be used in every kind of SQL, including, obviously, lots of intricate joins.
This sounds like the kind of problem normally solved by data warehouses. Is your schema designed like a DW, or is it in 3NF?
--
Born in Arizona, moved to Babylonia.