Hello Experts,
We have a requirement in which queries will be formed like the one below: two partitioned tables are joined, and filters may be applied to columns of both tables, or to just one of them.
These queries run very frequently and are critical to customers, as they are part of search screens, so we want indexing that serves them effectively and returns rows in no more than ~1 second. Both tables are daily range-partitioned on the column "part_date", and the data volume per day/partition is ~700 million rows in each table.
A customer can search over a duration from one day up to a maximum of ~1 month, i.e. part_date spanning up to ~1 month. The search should show the latest transactions on the screen, which is why the "order by ... limit" clause is used. "Offset" is used because the customer can scroll to the next page in the UI, where he can see the next 100 rows, and so on. On the first screen the customer also expects to see the count of the results, so that he gets an immediate idea of the total number of transactions matching his search criteria.
So,
1) In the query below, if the optimizer chooses tab1 as the driving table, is an index on just col1 enough, or should it be a composite index on (col1, tab1_id)? Similarly, if it chooses tab2 as the driving table, should the index be (col2, tab2_id)? Or is it enough to index only the filtered columns, i.e. individual indexes on col1 of tab1 and col2 of tab2? (The candidate definitions are sketched after the query below.)
2) In scenarios where a customer has a lot of matching transactions (say, millions) after all the filters are applied, and the customer only needs to see the latest 100 rows, the sort becomes a bottleneck. What can be done to make such queries return the latest transactions quickly on the search screen? (The paging pattern is sketched after the query below.)
3) Since the count also has to be produced up front so that the customer knows the total number of matching rows (which may be in the millions), what additional index would support this requirement? (The count query is sketched after the query below.)
Or is there any other optimization strategy we can follow to cater to such queries?
select * from tab1, tab2
where tab1.part_date between '1-jan-2024' and '31-jan-2024'
  and tab1.part_date = tab2.part_date
  and tab1.tab1_id = tab2.tab2_id
  and tab1.col1 = <:input_col1>
  and tab2.col2 = <:input_col2>
order by tab1.create_timestamp desc
limit 100 offset 100;
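For question 1, these are the candidate index definitions we are weighing. This is only a sketch: the index names are made up, and since both tables are range-partitioned on part_date, the indexes would be defined on the partitioned parents.

  -- Option A: individual indexes on the filter columns only
  create index tab1_col1_idx on tab1 (col1);  -- hypothetical name
  create index tab2_col2_idx on tab2 (col2);  -- hypothetical name

  -- Option B: composite indexes that also cover the join key,
  -- for whichever table the optimizer picks as the driving table
  create index tab1_col1_id_idx on tab1 (col1, tab1_id);
  create index tab2_col2_id_idx on tab2 (col2, tab2_id);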
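For question 2, this is the paging pattern the UI drives: page N re-runs the same query with the offset advanced to (N - 1) * 100, so the database has to sort every matching row before it can discard the skipped ones. For example, the third page would look like this (a sketch, with the same bind placeholders as above):

  select * from tab1, tab2
  where tab1.part_date between '1-jan-2024' and '31-jan-2024'
    and tab1.part_date = tab2.part_date
    and tab1.tab1_id = tab2.tab2_id
    and tab1.col1 = <:input_col1>
    and tab2.col2 = <:input_col2>
  order by tab1.create_timestamp desc
  limit 100 offset 200;  -- page 3: skip rows 1-200, return rows 201-300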
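And for question 3, the count shown on the first screen comes from the same filters with the ordering and paging dropped, roughly:

  -- total number of transactions matching the search criteria
  select count(*)
  from tab1, tab2
  where tab1.part_date between '1-jan-2024' and '31-jan-2024'
    and tab1.part_date = tab2.part_date
    and tab1.tab1_id = tab2.tab2_id
    and tab1.col1 = <:input_col1>
    and tab2.col2 = <:input_col2>;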
Regards
Sud