Re: 2 million queries against a table

On 7/15/20 10:10 AM, Adam Sanchez wrote:
Hi

I need to run 2 million queries against a three-column table t
(s, p, o) whose size is 10 billion rows. The data type of each column
is string.  The server has 512 GB RAM, 32 cores and 14 TB SSD (RAID 0)

Only two types of queries:

select s, p, o from t where s = param
select s, p, o from t where o = param

What's the index cardinality of "s" and "o" (about how many records per key)?
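A rough way to check (a sketch, assuming t has been ANALYZEd; a
negative n_distinct means a fraction of the row count):

    SELECT attname, n_distinct
      FROM pg_stats
     WHERE tablename = 't'
       AND attname IN ('s', 'o');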

What kind of indexes do you have on them?  Is the table clustered on one of the keys?
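For those two lookups you'd normally want a plain btree index on each
column, something like (index names made up):

    CREATE INDEX t_s_idx ON t (s);
    CREATE INDEX t_o_idx ON t (o);

    -- Clustering on the more frequently queried key improves locality,
    -- but rewriting a 10 billion row table is expensive:
    CLUSTER t USING t_s_idx;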

What version of Postgresql?
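(Quick check:

    SELECT version();
)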

If I store the table in a Postgresql database, running the queries
takes 6 hours using a Java ThreadPoolExecutor.

How many threads?
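From the server side you can see how many connections the pool
actually opens:

    SELECT count(*)
      FROM pg_stat_activity
     WHERE datname = current_database();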

What values of:
shared_buffers
temp_buffers
work_mem
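For example:

    SELECT name, setting, unit
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'temp_buffers', 'work_mem');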

Do you think Postgresql itself can speed up the query processing
even more? What would be the best strategy?

2M queries in 6 hours is about 93 queries/second.  Spread over 32 cores, that's only about three queries per second per core.  Not very much.

These are my ideas:

1. Use Spark to launch the queries against the table (extracted from
Postgresql) loaded in a dataframe.
2. Use PG-Strom, an extension module of PostgreSQL with GPU support,
and launch the queries against the table.


Any suggestion will be appreciated

IO is -- as usual -- the bottleneck, followed closely by cache efficiency.  Are you issuing the queries in a random order, or sequentially by key (which would enhance cache efficiency)?
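If the 2M parameters are known up front, one way to get key-ordered,
scan-friendly access is to load them into a staging table and join,
instead of issuing 2M point queries.  A sketch (the staging table and
input file are made up; \copy is psql's client-side load):

    CREATE TEMP TABLE params (v text);
    \copy params FROM 'params.txt'

    -- one pass over the table/index instead of 2M round trips:
    SELECT t.s, t.p, t.o
      FROM t
      JOIN params ON t.s = params.v;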

--
Angular momentum makes the world go 'round.




