You'll have to provide more data:

On Mon, Oct 21, 2019 at 7:14 AM Pawan Sharma <pawanpg0963@xxxxxxxxx> wrote:
> Having real high CPU issue (95-98%), with SELECT statements and select queries contains multiple AND operator, is it will cause any CPU Spike..???

If you run a query, the only reason it is not hitting 100% CPU is that it is waiting for something, like disk, network, or locked data. So with all data cached and fast networks (or small results), a standalone query will use 100% CPU during its run (it must; if it does not, something is rotten in the scheduler). If some set of queries is using 100% CPU for, say, 5 minutes while working on cached data, that is normal. A proper question would be "x queries of type y are eating 95% CPU on my machine Z, with this type of data; I think this is not correct because blah-blah".

A spike during query execution is the correct outcome. CPU time is not like petrol: if you do not use it, it is lost, and care is taken (with things like parallel jobs) to ensure queries use as much CPU time as possible. For example, if a query needs, say, 5 core-seconds, then using 10% of an octa-core machine takes 6.25 wall seconds, while using 100% takes 0.625.

Also note that, locks and big result transmissions aside, a fully cached DB is CPU-limited (and would be for selects with complex conditions). I would expect 100% usage if enough clients (for 24 CPUs) are doing complex queries against a cached database. Your problem may be "it is using 100% of 24 CPUs for a minute" where you think "it should be just a second", but the 100% figure is the better one; you do not want your CPUs to sit idle.

> apps team is using sub-partition, PG11, CPU:24, Mem: 16GB
...
> effective_cache_size
> ----------------------
> 22GB
...
> max_worker_processes
> ----------------------
> 8

I may be misled, but isn't 16GB a little spartan for 24 CPUs with 8 workers (per query?). Also, I assume the 22GB is because you are accounting for a huge host cache.

Francisco Olarte.
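(The core-seconds arithmetic above can be sanity-checked with a quick back-of-the-envelope sketch in Python; the helper name and the 5 core-second / 8-core figures are just the illustrative assumptions from the example, not anything measured.)

```python
def wall_seconds(core_seconds: float, utilization: float, cores: int) -> float:
    """Wall-clock time for a job needing `core_seconds` of CPU work,
    when it gets `utilization` (0..1) of `cores` CPUs."""
    return core_seconds / (utilization * cores)

# A query needing 5 core-seconds on an octa-core machine:
print(wall_seconds(5, 0.10, 8))  # 10% utilization -> 6.25 wall seconds
print(wall_seconds(5, 1.00, 8))  # 100% utilization -> 0.625 wall seconds
```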