On 7/17/19 7:59 AM, Volkan Unsal wrote:
> Aha, it's due to the trigger, isn't it?

Yes.
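Since the EXPLAIN ANALYZE below shows nearly all of the runtime (~341 s of ~346 s) inside trigger_populate_tsv_body_on_projects, one workaround is to disable that trigger around the one-off bulk update. This is only a sketch: it assumes you have exclusive access to the table during the update, that the trigger only maintains a derived tsvector column, and that you can rebuild that column afterwards if needed (ALTER TABLE ... DISABLE TRIGGER takes a heavy lock, so rows changed while it is off will not get their tsv refreshed):

BEGIN;
-- Stop the per-row tsvector maintenance for the bulk change.
ALTER TABLE projects DISABLE TRIGGER trigger_populate_tsv_body_on_projects;
UPDATE projects SET misc = misc - 'foo';
ALTER TABLE projects ENABLE TRIGGER trigger_populate_tsv_body_on_projects;
COMMIT;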
On Wed, Jul 17, 2019 at 10:58 AM Volkan Unsal <spocksplanet@xxxxxxxxx> wrote:
@Adrian
More information about my setup:
Postgres version:
PostgreSQL 10.9 (Debian 10.9-1.pgdg90+1) on x86_64-pc-linux-gnu,
compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
Table schema:
CREATE TABLE public.projects (
misc jsonb DEFAULT '{}'::jsonb NOT NULL
);
Explain analyze:
explain analyze update projects set misc = misc - 'foo';

Update on projects  (cost=0.00..4240.93 rows=10314 width=1149) (actual time=346318.291..346318.295 rows=0 loops=1)
  ->  Seq Scan on projects  (cost=0.00..4240.93 rows=10314 width=1149) (actual time=1.011..266.435 rows=10314 loops=1)
Planning time: 40.087 ms
Trigger trigger_populate_tsv_body_on_projects: time=341202.492 calls=10314
Execution time: 346320.260 ms
Time: 345969.035 ms (05:45.969)
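Note that the trigger line above accounts for almost the entire execution time, and it fires once per row even though the key 'foo' was absent and no row's misc value actually changed. A minimal sketch of one mitigation, assuming 'foo' stands in for the key you want to drop: restrict the update to rows that actually contain the key, so unchanged rows are never rewritten and the trigger never fires for them.

-- The jsonb ?-operator tests for top-level key existence, so this
-- skips every row when the key is missing (and only touches matching
-- rows when it is present):
UPDATE projects SET misc = misc - 'foo' WHERE misc ? 'foo';

A GIN index on misc (jsonb_ops) could additionally let the planner avoid the sequential scan for the WHERE clause, though with 10K rows the scan itself is cheap here.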
On Wed, Jul 17, 2019 at 10:39 AM Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 7/17/19 7:30 AM, Volkan Unsal wrote:
> I'm trying to remove a key from a jsonb column in a table with 10K rows,
> and the performance is abysmal. When the key is missing, it takes 5
> minutes. When the key is present, it takes even longer.
>
> Test with non-existent key:
>
> >> update projects set misc = misc - 'foo';
> Time: 324711.960 ms (05:24.712)
>
> What can I do to improve this?
Provide some useful information:
1) Postgres version
2) Table schema
3) Explain analyze of query
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx
--
*Volkan Unsal*
/Product Engineer/
volkanunsal.com
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx