On Sat, Oct 10, 2015 at 10:00 PM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 10/09/2015 08:30 PM, Victor Blomqvist wrote:
Note that these errors usually happen only very briefly, at the same time as the ALTER is run. When I did some experiments today, the server handled around 3k req/s in total, with maybe 0.1% of them touching the table being altered, and the error then happened maybe 1-10% of the times I tried the operation. If I do the operation on a table with more load, the error happens more frequently.
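For anyone who wants to try to reproduce this, a minimal harness might look like the sketch below. This is my own guess at a setup, not Victor's actual workload; it assumes the users table and select_users() function that Adrian defines further down in the thread, and whether the error actually fires depends on the ALTER landing at the right moment.

-- Session A: call the function in a tight loop so its cached plan stays in use.
DO $$
BEGIN
    FOR i IN 1 .. 100000 LOOP
        PERFORM * FROM select_users(18);
    END LOOP;
END
$$;

-- Session B, while Session A is still looping:
ALTER TABLE users ADD COLUMN extra integer;   -- or DROP a column; either changes the row type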
Out of curiosity more than anything else, what happens if you ADD a column instead of DROP a column in the experiment?
The same behaviour. (Actually it's more annoying than when it happens with DROPs, since we do ADDs much more often.)
Also, someone suggested that I try recreating the functions that return the table inside the same transaction, but that did not change anything:
BEGIN;
ALTER TABLE...
CREATE OR REPLACE FUNCTION ...
END;
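Spelled out against the example table used further down, that attempt presumably looked something like the following (my reconstruction, borrowing the users table and select_users() function from Adrian's test below):

BEGIN;
ALTER TABLE users DROP COLUMN to_be_removed;
CREATE OR REPLACE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$BEGIN RETURN QUERY SELECT * FROM users WHERE id = id_; END;$$ LANGUAGE plpgsql;
COMMIT;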
Thanks for your help so far!
/Victor
On Fri, Oct 9, 2015 at 10:49 PM, Adrian Klaver <adrian.klaver@xxxxxxxxxxx> wrote:
On 10/09/2015 07:31 AM, Albe Laurenz wrote:
Adrian Klaver wrote:
For the reason why this is happening see:
http://www.postgresql.org/docs/9.4/interactive/plpgsql-implementation.html#PLPGSQL-PLAN-CACHING
Yes, but the ALTER TABLE causes the plan to be recreated
the next time.
But does it? From the link above:
"Because PL/pgSQL saves prepared statements and sometimes execution plans in this way, SQL commands that appear directly in a PL/pgSQL function must refer to the same tables and columns on every execution; that is, you cannot use a parameter as the name of a table or column in an SQL command. To get around this restriction, you can construct dynamic commands using the PL/pgSQL EXECUTE statement — at the price of performing new parse analysis and constructing a new execution plan on every execution."
I see '*' as a parameter. Or to put it another way, '*' is not referring to the same thing on each execution when you change the table definition under the function. Now if I can only get my brain to wake up, I could find the post where Tom Lane explained this more coherently than I can :)
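For what it's worth, the EXECUTE-based workaround that documentation paragraph alludes to would look roughly like this for the select_users() function below. It is only a sketch: it forces a fresh parse and plan on every call, and it does not help if the function's declared return type (SETOF users) itself changes.

CREATE OR REPLACE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$BEGIN
    -- EXECUTE builds a new plan on each call instead of reusing a cached one.
    RETURN QUERY EXECUTE 'SELECT * FROM users WHERE id = $1' USING id_;
END;$$ LANGUAGE plpgsql;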
Session 1:
test=> CREATE TABLE users (id integer PRIMARY KEY, name varchar NOT NULL, to_be_removed integer NOT NULL);
CREATE TABLE
test=> CREATE FUNCTION select_users(id_ integer) RETURNS SETOF users AS
$$BEGIN RETURN QUERY SELECT * FROM users WHERE id = id_; END;$$ LANGUAGE plpgsql;
CREATE FUNCTION
Session 2:
test=> SELECT id, name FROM select_users(18);
id | name
----+------
(0 rows)
Ok, now the plan is cached.
Now in Session 1:
test=> ALTER TABLE users DROP COLUMN to_be_removed;
ALTER TABLE
Session 2:
test=> SELECT id, name FROM select_users(18);
id | name
----+------
(0 rows)
No error. This is 9.4.4.
I stand corrected. I also tried on Postgres 9.3.7, which is as close as I could get to the OP's 9.3.5, and it worked. I will have to rethink my assumptions.
Yours,
Laurenz Albe
--
Adrian Klaver
adrian.klaver@xxxxxxxxxxx