Kynn Jones wrote:
Hi! When it comes to programming SQL, my newbie approach is to write my
code in a file test.sql, which I test from within psql by using
my_db=> \i /some/path/test.sql
...and (once I'm satisfied with the code) copy and paste it to a different
file that has the SQL I've written so far for the project.
I'm not aware of any block comment facility like the one you're after, though
there could easily be one. It'd be handy, for sure. However, most good
text editors can do this for you anyway, so if yours can't, maybe you
need to find a better programmer's editor (emacs, vim, Visual Studio's
editor, etc).
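That said, PostgreSQL itself does accept the standard SQL comment forms,
which may already cover the simple cases. A quick sketch (the statements
are just placeholders):

-- a single-line comment; runs to the end of the line
SELECT 1;   -- can also trail a statement

/*
 * A block comment. PostgreSQL nests these per the SQL standard, so you
 * can comment out a chunk of code that already contains block comments.
 */
SELECT 2;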
Personally I use vim to comment out small blocks. However, this is
rarely required as I break my SQL up into logical chunks in separate
files. If I need to run a bunch of files as one unit I just produce a
small wrapper file, loaded with psql's \i command or with the -f option,
that in turn contains \i commands and often things like a wrapping
BEGIN / COMMIT block.
For example, there's a very long sequence of operations I perform while
testing some data migration SQL. Each step is in its own SQL file, but I
provide a wrapper to make it easier to run the whole lot. Sans
documentation and whitespace:
reimport.sql:
-------------
BEGIN;
\i truncate.sql
\i pre-fixups.sql
\i editions.sql
\i import_customers.sql
\i import_journal.sql
\i import_checks.sql
\i convert_bookings.sql
DELETE FROM import_info;
INSERT INTO import_info ( import_date, import_timestamp )
    VALUES ( current_date, current_timestamp );
\i customer_merge.psql
COMMIT;
\i optimise.sql
----------------
Each SQL file is (relatively) small and self-contained, but I can run
the whole process with a simple:
psql -f reimport.sql
or with \i reimport.sql from within psql. It's easy to do it step by
step in psql, too, examining the state after each step. If someone wants
to run it without using psql, they can just concatenate the SQL files
together in order for the same effect.
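Interactively, a step-by-step run looks something like this (the check
query in the middle is just an illustration with a made-up table name;
substitute whatever sanity checks suit your data):

my_db=> BEGIN;
my_db=> \i truncate.sql
my_db=> \i pre-fixups.sql
my_db=> SELECT count(*) FROM staging_rows;  -- made-up check between steps
my_db=> \i editions.sql
...
my_db=> COMMIT;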
If you organise your code well and break it up into sensible modules,
it's a pretty easy way to work, and not too different from how you
probably work in other languages.
One thing I find particularly important is to keep my function and
trigger definitions in a separate file from my schema definitions. The
function/trigger file uses CREATE OR REPLACE FUNCTION and makes sure to
DROP TRIGGER IF EXISTS ... before using CREATE TRIGGER, so I can just
include it with \i in psql to load the latest function and trigger
definitions without having to muck about with the table structure,
dumping and reloading data, etc. I just:
$ psql -f schema/customer_functions.sql
DROP TRIGGER
DROP TRIGGER
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
CREATE TRIGGER
CREATE TRIGGER
... and any changes are applied.
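Stripped down, each stanza in that file is shaped roughly like this (the
table, column, and function names below are invented for illustration,
not taken from my real schema):

CREATE OR REPLACE FUNCTION customer_set_updated() RETURNS trigger AS $$
BEGIN
    -- maintain a last-modified timestamp on the row being written
    NEW.updated_at := current_timestamp;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS customer_set_updated_trg ON customer;
CREATE TRIGGER customer_set_updated_trg
    BEFORE INSERT OR UPDATE ON customer
    FOR EACH ROW EXECUTE PROCEDURE customer_set_updated();

Because CREATE OR REPLACE FUNCTION replaces the function in place and
DROP TRIGGER IF EXISTS doesn't complain when the trigger isn't there
yet, the file is safe to re-run at any time.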
--
Craig Ringer