I'm building a database containing key parameters for ~500,000 data files.
The design I found logical is two tables for each file:

1) A larger table with detailed key parameters (10-15 columns, ~1000 rows),
   call it large_table
2) A small table with a file summary (~30 columns, 1 row), call it small_table

A major purpose is to be able to identify all data in the original archive
(~3 terabytes) satisfying criteria that can be specified in database queries.
But I find the query options surprisingly limited (or difficult - sure, things
can be done if you write enough code...).

Here is what I would like to be possible:

    SELECT <large_table columns> FROM <regular expression>
    WHERE <condition on large_table>
    IF <condition on corresponding small_table>;

In words: for each of many large_tables, check the corresponding small_table
and only process the large_table if some condition is met.

Two problems:

1) I can't use a regular expression to specify tables. Wouldn't that be nice?
   Is it possible to specify a lot of tables without having to write custom
   functions in e.g. PL/pgSQL?
2) There is no IF statement. How do I issue commands (or redesign the
   database) so I can check the file summary before diving into the large
   table? This ought to be simple...

Any input will be highly appreciated, thanks in advance!

Poul Jensen
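
P.S. In case a concrete sketch helps: here is roughly what one file's pair of
tables looks like. The table names, column names, types and condition values
below are invented placeholders, not the real schema:

    -- summary: one row describing file 00001 (the real table has ~30 columns)
    CREATE TABLE small_table_00001 (
        file_name  text,
        n_records  integer,
        start_time timestamp
    );

    -- details: ~1000 rows of key parameters (the real table has 10-15 columns)
    CREATE TABLE large_table_00001 (
        record_no  integer,
        param1     double precision,
        param2     double precision
    );

And this is the two-step check I currently have to script file by file:

    -- step 1: does this file's summary meet the condition?
    SELECT 1 FROM small_table_00001 WHERE start_time > '2006-01-01';

    -- step 2: only if step 1 returned a row, query the detail table
    SELECT record_no, param1 FROM large_table_00001 WHERE param1 > 100;

Repeating that for ~500,000 pairs of tables is what I'm hoping to avoid.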