> > I am wondering if it's effective to use text arrays to store
> > multilanguage information.

We solved it like this:

http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/gnumed/gnumed/gnumed/server/sql/gmI18N.sql?rev=1.15

- i18n_curr_lang holds the (ISO) language string per user
- i18n_keys holds the strings to be translated
- i18n_translations holds the translations
- function i18n() inserts a text value into i18n_keys, thus marking it
  for translation; we use this during insertion of data in the
  "primary language" (think gettext, think English)
- function _() returns a translated text value (or its primary-language
  equivalent); we use that in queries and view definitions
- v_missing_translations is a view of, well, ...

Then here:

http://savannah.gnu.org/cgi-bin/viewcvs/*checkout*/gnumed/gnumed/gnumed/server/locale/dump-missing-db_translations.py?rev=1.3

we've got a Python tool that will connect to your database and create
schema files ready for re-insertion via psql after adding the missing
translations. Kind of like a gettext po file.

Karsten
--
GPG key ID E4071346 @ wwwkeys.pgp.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346

---------------------------(end of broadcast)---------------------------
TIP 9: the planner will ignore your desire to choose an index scan if your
       joining column's datatypes do not match
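
[Editor's sketch] The pattern Karsten describes can be sketched in plain
PostgreSQL roughly as below. This is a simplified reconstruction for
illustration, not the actual gmI18N.sql: only the table, function, and view
names come from the post; all column names and the exact function bodies
are assumptions.

```sql
-- Assumed columns throughout; only the object names match the post.

create table i18n_curr_lang (
    db_user name primary key default CURRENT_USER,
    lang text not null                -- ISO language code, e.g. 'de_DE'
);

create table i18n_keys (
    orig text primary key             -- string in the primary language
);

create table i18n_translations (
    lang  text not null,
    orig  text not null references i18n_keys(orig),
    trans text not null,
    primary key (lang, orig)
);

-- i18n(): mark a primary-language string for translation and pass it
-- through unchanged, so it can wrap values in INSERT statements.
create function i18n(text) returns text as $$
    insert into i18n_keys (orig)
        select $1
        where not exists (select 1 from i18n_keys where orig = $1);
    select $1;
$$ language sql;

-- _(): return the translation for the current user's language, falling
-- back to the primary-language string itself when none exists.
create function _(text) returns text as $$
    select coalesce(
        (select t.trans
           from i18n_translations t
          where t.orig = $1
            and t.lang = (select cl.lang from i18n_curr_lang cl
                           where cl.db_user = CURRENT_USER)),
        $1);
$$ language sql;

-- Strings still lacking a translation for some configured language.
create view v_missing_translations as
    select langs.lang, k.orig
      from i18n_keys k
           cross join (select distinct lang from i18n_curr_lang) langs
     where not exists (select 1 from i18n_translations t
                        where t.orig = k.orig and t.lang = langs.lang);
```

A view definition would then use something like
`select _(description) as description from some_table`, so each user sees
the data in their own language without the application doing any lookup.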