David Pratt <fairwinds@xxxxxxxxxxx> writes:

> I just want to get this right because it will be an important part of what
> I am preparing. Sorry for the really long message but I don't know if it
> would make any sense if I did not fully explain what I am wanting to do.
> I am not French so excuse my sample translations...

FWIW, I started with a design much like this. I threw it out when I started actually using it and found it entirely unworkable. It was bad enough that every single query had to do a subquery for every single text field, but the first time I actually had to *load* the data I realized just how much work every single insert, delete, and update was going to be...

I ended up storing every text field as a text[] and assigning each language an index into the array. This only works because in my application everything will have precisely the same (small) set of languages available. If you have a large variety of languages and each string will be available in a varying subset, then this model might not work as well. It did require a bit of extra work handling arrays, since my language driver doesn't handle them directly.

I can't make a principled argument for this being the "right" model, but it's working well in practice for me, and I can't see the fully normalized model ever working out well. One thing that worries me is that neither the array feature set nor the i18n feature set is stable yet, and future changes might break my code. But I think it'll be workable.

-- 
greg
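[Editor's note: a minimal sketch of the array-per-language approach Greg describes. The table name, column names, and language set are hypothetical, and the fallback-to-English policy is an assumption; the message only specifies that each language gets a fixed index into a text[] column.]

```python
# Hypothetical schema for the approach described above:
#
#   CREATE TABLE product (
#       id   serial PRIMARY KEY,
#       name text[]   -- one element per supported language
#   );
#
# Each language is assigned a fixed (1-based, as in Postgres) array index.
# A single element can then be read or updated directly, e.g.:
#
#   SELECT name[2] FROM product WHERE id = 1;          -- French
#   UPDATE product SET name[2] = 'Chaise' WHERE id = 1;

LANG_INDEX = {"en": 1, "fr": 2, "de": 3}  # fixed index per language

def localized(text_array, lang):
    """Pick the translation for `lang` out of a text[] value,
    falling back to English when that slot is missing or NULL
    (the fallback policy is an assumption, not from the post)."""
    idx = LANG_INDEX[lang] - 1  # convert to 0-based for Python lists
    value = text_array[idx] if idx < len(text_array) else None
    return value if value is not None else text_array[LANG_INDEX["en"] - 1]

# A row's name column as fetched from the driver: [en, fr, de]
name = ["Chair", "Chaise", None]
print(localized(name, "fr"))  # Chaise
print(localized(name, "de"))  # Chair (falls back to English)
```

Note the trade-off Greg mentions: the whole scheme relies on every row using the same fixed language-to-index mapping, so adding a language later means touching every text[] column.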