I'm hoping to get some advice on a design question I'm grappling with.

I have a database that in many respects can be regarded as a collection of a few hundred much smaller "parallel databases", all having the same schema. What I mean by this is that, as far as the intended use of this particular system goes, there are no meaningful queries whose results would include information from more than one of these parallel component databases. Furthermore, one could delete all the records of any one of these parallel components without affecting the referential integrity of the rest of the database. (I've put a rough sketch of what I mean in a P.S. below.)

Therefore, both for performance and maintenance reasons, the idea of splitting this database into its components looks very attractive. This would result in a system with hundreds of small databases (possibly reaching into the low thousands in the future). I don't have experience with such a setup, and I'm wondering whether there are issues I should be concerned about. Alternatively, maybe there are techniques to achieve the benefits of this split without actually carrying it out.

The two benefits I see are in the areas of performance and maintenance. As for performance, I assume (naively, I'm sure) that searches will be faster in the individual component databases, simply because each search covers fewer pieces of information. As for maintenance, I think the split would make the system more robust during database updates, because only a small component would be updated at a time, and the rest of the system would be completely insulated from the update.

I'd very much appreciate your thoughts on these issues. I imagine I'm not the first person to confront this kind of design choice. Does it have a standard name that I could use in a Google search?

TIA!

kj
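
P.S. In case the description above is too abstract, here is a rough sketch of the layout I have in mind, written in Python with SQLite purely for illustration (the real system is not SQLite, and the "items" schema, file layout, and names below are all made up). The only property it is meant to show is that every component lives in its own database with an identical schema, and every query is confined to exactly one of them:

    import os
    import sqlite3

    # The same schema is applied to every component database.
    SCHEMA = """
    CREATE TABLE IF NOT EXISTS items (
        id    INTEGER PRIMARY KEY,
        label TEXT NOT NULL
    );
    """

    DB_DIR = "components"  # one file per "parallel database"

    def open_component(component_id):
        # Open (and create on first use) the database for one component.
        # Each component gets its own file; no query ever touches
        # more than one of these files.
        os.makedirs(DB_DIR, exist_ok=True)
        conn = sqlite3.connect(os.path.join(DB_DIR, component_id + ".db"))
        conn.executescript(SCHEMA)
        return conn

    # Every lookup is routed by component first, so it only searches
    # the one small database it belongs to:
    conn = open_component("component_0042")
    rows = conn.execute("SELECT id, label FROM items WHERE label = ?",
                        ("some label",)).fetchall()
    conn.close()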