Antonio Bassinger wrote:
> Thanks to all who suggested solutions or pointed out errors in my
> existing approach.
> Most seem to suggest having 1 database and 1-2 tables. So let me
> confirm:
>
> 1 table with a million records is OK. But what of the size of the
> table?
>
> 10,000 * 10 MB = 100 GB!

put your files on the filesystem and store the data about the files
(including where each one is saved) in the DB - that way the tables
stay small no matter how big the files get. (rough sketch at the
bottom of this mail.)

> If the upload limit is to be notched up 100 times - typical of
> public mail servers - a table would expand to 10 TB.

in which case you're in the territory of clusters, distributed
systems, really big iron, LOTS of RAID, etc, etc - a bit beyond the
scope of this list (granted there are a few people here with the
skills and experience to tackle such architectures, but generally
they get paid big bucks to dish out that kind of solution :-)

then again, you never know. there is a girl called Michele (german?)
who posts here now and again and seems to work quite a bit with very
large databases and massive storage - maybe she'll read this and have
some ideas/tips?

> Someone suggested:
>
> The one-database-for-all method increases risk that an SQL error
> will "leak" information from one client to another.

very vague. make your SQL rock solid and never print out any errors
returned by mysql (and obviously display_errors must be off in your
production system) - just give the user some friendly generic error
messages if something goes wrong.

> But with 1 table and a million records, what would be the chances
> of this "leak"?

I don't think the probability of a "leak" has much, if anything, to
do with the number of records in the table - it comes down to the
robustness/quality of the code written to interact with the DB and
the user. (the second sketch at the bottom shows the kind of query
scoping I mean.)

> My idea is,
>
> For every 100 users, make a new database. That is 100 tables of
> max. 10 MB each: 10 MB * 100 = 1 GB per database.
>
> For the 101st user, make a new database. So for 10,000 users ->
> 100 databases.
>
> 100 databases with 100 tables each don't look bad to me. What say?

I'd say it's an arbitrary way of splitting up the data that is
heavily denormalized - and it also poses a maintenance nightmare when
updating the DB schema and/or performing DB 'health checks' and/or
repairs...

stick to 1 DB, 2+ tables until/unless it becomes clear that a single
data source is a performance or storage problem.

> Thanks
> Antonio
>
> On 6/11/06, Anthony Ettinger <aettinger@xxxxxxxxxxxxxx> wrote:
>>
>> On 6/9/06, Antonio Bassinger <antonio.bassinger@xxxxxxxxx> wrote:
>> > Hi gang,
>> >
>> > Situation:
>> >
>> > I've an HTTP server. I intend to run a file upload service.
>> > There could be up to 10,000 subscribers, each saving files up to
>> > 10 MB.
>> >
>> > I made a proof-of-concept service using PHP & MySQL, where there
>> > is a single database but many tables - a unique table for each
>> > subscriber. But I realize that I may land in trouble with such a
>> > huge database. Would it be better to have a separate database
>> > for each subscriber?
>> >
>> > Which approach is better: many tables in 1 database, or many
>> > databases with 1 or max 2 tables?
>> >
>> > Kindly suggest with pros and cons of each.
>>
>> you might want to consider storing the files outside of the
>> database as well, and just keeping a pointer to each one's path in
>> the table.
>>
>> with respect to tables vs. databases per user: neither.
>>
>>
>> --
>> Anthony Ettinger
>> Signature: http://chovy.dyndns.org/hcard.html
>>

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
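
P.S. here's a minimal sketch of the "files on the filesystem, metadata
in the DB" pattern, since it keeps coming up. it assumes PDO against
MySQL, and every name in it (the uploads DB, the users/files tables,
/var/uploads, the form field) is made up for illustration - adapt to
your own setup:

<?php
// upload handler sketch: the file body goes to the filesystem,
// only metadata goes into MySQL.
//
// assumed "1 DB, 2+ tables" schema (all names made up):
//   CREATE TABLE users (
//     id    INT AUTO_INCREMENT PRIMARY KEY,
//     login VARCHAR(64) NOT NULL UNIQUE
//   );
//   CREATE TABLE files (
//     id          INT AUTO_INCREMENT PRIMARY KEY,
//     user_id     INT NOT NULL,           -- points at users.id
//     orig_name   VARCHAR(255) NOT NULL,  -- name the client sent
//     stored_path VARCHAR(255) NOT NULL,  -- where it lives on disk
//     size_bytes  INT NOT NULL,
//     uploaded_at DATETIME NOT NULL
//   );

$uploadDir = '/var/uploads';   // somewhere outside the web root
$userId    = 42;               // would come from your auth/session layer

try {
    $pdo = new PDO('mysql:host=localhost;dbname=uploads', 'dbuser', 'dbpass',
                   array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

    if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
        die('Sorry, the upload failed. Please try again.');
    }

    // never use the client-supplied filename on disk; generate our
    // own and keep the original purely as metadata.
    $storedPath = $uploadDir . '/' . uniqid('f', true);

    if (!move_uploaded_file($_FILES['upload']['tmp_name'], $storedPath)) {
        die('Sorry, the upload failed. Please try again.');
    }

    // prepared statement: user data never gets pasted into the SQL string.
    $stmt = $pdo->prepare(
        'INSERT INTO files (user_id, orig_name, stored_path, size_bytes, uploaded_at)
         VALUES (?, ?, ?, ?, NOW())');
    $stmt->execute(array($userId,
                         $_FILES['upload']['name'],
                         $storedPath,
                         filesize($storedPath)));
} catch (PDOException $e) {
    // log the real error for yourself; the user only ever sees a
    // friendly generic message - that's the "no leaks" part.
    error_log($e->getMessage());
    die('Sorry, something went wrong. Please try again later.');
}
?>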
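
and the read side of it - listing one subscriber's files out of the
single shared table. the WHERE user_id = ? scoping plus the prepared
statement is what "rock solid SQL" means in practice: user A can never
be handed user B's rows, however many million records the table holds.
same made-up names as the sketch above:

<?php
// list one subscriber's files from the shared "files" table.
// (in real code, wrap this in the same try/catch as above so DB
// errors never reach the browser.)
$userId = 42;   // again, from your auth/session layer

$pdo = new PDO('mysql:host=localhost;dbname=uploads', 'dbuser', 'dbpass',
               array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION));

// every query is scoped to the logged-in user - this, not the number
// of rows in the table, is what decides whether anything can "leak".
$stmt = $pdo->prepare(
    'SELECT orig_name, size_bytes, uploaded_at
       FROM files
      WHERE user_id = ?
   ORDER BY uploaded_at DESC');
$stmt->execute(array($userId));

foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    printf("%s (%d bytes, %s)\n",
           $row['orig_name'], $row['size_bytes'], $row['uploaded_at']);
}
?>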