On Wed, 2009-01-07 at 21:18 +0000, Nathan Rixham wrote:
> Richard Heyes wrote:
> >> That's for a single table.
> >>
> >
> > I've not come across many databases where 20-50 tables have 10 million
> > rows each. And with a table of that size, then I might be coerced into
> > thinking about the storage requirements a little more. Maybe.
> >
> >
> >> Now add another 20 to 50 tables depending on
> >> the database. If you want to throw away money go ahead, but I don't know
> >> too many clients that want to waste 10 gigs of mostly padded space.
> >>
> >
> > I don't know of many clients who care as long as it is performant and
> > cost effective. Wasting 10 Gigs is not a great deal when you have a
> > drive measured in the hundreds of Gigs.
> >
> >
> until you have to dump it, zip it, ssh it over to another box and then
> import it back in
>

Not just that, but aren't there greater overheads if the database is
physically larger? I assume that char might be a bit quicker to work with
than varchar, but I am pretty certain that using a fulltext index on a
text field is ridiculously slow compared to the former two.

Ash
www.ashleysheridan.co.uk

--
PHP General Mailing List (http://www.php.net/)
To unsubscribe, visit: http://www.php.net/unsub.php
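
The char/varchar/fulltext claim above is easy enough to measure rather than guess at. Below is a minimal, hypothetical PDO sketch: the DSN, credentials, and the table and column names (t_char, t_varchar, t_text, name) are made up for illustration, and the three tables would need to be loaded with identical sample rows before the timings mean anything. It times an indexed equality lookup against a CHAR column and a VARCHAR column, and a MATCH ... AGAINST query against a FULLTEXT index on a TEXT column.

<?php
// Rough benchmark sketch -- assumes a local MySQL test database reachable
// via PDO; adjust the DSN and credentials to suit.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Three otherwise identical tables: fixed-width CHAR, variable-width VARCHAR,
// and TEXT with a FULLTEXT index (MyISAM, since only MyISAM supports FULLTEXT).
$pdo->exec("CREATE TABLE IF NOT EXISTS t_char    (name CHAR(255),    INDEX (name))          ENGINE=MyISAM");
$pdo->exec("CREATE TABLE IF NOT EXISTS t_varchar (name VARCHAR(255), INDEX (name))          ENGINE=MyISAM");
$pdo->exec("CREATE TABLE IF NOT EXISTS t_text    (name TEXT,         FULLTEXT INDEX (name)) ENGINE=MyISAM");

// Run the same prepared query $runs times and return the seconds taken.
function timeQuery(PDO $pdo, $sql, array $params, $runs = 1000)
{
    $stmt  = $pdo->prepare($sql);
    $start = microtime(true);
    for ($i = 0; $i < $runs; $i++) {
        $stmt->execute($params);
        $stmt->fetchAll();
    }
    return microtime(true) - $start;
}

// (Populate all three tables with the same sample rows before timing.)
printf("char:     %.4fs\n", timeQuery($pdo, "SELECT * FROM t_char    WHERE name = ?",               array('some value')));
printf("varchar:  %.4fs\n", timeQuery($pdo, "SELECT * FROM t_varchar WHERE name = ?",               array('some value')));
printf("fulltext: %.4fs\n", timeQuery($pdo, "SELECT * FROM t_text    WHERE MATCH(name) AGAINST(?)", array('some value')));

MyISAM is assumed because FULLTEXT indexes are MyISAM-only on current MySQL versions; the absolute numbers will depend heavily on row count and on whether the indexes fit in the key cache, so treat the output as a relative comparison rather than anything definitive.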