Alex Hayward wrote:
> IO bound doesn't imply IO bandwidth bound. 14 disks doing a 1ms seek followed by an 8k read over and over again is a bit over 100MB/s. Adding in write activity would make a difference, too, since it'd have to go to at least two disks. There are presumably hot spares, too.
Very true - if your workload is primarily random, ~100MB/s may be enough bandwidth.
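
Back-of-envelope, that checks out (a quick Python sketch using the numbers above - 14 disks, 1ms per seek+read, 8k blocks):

  # Rough throughput of a purely random-read workload, assuming
  # each operation is a 1ms seek followed by an 8KB read.
  disks = 14
  service_time_s = 0.001        # 1ms per seek+read, per disk
  read_size_bytes = 8 * 1024    # 8KB block

  ops_per_disk = 1 / service_time_s           # ~1000 ops/s per disk
  total_bytes = disks * ops_per_disk * read_size_bytes
  print(total_bytes / (1024 * 1024), "MB/s")  # ~109 MB/s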
> I still wouldn't really want to be limited to 200MB/s if I expected to use a full set of 14 disks for active database data where utmost performance really matters and where there may be some sequential scans going on, though.
Yeah - that's the rub: data mining, bulk loads, batch updates, and backups (restores...) often use significant bandwidth.
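
For contrast, here's the sequential side of the same spindles (the ~60MB/s per-disk streaming rate below is just my assumption, not a measured figure):

  # Aggregate sequential bandwidth if all spindles stream at once.
  disks = 14
  seq_rate_mb_s = 60          # assumed per-disk streaming rate

  aggregate = disks * seq_rate_mb_s
  print(aggregate, "MB/s")    # 840 MB/s - far beyond a 200MB/s link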
> Though, of course, these won't do many of the things you can do with a SAN - like connect several computers, or split a single array into two pieces and have two computers access them as if they were separate drives, or remotely shut down one database machine and then start up another using the same disks and data. The number of IO operations per second they can do is likely to be important, too...possibly more important.
SAN flexibility is nice (when it works as advertised); the cost and performance, however, are the main detractors. On that note, I don't recall IO/s being anything special on most SAN gear I've seen (this could have changed in later products, I guess).
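
To put a rough number on the IO/s side (the seek and rotational figures below are assumed typical 10k-rpm values, not anything I've measured on SAN gear):

  # Rough random IOPS ceiling for an array of spinning disks,
  # assuming typical 10k-rpm drives.
  disks = 14
  avg_seek_ms = 4.5             # assumed average seek
  rotational_latency_ms = 3.0   # half a revolution at 10k rpm

  per_disk_iops = 1000 / (avg_seek_ms + rotational_latency_ms)
  print(round(disks * per_disk_iops), "IO/s")   # ~1867 IO/s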
> There's 4Gb FC, and so presumably 4Gb SANs, but that's still not vast bandwidth. Using multiple FC ports is the other obvious way to do it with a SAN. I haven't looked, but I suspect you'll need quite a budget to get that...
Yes - the last place I worked was looking at doing this ('multiple attachment' was the buzzword, I think) - I recall it needed special (read: extra expensive) switches and particular cards...
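
For reference, the raw link arithmetic - 1/2/4Gb FC uses 8b/10b encoding, so 10 bits on the wire per data byte:

  # Usable bandwidth of Fibre Channel links, allowing for the
  # 8b/10b encoding used by 1/2/4Gb FC.
  def fc_usable_mb_s(gbit, ports=1):
      return gbit * 1e9 / 10 / 1e6 * ports    # MB/s per direction

  print(fc_usable_mb_s(4))             # ~400 MB/s on one 4Gb port
  print(fc_usable_mb_s(4, ports=2))    # ~800 MB/s with two ports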
Cheers

Mark