On Sun, Oct 24, 2010 at 08:22:46PM +0200, Michael Monnerie wrote:
> On Saturday, 23 October 2010 Angelo McComis wrote:
> > They quoted having 10+TB databases running OLTP on EXT3 with
> > 4-5GB/sec sustained throughput (not XFS).
>
> Which servers and storage are these? This is nothing you can do with
> "normal" storage. Using 8Gb/s Fibre Channel gives 1GB/s, if you can do
> full-speed I/O. So you'd need at least 5 parallel Fibre Channel storage
> arrays running without any overhead. Also, a single server can't do such
> high rates, so there must be several front-end servers. That again means
> their database must be organised especially for that type of load
> (shared nothing or so).

Have a look at IBM's TPC-C submission here on RHEL 5.2:

http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=108081902

That's got 8x 4Gb FC connections to 40 storage arrays with 1920 disks
behind them. It uses 80x 24-disk RAID0 LUNs, with each LUN split into 12
data partitions on the outer edge of each LUN. That gives 960 data
partitions for the benchmark.

Now, this result uses raw devices for this specific benchmark, but it
could easily use files in ext3 filesystems. With 960 ext3 filesystems,
you could easily max out the 3.2GB/s of IO that sucker has, as that is
less than 4MB/s per filesystem (quick arithmetic below).

So I'm pretty sure IBM are not quoting a single-filesystem throughput
result. While you could get that sort of result from a single filesystem
with XFS, I think it's an order of magnitude higher than a single ext3
filesystem can achieve....

Cheers,

Dave.
--
Dave Chinner
david@xxxxxxxxxxxxx
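
A minimal Python sketch of the arithmetic behind the figures quoted above.
The ~400MB/s of usable bandwidth per 4Gb FC link is an assumption chosen to
be consistent with the 3.2GB/s aggregate mentioned in the post, not a figure
taken from the TPC-C report itself:

    # Back-of-the-envelope check of the numbers in the post above.
    # Assumption (not from the TPC-C report): a 4Gb FC link delivers
    # roughly 400 MB/s of usable bandwidth.

    fc_links = 8
    mb_per_link = 400                       # assumed usable MB/s per 4Gb FC link
    total_mb_s = fc_links * mb_per_link     # 3200 MB/s, i.e. ~3.2 GB/s aggregate

    luns = 80
    partitions_per_lun = 12
    partitions = luns * partitions_per_lun  # 960 data partitions

    per_fs_mb_s = total_mb_s / partitions   # ~3.3 MB/s per filesystem

    print(f"aggregate bandwidth: {total_mb_s} MB/s")
    print(f"data partitions:     {partitions}")
    print(f"per-filesystem load: {per_fs_mb_s:.1f} MB/s")

Running this prints an aggregate of 3200 MB/s spread over 960 partitions,
or about 3.3 MB/s each, which is the "less than 4MB/s per filesystem"
figure in the post.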