Hi Dave, thanks for your feedback.

>> 2 disk backends, each: Quad-Xeon X5550, 12G of RAM, 28T HW
>> SATA-RAID6 sliced into 2T chunks by LVM2 and exported via tgt
>> 1.0.0-2, Ubuntu 10.04 LTS, connected via Mellanox MHRH19B-XTR
>> Infiniband + iSER to
>>
>> 1 frontend: Octo-Xeon E5520, 12G of RAM, open-iscsi 2.0.871
>> initiator, Ubuntu 10.04 LTS. LVM2 stitches together the
>> 2T iSCSI LUNs and provides a 10T test XFS filesystem
>
> Out of curiosity, why are you using such a complex storage
> configuration?
>
> IMO, it is unnecessarily complex - you could easily do this (~30
> drives) with a single server with a couple of external SAS JBOD
> arrays and SAS RAID controllers. That would give you the same
> performance (or better), with many fewer points of failure (both
> hardware and software), use less rack space, and probably be
> significantly cheaper....

Basically, our situation is this: we have to supply our astrophysicists
(not just them, but they consume 95%) with large and ever-increasing
amounts of disk space. Until now we bought individual file servers
whenever space was needed, which is an administrative nightmare, as you
can imagine. Hence we decided to come up with a more scalable solution
that would grow with the space needed - and grow it will. We start off
with 52T and can easily add further disk units to the Infiniband switch.

It is quite possible we have overlooked an easier/cheaper solution, but
what we have now is very flexible and emerged from discussions we've had
with several 'storage experts'.

Do you have any particular/typical device in mind? I'd like to check it
out nonetheless.

thanks,
-Christian
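
P.S. In case it clarifies the "LVM2 stitches together the LUNs" step in
the quoted setup, here is a minimal sketch of how the frontend assembles
the 2T iSCSI LUNs into one ~10T XFS filesystem. The device paths and
volume names are placeholders for illustration, not our exact
configuration:

    #!/usr/bin/env python3
    # Sketch only: combine several iSCSI LUNs into one LVM logical volume
    # and put an XFS filesystem on it. All paths/names are placeholders.
    import subprocess

    LUNS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]  # assumed 2T LUN paths
    VG = "vg_test"   # placeholder volume group name
    LV = "lv_test"   # placeholder logical volume name

    def run(cmd):
        # Echo the command, then run it; stop on the first failure.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    for lun in LUNS:
        run(["pvcreate", lun])                  # label each LUN as an LVM physical volume
    run(["vgcreate", VG] + LUNS)                # one volume group spanning all LUNs
    run(["lvcreate", "-l", "100%FREE", "-n", LV, VG])   # single LV over all free extents
    run(["mkfs.xfs", "/dev/{}/{}".format(VG, LV)])       # XFS on the combined volume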