One problem right away is that 3.3 doesn't support rdma. I know it still builds the rdma packages, but rdma isn't a supported connection method for 3.3; it was set aside so the 3.3 release could make it on time. As I understand it, 3.4 is supposed to be the first 3.x version to fully support it again. I'd upgrade and test again, or else downgrade to 3.2.7, which does in fact support it right now. I'm not even sure how you got things mounted using the "rdma" semantics with 3.3.

My experiences so far were sort of disappointing until I found out a few key items about GlusterFS which I'd taken for granted:

1. Stripes are not what you might think. The I/O for a stripe does _not_ fan out as it would on a raid card. It's an unfortunate use of the term; striping only lets you store files larger than the maximum size of a single brick.

2. I/O is done in sync mode, both so cache coherency isn't an issue and to ensure the integrity of the data written.

3. For my use, the performance of a distributed volume far exceeds that of a stripe. Again, it depends on the size of your bricks.

These are things which affect my experience, and now that I at least _think_ I understand them, my results make much more sense to me. Generally my throughput maxes out around 800-900 MB/s, which is the limit of my disk storage right now.

As a test, I created as large a volume as I could using ramdisks, to see just how much of a limiter my disks actually are, and I was very surprised to see the speed NOT increase across the file system using an rdma target on version 3.2.6. The local I/O reached 1.9 GB/s. So, in addition to my spindle-based limit, I do not believe I am using any more than a single lane of my IB cards (which are FDR (56Gb) Mellanox). Using the ib test utilities I can indeed max the connection at 6 GB/s, which is really amazing to see, but so far I just can't seem to get GlusterFS to make use of the total bandwidth available.

Also, you might try putting your cards into "Connected Mode" and increasing your MTU to 64k (rough commands in the P.S. below).

Corey
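
P.S. For the connected mode / 64k MTU change, a minimal sketch, assuming your IPoIB interface is ib0 (adjust the interface name to match your setup, and note your distro probably has its own network or openib config file if you want this to survive a reboot):

    # switch IPoIB from datagram to connected mode (needs root)
    echo connected > /sys/class/net/ib0/mode

    # connected mode allows the larger ~64k MTU (65520 bytes)
    ip link set ib0 mtu 65520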
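
P.P.S. If you do drop back to 3.2.7 to test rdma, remember the volume has to be created with the rdma transport in the first place. A rough sketch, with made-up host, brick, and volume names (server1, server2, /bricks/b1, testvol):

    # on one of the servers: create and start a distributed volume over rdma
    gluster volume create testvol transport rdma server1:/bricks/b1 server2:/bricks/b1
    gluster volume start testvol

    # on a client: a plain fuse mount should use rdma, since that's the
    # only transport the volume was created with
    mount -t glusterfs server1:/testvol /mnt/testvol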