On 17/06/2013, at 4:01 AM, James wrote:
<snip>
> I have to jump in here and add that I'm with you for the drivers
> aspect. I had a lot of problems with the 10GbE drivers when getting
> gluster going. I haven't tested recently, but it's a huge worry when
> buying hardware. Even RedHat had a lot of trouble confirming if
> certain chips would work!

That's good to know about. In my personal home dev/test lab here I'm
just using Mellanox DDR ConnectX cards ($39 on eBay! :>) and running
things with either IPoIB or in RDMA mode.

I did try switching the cards into 10GbE mode (worked fine), but I don't
really see the point of running these cards at half speed or worse
(10GbE) in a home lab. :)

> The other issue I have is with hardware RAID. I'm not sure if folks
> are using that with gluster or if they're using software RAID, but the
> closed source nature and crappy proprietary tools annoy all the
> devops guys I know. What are you all doing for your gluster setups? Is
> there some magical RAID controller that has Free tools, or are people
> using mdadm, or are people just unhappy or ?

Before joining Red Hat I was using Areca hardware. But Areca (the
company) was weird/dishonest when I tried to RMA a card that went bad,
so I advise people to keep away from that crowd.

I haven't tried any others in depth since. :/

> PS: FWIW I wrote a puppet module to manage LSI RAID. It drove me crazy
> using their tool on some supermicro hardware I had. If anyone shows
> interest, I can post the code.

That corresponds to this blog post, doesn't it? :)

  https://ttboj.wordpress.com/2013/06/17/puppet-lsi-hardware-raid-module/

Regards and best wishes,

Justin Clift

--
Open Source and Standards @ Red Hat

twitter.com/realjustinclift