On 03/03/2011 10:49 AM, Burnash, James wrote:
> Sure, Joe - I will get that to you. I do have dstat on the machines.
>
> What I'm really interested in, however, is what the actual number
> following "layout" represents? Inodes? Blocks? Files? Any idea?

Probably a hash key index that needed updating. Gluster uses hashes to
compute the physical layout (where the files live) with respect to the
bricks. If the calculation was checked and found to be in error (this is
a guess), or in need of updating, I am betting they would update it
during this rebalance.

> In this case, there are two HP storage servers running external
> enclosures with a 1.5Gb link, filled with 70 SATA 2TB drives
> configured as RAID 50 and running over an active/passive bonded 10Gb
> network connection. All servers are local to the same switches -
> single hop between them.

Ok ... 1.5Gb link? So this is like 1 lane of SAS 1?

I'd do ascii-art, but I can't guarantee it would work well ... simple
line art will do ...

[disks] --- (1.5Gb/s single link) --- [HP storage server]
                                            || (10GbE active/passive)
[disks] --- (1.5Gb/s single link) --- [HP storage server]

With the disks being RAID50 within the array.

> Each storage server hosts 10 bricks of 12TB each.

So you have 20 bricks total (set up as distribute+replicate, I am
guessing). I'd be curious how (if you did this) you can guarantee that
mirrors don't end up on the same physical unit.

Are the files small/large (under 32kB / over 1MB) on the system on
average?

> Does that help? It would be exceptionally cool if there was a
> calculator to help with all this ... I could do the math if I had to,
> but it's not ... my first language :-)

Yeah ... I am guessing you are running out of some resource somewhere
(probably maxing out on IOPS on reads, or possibly the network).

-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics Inc.
email: landman at scalableinformatics.com
web  : http://scalableinformatics.com
       http://scalableinformatics.com/sicluster
phone: +1 734 786 8423 x121
fax  : +1 866 888 3112
cell : +1 734 612 4615
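
For anyone curious about the "hashes compute the physical layout" point
above, here is a minimal Python sketch of the idea: the 32-bit hash space
is split into one range per brick, a file name is hashed, and the file
lands on whichever brick owns that range. This is not Gluster's actual
DHT code (Gluster uses its own hash and stores per-directory layout
ranges in extended attributes); the brick names and the use of CRC32 are
stand-ins chosen only to illustrate the mechanism.

    # Sketch of hash-range placement, NOT Gluster's real implementation.
    # Brick names are hypothetical; CRC32 stands in for Gluster's hash.
    import zlib

    # 20 hypothetical bricks, alternating between two servers.
    BRICKS = ["server%d:/brick%d" % (s, b) for b in range(1, 11) for s in (1, 2)]
    HASH_SPACE = 2 ** 32

    # Split the 32-bit hash space into one contiguous range per brick.
    step = HASH_SPACE // len(BRICKS)
    layout = [(i * step, (i + 1) * step - 1, brick) for i, brick in enumerate(BRICKS)]
    # Let the last range absorb any remainder so the whole space is covered.
    layout[-1] = (layout[-1][0], HASH_SPACE - 1, layout[-1][2])

    def brick_for(filename):
        """Hash the file name and return the brick whose range contains it."""
        h = zlib.crc32(filename.encode()) & 0xffffffff
        for lo, hi, brick in layout:
            if lo <= h <= hi:
                return brick

    print(brick_for("some_file.dat"))

A rebalance after adding bricks amounts to recomputing those ranges (the
"fix layout" step) and then moving any files whose hash now falls into a
different brick's range.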