Hi Dan,

I believe that you would have to run this command:

    gluster volume rebalance <volname> start

at which point Gluster will try to balance the files amongst the storage nodes. Whether or not it will accommodate non-uniform bricks I don't know for sure (since mine are uniform), but I believe that it will look at the actual space available and try to make intelligent decisions about where to place files. Please check with the devs before implementing my suggestion, however - I don't want to cause any harm since I'm unsure. (A rough sketch of the full rebalance sequence is at the end of this message.)

James Burnash, Unix Engineering

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Dan Bretherton
Sent: Sunday, May 01, 2011 9:01 AM
To: gluster-users
Subject: [SPAM?] Non-uniform backend brick sizes
Importance: Low

Hello All-

After posting to a previous thread about this issue (http://gluster.org/pipermail/gluster-users/2011-April/007157.html) I decided to start a new thread, mainly because I think I have found a problem relating to this setup. Our servers vary in size quite a lot, so some of the bricks in one particular volume are 100% full. This has not caused us any problems until now, because new files are always created on larger bricks where there is still space. However, yesterday a user complained that he was getting "device full" errors even though df reported several hundred GB free in the volume. The problem turned out to be caused by over-writing pre-existing files that were stored on one or more full bricks. Deleting the old files before creating them again cured the problem, because the new files were then created on larger bricks. Is this a known problem when using distributed or distributed/replicated volumes with non-uniform backend sizes, and is there any way to avoid it?

Lifting some comments and questions from the other thread...
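
To expand on the rebalance suggestion above: a minimal sketch of the command sequence I had in mind is below. It assumes the standard gluster CLI; /path/to/brick is only a placeholder for your actual brick mount points, and the exact sub-commands can differ between releases, so please check the documentation for your version before running anything.

    # Check how full each brick is (run on each storage server;
    # /path/to/brick is a placeholder for the real brick mount point):
    df -h /path/to/brick

    # Start migrating existing files across the bricks in the volume:
    gluster volume rebalance <volname> start

    # Poll until the rebalance reports completion:
    gluster volume rebalance <volname> status

    # The operation can be stopped if it causes trouble:
    gluster volume rebalance <volname> stop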