Last night I ran into a problem where some of the bricks in my volume filled up completely to 100%, while others were still below 90%. The bricks are 25TB each, so 10% would be about 2.5TB. The files we were writing were no more than 100GB each, so even a number of concurrent file writes could not have filled up 10% in a very short amount of time. We have 40 bricks in the volume.

It looks like the default min-free-disk setting (which I did not set) is not being observed. I tried setting it to 10%, and to 500GB on a separate try, then created 1000 new empty files on a client (after remounting the volume), and it still writes to bricks with less than 10% or less than 500GB free. (The exact commands are in the P.S. below.) Is this feature only available in later releases? I am using stable 3.3.1.

Thanks,
...
ling

On 01/28/2013 02:37 PM, Jeff Darcy wrote:
> On 01/28/2013 05:19 PM, Ling Ho wrote:
>> How "full" does it have to be before new files start getting written into
>> the other bricks?
> By default, min-free-disk is set to 10% and min-free-inodes is set to 5%.
>
>> In my recent experience, I added a new brick to an existing volume while
>> one of the existing 4 bricks was close to full, and yet I constantly got
>> out-of-space errors when trying to write new files. Full rebalancing was
>> also such a slow process that it could not keep up with the new files I
>> needed to write to the volume.
> Yes, rebalancing is slow. I have work in progress to make it less slow
> by doing the fix-layout part minimally instead of for every single
> directory in the entire volume, and to skip the migrate-data part
> entirely in favor of just letting new data get placed onto new bricks.
> I don't know when those will make it into a release, but if you're
> interested you can read about the work in progress here:
>
> http://hekafs.org/index.php/2012/03/spreading-the-load/
> http://hekafs.org/index.php/2012/05/the-quest-for-balance/
> http://review.gluster.org/#change,3573
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://supercolony.gluster.org/mailman/listinfo/gluster-users
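
P.S. For anyone who wants to reproduce this, the commands I ran were along these lines (the volume name "myvol", server "server1", and mount point "/mnt/myvol" are placeholders, not our real names):

    # Set min-free-disk to a percentage, then to an absolute size
    # on a separate try:
    gluster volume set myvol cluster.min-free-disk 10%
    gluster volume set myvol cluster.min-free-disk 500GB

    # Remount the volume on the client:
    umount /mnt/myvol
    mount -t glusterfs server1:/myvol /mnt/myvol

    # Create 1000 new empty files:
    for i in $(seq 1 1000); do touch /mnt/myvol/test.$i; done

I then checked where the files landed by running df -h on each brick server; the nearly full bricks were still receiving new files.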