We have a GlusterFS volume with 8 bricks on 4 servers. The filesystem is at ~73% full, but today a batch job failed with a 'device full' error, and sure enough, one brick had filled. I was able to probe the brick, find some duplicate files, and delete enough to let the system continue, but obviously this will be a recurring problem. I suspect it is due to some very large files placed early in the filesystem's lifetime.

I understand that Gluster does not allow rebalancing in the same config (i.e., without adding a brick), but would moving some of the offending files to another filesystem and then copying them back tend to rebalance the FS by distributing the copied files more evenly? Or is the only solution to add (or clear) a brick and then rebalance across it once it's added back in?

hjm

--
Harry Mangalam - Research Computing, OIT, Rm 225 MSTB, UC Irvine
[m/c 2225] / 92697 Google Voice Multiplexer: (949) 478-4487
415 South Circle View Dr, Irvine, CA, 92697 [shipping]
MSTB Lat/Long: (33.642025,-117.844414) (paste into Google Maps)
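For context on the copy-back idea: GlusterFS's DHT translator places a file by hashing its name against the parent directory's layout, so placement is deterministic in the file name. The toy sketch below illustrates that property only — it is a simplified model with hypothetical brick names, using CRC32 and a modulo as a stand-in for the real 32-bit Davies-Meyer hash mapped onto per-directory layout ranges:

```python
# Simplified sketch of DHT-style name-hash placement.
# Assumptions: real GlusterFS hashes the basename with a Davies-Meyer
# hash against layout ranges stored in directory xattrs, not a modulo;
# brick names here are hypothetical.
import zlib

BRICKS = ["srv1:/brick1", "srv1:/brick2",
          "srv2:/brick1", "srv2:/brick2"]  # hypothetical bricks

def pick_brick(filename, bricks):
    """Deterministically map a file name to a brick."""
    h = zlib.crc32(filename.encode())  # stand-in for DHT's name hash
    return bricks[h % len(bricks)]

# The same name always maps to the same brick:
same = pick_brick("bigfile.dat", BRICKS) == pick_brick("bigfile.dat", BRICKS)
```

If this model holds, copying a file off and back under the same name would land it on the same brick as before; only giving it a different name (or running an actual rebalance after changing the brick set) would change where the hash places it.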