Hello All,

After posting to a previous thread about this issue (http://gluster.org/pipermail/gluster-users/2011-April/007157.html) I decided to start a new thread, mainly because I think I have found a problem relating to this setup.

Our servers vary in size quite a lot, so some of the bricks in one particular volume are 100% full. This has not caused us any problems until now, because new files are always created on larger bricks where there is still space. Yesterday, however, a user complained that he was getting "device full" errors even though df reported several hundred GB free in the volume. The problem turned out to be caused by overwriting pre-existing files that were stored on one or more full bricks. Deleting the old files before creating them again cured the problem, because the new files were then created on larger bricks.

Is this a known problem when using distributed or distributed/replicated volumes with non-uniform backend sizes, and is there any way to avoid it?

Lifting some comments and questions from the other thread...

From this posting: http://gluster.org/pipermail/gluster-users/2011-March/007103.html

> I see that your backend sizes are different... Its preferred to keep
> them uniform.

And from this posting: http://gluster.org/pipermail/gluster-users/2011-March/007104.html

> try to keep the backend uniform to avoid any possible issues which may
> arise later.

Could someone please comment on the "possible issues that might arise" with a setup involving non-uniform backend brick sizes?

All comments and suggestions would be much appreciated.

-Dan.
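P.S. A sketch of the workaround, in case it helps anyone else hitting this. The volume name "myvol" and the file paths are placeholders, and I haven't verified the cluster.min-free-disk option against every Gluster release, so treat this as a starting point rather than a tested recipe:

```shell
# Overwriting a file in place fails because DHT keeps the data on its
# original (now full) brick, so remove the file first and recreate it;
# the new file is then created on a brick that still has free space,
# matching the behaviour described above.
rm -f /mnt/glusterfs/data/bigfile.dat
cp /local/staging/bigfile.dat /mnt/glusterfs/data/bigfile.dat

# cluster.min-free-disk tells DHT to avoid placing *new* files on
# bricks with less than this much free space; note it does not help
# with in-place overwrites of files already on a full brick.
gluster volume set myvol cluster.min-free-disk 10%

# A rebalance can migrate existing files off the full bricks:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```

These commands need to run against a live Gluster cluster, so check the option and rebalance syntax against the version you are running.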