On 2 April 2018 at 14:48, Andreas Davour <ante@xxxxxxxxxxxx> wrote:
Hi
I've found behaviour so strange that I'm certain I have missed how gluster is supposed to be used, but I cannot figure out what. This is my scenario.
I have a volume created from 16 nodes, each contributing a brick of the same size, so the total volume is on the terabyte scale. It's a distributed volume with a replica count of 2.
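For concreteness, the volume was created along these lines (the volume name, hostnames and brick paths here are placeholders, not the real ones):

    # 16 bricks with replica 2 gives 8 replica pairs,
    # and DHT distributes whole files across those pairs
    gluster volume create bigvol replica 2 \
        server01:/bricks/b1 server02:/bricks/b1 \
        server03:/bricks/b1 server04:/bricks/b1 \
        ...
        server15:/bricks/b1 server16:/bricks/b1
    gluster volume start bigvol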
The filesystem, when mounted on the clients, is not even close to getting full, as displayed by 'df'.
But when one of my users tries to copy a file from another network storage to the gluster volume, he gets a 'filesystem full' error. What happened? I looked at the bricks and found that one big file had ended up on a brick that was already about half full, and the big file did not fit in the space that was left on that brick.
Hi,
This is working as expected. Since files are not split up (unless you are using sharding), the size of a single file is limited by the free space on the one brick (or replica pair) it gets placed on, not by the free space of the volume as a whole.
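If the workload suits it, sharding splits each file into fixed-size pieces that are then distributed like ordinary files, so one big file is no longer tied to a single brick. A minimal sketch, with an illustrative volume name; note that sharding only applies to files created after it is switched on, and that it has mostly been tested with VM-image style workloads:

    gluster volume set bigvol features.shard on
    # size of the individual shard pieces; 64MB is the default
    gluster volume set bigvol features.shard-block-size 256MB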
This resulted in the absurd situation that the user could see a filesystem with a massive amount of free space, but still got a 'filesystem full' error.
This is misleading, yes. Do you have any numbers for the size of the file vs. the size of the brick?
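If it helps, those numbers can be pulled straight from gluster, which reports free space per brick (volume name again illustrative):

    # free space on every brick, as seen by glusterd
    gluster volume status bigvol detail | grep -E 'Brick|Disk Space Free'

or by running 'df -h' on each brick filesystem on the servers.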
The only workaround I've found is to set some values for the minimum free space per brick, but this is a very wonky solution, as those values will fluctuate as the filesystem is used.
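For the archives: the knob here is presumably cluster.min-free-disk, which makes DHT steer new files away from bricks below a free-space threshold (the default is 10%; volume name illustrative):

    # avoid placing new files on bricks with less than 15% free
    gluster volume set bigvol cluster.min-free-disk 15%

It only affects where a file lands when it is created, though, so a file that keeps growing after placement can still fill its brick, since gluster cannot know the final size up front.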
Surely I'm missing something obvious? A filesystem with that much free space should not randomly give an error like that?
Interested to hear some best practices for this kind of stuff.
/andreas
--
"economics is a pseudoscience; the astrology of our time"
Kim Stanley Robinson
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://lists.gluster.org/mailman/listinfo/gluster-users