node1:~# gluster volume quota homedir list | grep PB
/storage/home1    3.0GB    90%    16384.0PB    3.0GB
/storage/home2    1.0GB    90%    16384.0PB    1.0GB
(the header line is stripped by the grep; the columns are Path, Hard-limit, Soft-limit, Used, Available)
du on the actual directories (brick paths):
node1:~# du -hs /data/storage/home1/
368K /data/storage/home1/
node1:~# du -hs /data/storage/home2
546K /data/storage/home2
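For comparison, the size the quota translator has recorded can be read straight off the brick via its extended attributes (a diagnostic sketch; the trusted.glusterfs.quota.* xattrs are only visible on the brick path as root, not on the NFS mount):

node1:~# getfattr -d -m 'trusted.glusterfs.quota' -e hex /data/storage/home1
node1:~# getfattr -d -m 'trusted.glusterfs.quota' -e hex /data/storage/home2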
I am using a replicated volume (replica count 2), mounted over NFS. The individual bricks are formatted as ext4. If I remove the quotas, restart the replicated volume, and then re-add the quotas, gluster quota shows the correct values (instead of 16384.0PB). But once file transfers start and files are added/deleted, these directories begin showing incorrect values again.
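In case it helps, the reset sequence I use is roughly the following (a sketch; volume name and limits match my setup above, and `gluster volume stop` will ask for confirmation). Removing the individual limits with `gluster volume quota homedir remove <path>` instead of disabling quota wholesale should behave the same:

node1:~# gluster volume quota homedir disable
node1:~# gluster volume stop homedir
node1:~# gluster volume start homedir
node1:~# gluster volume quota homedir enable
node1:~# gluster volume quota homedir limit-usage /storage/home1 3GB 90%
node1:~# gluster volume quota homedir limit-usage /storage/home2 1GB 90%
node1:~# gluster volume quota homedir list

After this, the Used column matches du until file churn starts again.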
Just wanted to check whether this is a bug in gluster quota, and if so, whether there is any way to mitigate it?
I do see a similar bug, but it was filed against 3.4:
https://bugzilla.redhat.com/show_bug.cgi?id=1061068
Thanks