Re: Gluster quota showing wrong space utilization (showing 16384.0 PB)

Hi Omkar,

We will run this test case on our lab machines.
Meanwhile, can you provide the brick logs, run the attached script against each brick path, and send us the script's output?
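
(A minimal sketch of how the script might be invoked, assuming the attached quota-verify.gz unpacks to an executable script that takes a brick path as its argument -- the exact invocation is an assumption, adjust to the actual script:)

    gunzip quota-verify.gz && chmod +x quota-verify
    # run on each node, once per local brick path
    ./quota-verify /brick1/homedir > quota-verify.$(hostname).out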

Thanks,
Vijay


On Tuesday 03 February 2015 08:06 AM, Omkar Kulkarni wrote:
Hi Vijay,

I was able to recreate the issue.
1) Created a replicated volume across 2 CentOS 6.5 machines:

i) gluster volume create homedir replica 2 transport tcp node1:/brick1/homedir node2:/brick1/homedir

gluster volume set homedir network.ping-timeout 10
gluster volume set homedir performance.cache-size 1GB
gluster volume set homedir nfs.rpc-auth-allow 10.99.30.20,10.99.30.21
gluster volume set homedir auth.allow 10.99.30.20,10.99.30.21
gluster volume set homedir features.quota on
gluster volume set homedir features.quota-timeout 0
gluster volume set homedir features.quota-deem-statfs on


ii) Set up quotas on around 900 folders (see the sketch after the count below):


node1:~# gluster volume quota homedir list | wc -l

903
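
(For reference, a minimal sketch of how that many limits could be applied -- the /storage/homeN path pattern and the 3GB limit are assumptions based on the listing further down, not the exact commands used; the real limits varied per directory:)

    for n in $(seq 1 900); do
        gluster volume quota homedir limit-usage /storage/home$n 3GB
    done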


iii) Created an rsync job that creates/deletes files with different names in 20 different folders, and ran it every 2 minutes (a hypothetical equivalent is sketched below).
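
(The actual job isn't included here; the script below is a hypothetical reconstruction of the churn -- the script name, staging path, and mount point are all assumptions:)

    #!/bin/bash
    # churn.sh: copy a tree of uniquely named files into 20 quota'd folders,
    # then delete them, so quota accounting sees constant adds and removes
    for n in $(seq 1 20); do
        rsync -a /staging/ /mnt/homedir/storage/home$n/
        rm -rf /mnt/homedir/storage/home$n/*
    done

    # crontab entry to run it every 2 minutes:
    # */2 * * * * /usr/local/bin/churn.sh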

iv) Within around 2 days, some directories showed PB values for space used.



On Fri, Jan 23, 2015 at 1:01 AM, Vijaikumar M <vmallika@xxxxxxxxxx> wrote:
Hi Omkar,

If the issue is happening consistently, could you provide a reproducible test case for it?

Thanks,
Vijay


On Wednesday 21 January 2015 02:47 AM, Omkar Kulkarni wrote:
Hi Guys,

I am using Gluster 3.5.2, and for a couple of directories the quota list shows used space of 16384.0PB:

node1:~# gluster volume quota homedir list | grep PB

/storage/home1          3.0GB       90%   16384.0PB   3.0GB
/storage/home2          1.0GB       90%   16384.0PB   1.0GB



du on the actual directories:


node1:~# du -hs /data/storage/home1/

368K    /data/storage/home1/


node1:~# du -hs /data/storage/home2

546K    /data/storage/home2


I am using a replicated volume (replica count 2), mounted over NFS; the individual bricks are formatted as ext4. If I remove the quotas, restart the volume, and re-add the quotas, the quota list shows correct values again (instead of 16384.0PB). But once file transfers resume and files are added/deleted, these directories start showing the incorrect values again. The workaround sequence is sketched below.
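
(A sketch of that workaround, assuming the standard quota CLI; each directory's limit has to be re-applied after quota is re-enabled, and the path/limit shown is just one example:)

    gluster volume quota homedir disable
    gluster volume stop homedir
    gluster volume start homedir
    gluster volume quota homedir enable
    gluster volume quota homedir limit-usage /storage/home1 3GB   # repeat per directory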


Just wanted to check whether this is a bug in the Gluster quota feature, and/or whether there is any way to mitigate it.



I do see a similar bug, but filed against 3.4:

https://bugzilla.redhat.com/show_bug.cgi?id=1061068


Thanks






 

 






Attachment: quota-verify.gz
Description: application/gzip

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users
