Hi Matthew,
If you are sure that "/mnt/raid6-storage/storage/data/projects/MEOPAR/"
is the only directory with wrong accounting, and that its immediate sub-directories have correct xattr values, then setting the dirty xattr and doing a stat after that should resolve the issue:
1) setfattr -n trusted.glusterfs.quota.dirty -v 0x3100 /mnt/raid6-storage/storage/data/projects/MEOPAR/
2) stat /mnt/raid6-storage/storage/data/projects/MEOPAR/
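Once the stat completes, the quota marker should re-aggregate the directory from its children and clear the dirty flag. A quick way to confirm on the gluster07 brick (a sketch in the spirit of your own getfattr commands; my expectation that dirty reads 0x3000 again is based on the healthy directories in your output):

getfattr --absolute-names -d -m . -e hex \
    /mnt/raid6-storage/storage/data/projects/MEOPAR/ \
    | egrep 'quota\.(dirty|size)'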
Could you please share what kind of operations happen on this directory? That would help us root-cause (RCA) the issue.
If you think this could be true elsewhere in the filesystem as well, use the following scripts to identify other affected directories:
1) https://github.com/gluster/glusterfs/blob/master/extras/quota/xattr_analysis.py
2) https://github.com/gluster/glusterfs/blob/master/extras/quota/log_accounting.sh
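In addition to those scripts, a quick manual spot-check on a single brick could look like the following (a rough sketch: it sums only the 8-byte size field of the immediate sub-directories' quota.size xattrs, skips plain files, and assumes your brick layout; bash arithmetic is signed 64-bit, so a wrapped-around total shows up as a negative number):

dir=/mnt/raid6-storage/storage/data/projects/MEOPAR
total=0
for child in "$dir"/*/; do
    hex=$(getfattr --absolute-names -e hex \
          -n trusted.glusterfs.quota.size "$child" 2>/dev/null \
          | awk -F= '/quota\.size/ {print $2}')
    [ -n "$hex" ] || continue
    total=$(( total + ${hex:0:18} ))  # "0x" + first 16 hex digits = size field
done
echo "sum of sub-directory size fields: $total bytes"

Comparing that sum against the size field of the parent directory's own trusted.glusterfs.quota.size should show the mismatch.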
Regards,
Sanoj
On Mon, Aug 28, 2017 at 12:39 PM, Raghavendra Gowdappa <rgowdapp@xxxxxxxxxx> wrote:
+sanoj
----- Original Message -----
> From: "Matthew B" <matthew.has.questions@gmail.com>
> To: gluster-devel@xxxxxxxxxxx
> Sent: Saturday, August 26, 2017 12:45:19 AM
> Subject: Quota Used Value Incorrect - Fix now or after upgrade
>
> Hello,
>
> I need some advice on fixing an issue with quota on my gluster volume. It's
> running version 3.7, distributed volume, with 7 nodes.
>
> # gluster --version
> glusterfs 3.7.13 built on Jul 8 2016 15:26:18
> Repository revision: git://git.gluster.com/glusterfs.git
> Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
> GlusterFS comes with ABSOLUTELY NO WARRANTY.
> You may redistribute copies of GlusterFS under the terms of the GNU General
> Public License.
>
> # gluster volume info storage
>
> Volume Name: storage
> Type: Distribute
> Volume ID: 6f95525a-94d7-4174-bac4-e1a18fe010a2
> Status: Started
> Number of Bricks: 7
> Transport-type: tcp
> Bricks:
> Brick1: 10.0.231.50:/mnt/raid6-storage/storage
> Brick2: 10.0.231.51:/mnt/raid6-storage/storage
> Brick3: 10.0.231.52:/mnt/raid6-storage/storage
> Brick4: 10.0.231.53:/mnt/raid6-storage/storage
> Brick5: 10.0.231.54:/mnt/raid6-storage/storage
> Brick6: 10.0.231.55:/mnt/raid6-storage/storage
> Brick7: 10.0.231.56:/mnt/raid6-storage/storage
> Options Reconfigured:
> changelog.changelog: on
> geo-replication.ignore-pid-check: on
> geo-replication.indexing: on
> nfs.disable: no
> performance.readdir-ahead: on
> features.quota: on
> features.inode-quota: on
> features.quota-deem-statfs: on
> features.read-only: off
>
> # df -h /storage/
> Filesystem Size Used Avail Use% Mounted on
> 10.0.231.50:/storage 255T 172T 83T 68% /storage
>
>
> I am planning to upgrade to 3.10 (or 3.12 when it's available) but I have a
> number of quotas configured, and one of them (below) has a very wrong "Used"
> value:
>
> # gluster volume quota storage list | egrep "MEOPAR "
> /data/projects/MEOPAR 8.5TB 80%(6.8TB) 16384.0PB 17.4TB No No
>
>
> I have confirmed the bad value appears in one of the bricks' current xattrs,
> and it looks like the issue has been encountered previously on bricks 04,
> 03, and 06. (gluster07 does not have a trusted.glusterfs.quota.size.1, as it
> was recently added.)
>
> $ ansible -i hosts gluster-servers[0:6] -u <user> --ask-pass -m shell -b
> --become-method=sudo --ask-become-pass -a "getfattr --absolute-names -m . -d
> -e hex /mnt/raid6-storage/storage/data/projects/MEOPAR | egrep
> '^trusted.glusterfs.quota.size'"
> SSH password:
> SUDO password[defaults to SSH password]:
>
> gluster02 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x0000011ecfa56c00000000000005cd6d000000000006d478
> trusted.glusterfs.quota.size.1=0x0000010ad4a452000000000000012a0300000000000150fa
>
> gluster05 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x00000033b8e92200000000000004cde8000000000006b1a4
> trusted.glusterfs.quota.size.1=0x0000010dca277c00000000000001297d0000000000015005
>
> gluster01 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0x0000003d4d4348000000000000057616000000000006afd2
> trusted.glusterfs.quota.size.1=0x00000133fe211e00000000000005d161000000000006cfd4
>
> gluster04 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xffffff396f3e9400000000000004d7ea0000000000068c62
> trusted.glusterfs.quota.size.1=0x00000106e672480000000000000138f0000000000012fb2
>
> gluster03 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xfffffd02acabf000000000000003599000000000000643e2
> trusted.glusterfs.quota.size.1=0x00000114e20f5e0000000000000113b30000000000012fb2
>
> gluster06 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xffffff0c98de440000000000000536e40000000000068cf2
> trusted.glusterfs.quota.size.1=0x0000013532664e00000000000005e73f000000000006cfd4
>
> gluster07 | SUCCESS | rc=0 >>
> trusted.glusterfs.quota.size=0xfffffa3d7c1ba60000000000000a9ccb000000000005fd2f
>
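> Note that on gluster03, 04, 06, and 07 the first 8-byte field of the
> current quota.size value starts with 0xff..., i.e. it has gone
> negative, and the 16384.0PB "Used" figure is consistent with that: a
> small negative signed 64-bit total rendered as unsigned comes out just
> under 2^64 bytes = 16384 PB. A quick decode of those size fields (a
> sketch; the size / file-count / dir-count field layout is my reading
> of the quota xattr format, and bash arithmetic is signed 64-bit, so
> the wraparound shows up directly):
>
> for v in 0xffffff396f3e9400 0xfffffd02acabf000 \
>          0xffffff0c98de4400 0xfffffa3d7c1ba600; do
>     echo "$v -> $(( v )) bytes"  # negative => broken accounting
> done
>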
> And reviewing the subdirectories of that folder on the impacted server,
> you can see that none of the direct children have such incorrect values:
>
> [root@gluster07 ~]# getfattr --absolute-names -m . -d -e hex
> /mnt/raid6-storage/storage/data/projects/MEOPAR/*
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir1>
> ...
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000fb6841820000000000000074730000000000000dae
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.size=0x000000fb6841820000000000000074730000000000000dae
>
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir2>
> ...
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x0000000416d5f4000000000000000baa0000000000000441
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.limit-set=0x0000010000000000ffffffffffffffff
> trusted.glusterfs.quota.size=0x0000000416d5f4000000000000000baa0000000000000441
>
> # file: /mnt/raid6-storage/storage/data/projects/MEOPAR/<dir3>
> ...
> trusted.glusterfs.quota.7209b677-f4b9-4d82-a382-0733620e6929.contri=0x000000110f2c4e00000000000002a76a000000000006ad7d
> trusted.glusterfs.quota.dirty=0x3000
> trusted.glusterfs.quota.limit-set=0x0000020000000000ffffffffffffffff
> trusted.glusterfs.quota.size=0x000000110f2c4e00000000000002a76a000000000006ad7d
>
>
> Can I fix this on the current version of gluster (3.7) on just the one brick
> before I upgrade? Or would I be better off upgrading to 3.10 and trying to
> fix it then?
>
> I have reviewed information here:
>
> http://lists.gluster.org/pipermail/gluster-devel/2016-February/048282.html
> http://lists.gluster.org/pipermail/gluster-users.old/2016-September/028365.html
>
> It seems like, since I am on Gluster 3.7, I can disable quotas and re-enable
> them, and everything will be recalculated, incrementing the index on the
> quota.size xattr. But with the size of the volume, that will take a very
> long time.
>
> Could I simply mark the impacted directory as dirty on gluster07? Or update
> the xattr directly as the sum of the sizes of dir1, 2, and 3?
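>
> (For reference, if I went the manual route, summing the size fields of the
> three sub-directories above would be something like:
>
> printf '0x%016x\n' $(( 0x000000fb68418200 + 0x0000000416d5f400 + 0x000000110f2c4e00 ))
>
> though that covers only the size field, not the file/dir counts, and any
> files sitting directly under MEOPAR would be missed.)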
>
> Thanks,
> -Matthew
>
> _______________________________________________
> Gluster-devel mailing list
> Gluster-devel@xxxxxxxxxxx
> http://lists.gluster.org/mailman/listinfo/gluster-devel