Re: incorrect usage value on a directory

Hi Sergei,

You can set the "dirty" marker xattr using the key trusted.glusterfs.quota.dirty. You have two choices:

1. Set it through a gluster mount. This will set the key on _all_ bricks.

[root@unused personal]# gluster volume info
No volumes present
[root@unused personal]# rm -rf /home/export/ptop-1 &&  gluster volume create ptop-1 booradley:/home/export/ptop-1/
volume create: ptop-1: success: please start the volume to access data
[root@unused personal]# gluster volume start ptop-1
volume start: ptop-1: success


[root@unused personal]# mount -t glusterfs booradley:/ptop-1 /mnt/glusterfs
[root@unused personal]# cd /mnt/glusterfs
[root@unused glusterfs]# ls
[root@unused glusterfs]# mkdir dir
[root@unused glusterfs]# ls
dir
[root@unused glusterfs]# setfattr -n trusted.glusterfs.quota.dirty -v 1 dir
[root@unused glusterfs]# getfattr -e hex -m . -d dir
# file: dir
security.selinux=0x73797374656d5f753a6f626a6563745f723a6675736566735f743a733000

[root@unused glusterfs]# getfattr -e hex -m . -d /home/export/ptop-1/dir/
getfattr: Removing leading '/' from absolute path names
# file: home/export/ptop-1/dir/
security.selinux=0x756e636f6e66696e65645f753a6f626a6563745f723a686f6d655f726f6f745f743a733000
trusted.gfid=0xbea41d7780e4445e93dc379b0a43bb7a
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.quota.dirty=0x31
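
On a volume with more than one brick, the same trusted.glusterfs.quota.dirty=0x31 entry should appear under each brick's export directory; for example (the brick path below is hypothetical, substitute your own):
getfattr -e hex -m . -d /home/export/<other-brick>/dir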

2. If you find the usage wrong only on an individual brick, you can just set the xattr on the backend directly. For example, in the volume above, we can also do:
setfattr -n trusted.glusterfs.quota.dirty -v 1 /home/export/ptop-1/dir
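
To confirm that the xattr was set on that brick, the same getfattr check used above should work against the backend path:
getfattr -e hex -m . -d /home/export/ptop-1/dir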

regards,
Raghavendra

----- Original Message -----
> From: "Manikandan Selvaganesh" <mselvaga@xxxxxxxxxx>
> To: "Sergei Gerasenko" <gerases@xxxxxxxxx>
> Cc: "Sergei Gerasenko" <sgerasenko74@xxxxxxxxx>, "gluster-users" <gluster-users@xxxxxxxxxxx>
> Sent: Tuesday, August 30, 2016 10:57:33 PM
> Subject: Re:  incorrect usage value on a directory
> 
> Hi Sergei,
> 
> Apologies for the delay. I am extremely sorry, I was stuck on something
> important.
> It's great that you figured out the solution.
> 
> Whenever you set the dirty flag as mentioned in the previous thread, the quota
> values will be recalculated.
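> 
> For example, once the recalculation has completed, the refreshed numbers
> should show up in the usual listing (substitute your own volume name; this is
> just the standard quota CLI):
> 
> gluster volume quota <VOLNAME> list
> 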
> Yes, as you mentioned, a lot of changes have gone in since 3.7. We
> introduced the inode-quota feature in 3.7, implemented quota versioning
> in 3.7.5, and then enhanced the quota enable/disable feature in 3.7.12. So a
> lot of code changes have been made.
> 
> In case you would like to know more, you can refer to our specs[1].
> 
> [1] https://github.com/gluster/glusterfs-specs
> 
> On Tue, Aug 30, 2016 at 9:27 PM, Sergei Gerasenko < gerases@xxxxxxxxx >
> wrote:
> 
> 
> 
> The problem must have started because of an upgrade to 3.7.12 from an older
> version. Not sure exactly how.
> 
> 
> 
> 
> On Aug 30, 2016, at 10:44 AM, Sergei Gerasenko < gerases@xxxxxxxxx > wrote:
> 
> It seems that it did the trick. The usage is being recalculated. I’m glad to
> be posting a solution to the original problem on this thread. It’s so common
> for threads to contain only incomplete or partial solutions.
> 
> Thanks,
> Sergei
> 
> 
> 
> 
> On Aug 29, 2016, at 3:41 PM, Sergei Gerasenko < sgerasenko74@xxxxxxxxx >
> wrote:
> 
> I found an informative thread on a similar problem:
> 
> http://www.spinics.net/lists/gluster-devel/msg18400.html
> 
> According to the thread, it seems that the solution is to disable the quota,
> which will clear the relevant xattrs, and then re-enable the quota, which
> should force a recalc. I will try this tomorrow.
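> 
> For reference, that disable/enable cycle is just the standard quota commands
> (substitute your own volume name; re-enabling should kick off a fresh crawl):
> 
> gluster volume quota <VOLNAME> disable
> gluster volume quota <VOLNAME> enable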
> 
> On Thu, Aug 11, 2016 at 9:31 AM, Sergei Gerasenko < gerases@xxxxxxxxx >
> wrote:
> 
> 
> 
> Hi Selvaganesh,
> 
> Thanks so much for your help. I didn’t have that option on, probably because I
> originally had a lower version of gluster and then upgraded. I turned the
> option on just now.
> 
> The usage is still off. Should I wait a certain time?
> 
> Thanks,
> Sergei
> 
> 
> 
> 
> On Aug 9, 2016, at 7:26 AM, Manikandan Selvaganesh < mselvaga@xxxxxxxxxx >
> wrote:
> 
> Hi Sergei,
> 
> When quota is enabled, quota-deem-statfs should be set to ON (this is the
> default in recent versions). But from your 'gluster v info' output, it looks
> like quota-deem-statfs is not on.
> 
> Could you please check and confirm the same in
> /var/lib/glusterd/vols/<VOLNAME>/info? If you do not find the option
> 'features.quota-deem-statfs=on', then this feature is turned off. Did you
> turn it off? You can turn it on with
> 'gluster volume set <VOLNAME> quota-deem-statfs on'.
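> 
> A quick way to check the current value is to grep that info file on one of
> the server nodes (path as above, with your own volume name):
> 
> grep quota-deem-statfs /var/lib/glusterd/vols/<VOLNAME>/info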
> 
> To know more about this feature, please refer to [1].
> 
> [1]
> https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Directory%20Quota/
> 
> 
> On Tue, Aug 9, 2016 at 5:43 PM, Sergei Gerasenko < gerases@xxxxxxxxx > wrote:
> 
> 
> 
> Hi,
> 
> The gluster version is 3.7.12. Here’s the output of `gluster info`:
> 
> Volume Name: ftp_volume
> Type: Distributed-Replicate
> Volume ID: SOME_VOLUME_ID
> Status: Started
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: host03:/data/ftp_gluster_brick
> Brick2: host04:/data/ftp_gluster_brick
> Brick3: host05:/data/ftp_gluster_brick
> Brick4: host06:/data/ftp_gluster_brick
> Brick5: host07:/data/ftp_gluster_brick
> Brick6: host08:/data/ftp_gluster_brick
> Options Reconfigured:
> features.quota: on
> 
> Thanks for the reply!! I thought nobody would reply at this point :)
> 
> Sergei
> 
> 
> 
> 
> On Aug 9, 2016, at 6:03 AM, Manikandan Selvaganesh < mselvaga@xxxxxxxxxx >
> wrote:
> 
> Hi,
> 
> Sorry, I missed the mail. May I know which version of gluster you are using?
> Also, please paste the output of
> 'gluster v info'.
> 
> On Sat, Aug 6, 2016 at 8:19 AM, Sergei Gerasenko < gerases@xxxxxxxxx > wrote:
> 
> 
> 
> Hi,
> 
> I'm playing with quotas and the quota list command on one of the directories
> claims it uses 3T, whereas the du command says only 512G is used.
> 
> Anything I can do to force a re-calc, re-crawl, etc?
> 
> Thanks,
> Sergei
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> --
> Regards,
> Manikandan Selvaganesh.
> 
> 
> 
> 
> --
> Regards,
> Manikandan Selvaganesh.
> 
> 
> 
> 
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
> 
> 
> 
> --
> Regards,
> Manikandan Selvaganesh.
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users@xxxxxxxxxxx
> http://www.gluster.org/mailman/listinfo/gluster-users
_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://www.gluster.org/mailman/listinfo/gluster-users



