Re: Quota problems with dispersed volumes

On 10/29/2014 04:00 PM, Raghavendra Gowdappa wrote:
----- Original Message -----
It seems that there are some other xattrs visible from the client side. I've
identified 'trusted.glusterfs.quota.*.contri'. Are there any other
xattrs that I should handle on the client side?

This is an internal xattr which only the marker (disk usage accounting) xlator uses. Applications running on glusterfs shouldn't see it. If you are seeing this xattr from the mount, we should filter it from being listed (at fuse-bridge and gfapi).

I see it in the ec xlator. I'm not sure if it's filtered out later.

The problem is that sometimes I get different values from different bricks (probably while it's being modified), and this is detected as an inconsistency. I'll just ignore this attribute.



It seems that there's also a 'trusted.glusterfs.quota.dirty'

This is again an internal xattr. You should not worry about handling this. It also needs to be filtered from being displayed to applications.

I haven't seen it in ec, but I'll ignore it just in case...
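To make the filtering concrete, here is a minimal C sketch (not the actual ec code; the helper name is mine) of how both marker-internal xattrs could be recognized and skipped before comparing the answers from the bricks:

    #include <fnmatch.h>
    #include <string.h>

    static int
    is_internal_quota_xattr(const char *name)
    {
        /* Per-brick accounting data; values may legitimately differ
         * across bricks while the marker is updating them. */
        if (fnmatch("trusted.glusterfs.quota.*.contri", name, 0) == 0)
            return 1;

        /* Marker-internal dirty flag; never meaningful to clients. */
        if (strcmp(name, "trusted.glusterfs.quota.dirty") == 0)
            return 1;

        return 0;
    }

Anything this predicate matches would neither be reported to the application nor used in the inter-brick consistency check.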


and
'trusted.glusterfs.quota.limit-set'.

This one should be visible from the mount point, as this xattr holds the quota limit set on that inode. You can handle it in the disperse xlator by picking the value from any of its children.
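A sketch of that "pick from any child" handling for trusted.glusterfs.quota.limit-set could look like this (the struct and helper are illustrative, not the real ec types):

    #include <stddef.h>

    /* One per-brick reply for a getxattr request (illustrative type). */
    struct brick_answer {
        int         valid;  /* non-zero if this brick returned the xattr */
        const void *value;  /* xattr payload */
        size_t      len;    /* payload length */
    };

    /* limit-set is written identically to every brick, so the first
     * valid reply is as good as any other. */
    static const struct brick_answer *
    pick_any_valid(const struct brick_answer *answers, int count)
    {
        for (int i = 0; i < count; i++)
            if (answers[i].valid)
                return &answers[i];

        return NULL; /* no brick returned the xattr */
    }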


How should I handle visible xattrs in the ec xlator if they have different
values on each brick?

trusted.glusterfs.quota.size is handled by choosing the maximum value.

This depends on how ec handles the files/directories and on the meaning of the xattr. For example, trusted.glusterfs.quota.size represents the size of a file/directory. When read from a brick, the value is the size of the directory on that brick; when read from a cluster translator like dht, it is the size of that directory across the whole cluster. So in dht we add up the values from all bricks and set the sum as the value. In the case of replicate/afr, however, we just pick the value from any of the subvolumes.

I think this part is already solved. I use the maximum value from all bricks (as afr does) and then scale it depending on the volume configuration. I've run tests and it seems to work well.
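As an illustration of that aggregation, here is a sketch under the assumption that each brick of a dispersed volume stores roughly 1/fragments of every file; the function name and parameters are hypothetical, not the ec code:

    #include <stdint.h>

    /* brick_sizes: quota.size as reported by each answering brick.
     * data_fragments: number of data (non-redundancy) bricks. */
    static uint64_t
    ec_combine_quota_size(const uint64_t *brick_sizes, int answers,
                          int data_fragments)
    {
        uint64_t max = 0;

        for (int i = 0; i < answers; i++)
            if (brick_sizes[i] > max)
                max = brick_sizes[i];

        /* Each brick holds ~1/data_fragments of every file, so the
         * volume-wide size is approximately max * data_fragments. */
        return max * (uint64_t)data_fragments;
    }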

Thanks,

Xavi
_______________________________________________
Gluster-devel mailing list
Gluster-devel@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-devel



