Quota problems with Gluster 3.3b2

  Hello Saurabh,

  Sorry for the long delay getting back to you, and thank you for 
your reply!

  To reproduce this, I run a single st command like the one below; nothing 
else is running in parallel:
st -A http://IP:80/auth/v1.0 -U r2:user -K pass upload test manual.txt

  If I run
/usr/local/sbin/gluster volume quota r2 disable
  the st upload succeeds. But if I run
/usr/local/sbin/gluster volume quota r2 enable
  the upload hangs with the permission error that I described earlier.
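
  In case it is useful, this is the full sequence I repeat (the IP, account 
and credentials are placeholders here, and the quota disable step asks for 
a y/n confirmation). With quota disabled the upload works:

/usr/local/sbin/gluster volume quota r2 disable
st -A http://IP:80/auth/v1.0 -U r2:user -K pass upload test manual.txt

  Re-enabling quota and retrying the exact same upload gives the Permission 
denied error:

/usr/local/sbin/gluster volume quota r2 enable
st -A http://IP:80/auth/v1.0 -U r2:user -K pass upload test manual.txt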

  My volume info:
# gluster volume info r2

Volume Name: r2
Type: Distributed-Replicate
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 192.168.4.103:/gluster/disk1
Brick2: 192.168.4.103:/gluster/disk2
Brick3: 192.168.4.103:/gluster/disk3
Brick4: 192.168.4.103:/gluster/disk4
Brick5: 192.168.4.103:/gluster/disk5
Brick6: 192.168.4.103:/gluster/disk6
Brick7: 192.168.4.103:/gluster/disk7
Brick8: 192.168.4.103:/gluster/disk8
Brick9: 192.168.4.103:/gluster/disk9
Brick10: 192.168.4.103:/gluster/disk10
Brick11: 192.168.4.103:/gluster/disk11
Brick12: 192.168.4.103:/gluster/disk12
Options Reconfigured:
performance.cache-size: 6GB
cluster.stripe-block-size: 1MB
features.quota: on
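
  If it helps with debugging, I can also dump the extended attributes on the 
brick directories; as far as I understand, the quota translator keeps its 
accounting in trusted.glusterfs.quota.* xattrs, so something like the 
following on one of the bricks (run as root on the server) should show what 
quota has recorded for /test:

getfattr -d -m 'trusted.glusterfs.quota' -e hex /gluster/disk1/test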

  Thanks in advance,
Daniel

On 1/16/12 9:29 PM, Saurabh Jain wrote:
> Hello Daniel,
>
>     I am trying to reproduce the problem. In the meantime, please send me the "volume info" output and the sequence of steps you are running; it did not fail for me with quota enabled. Also, please mention whether you are running the operations in parallel.
>
>
> Thanks,
> Saurabh
>
>    Hi everyone,
>
>    I'm playing with Gluster 3.3b2, and everything works fine when
> uploading through swift. However, when I enable quotas on Gluster,
> I randomly get permission errors: sometimes I can upload files, but most
> of the time I can't.
>
>    I'm mounting the partitions with the acl flag, and I've tried wiping
> everything out and starting from scratch, with the same result. As soon
> as I disable quotas everything works again. I don't even need to set any
> limit-usage for the errors to crop up.
>
>    Any idea?
>
> Daniel
>
>
>
>    Relevant info:
>
> =========================
>    To enable quotas I use the following commands:
>
> # /usr/local/sbin/gluster volume quota r2 enable
> Enabling quota has been successful
>
> # /usr/local/sbin/gluster volume quota r2 list
> Limit not set on any directory
>
> # /usr/local/sbin/gluster volume quota r2 limit-usage /test 10GB
> limit set on /test
>
> # /usr/local/sbin/gluster volume quota r2 list
>       path          limit_set         size
> ----------------------------------------------------------------------------------
> /test                      10GB               88.0KB
>
> # /usr/local/sbin/gluster volume quota r2 disable
> Disabling quota will delete all the quota configuration. Do you want to
> continue? (y/n) y
> Disabling quota has been successful
>
> =========================
>    Directory listing:
> ls -la *
> test:
> total 184
> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
> -rw------- 1 user user 82735 Jan 13 12:07 manual.txt
>
> tmp:
> total 96
> drwxrwxrwx 2 user user 24576 Jan 13 12:07 .
> drwxrwxrwx 5 user user 24576 Jan 13 12:03 ..
>
> ==========================
> Gluster logs:
> Unsuccessful write:
>
> [2012-01-13 12:06:27.97140] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
> [2012-01-13 12:06:27.97704] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /tmp
> [2012-01-13 12:06:27.102813] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /test
> [2012-01-13 12:06:27.103199] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /test
> [2012-01-13 12:06:27.106876] E
> [stat-prefetch.c:695:sp_remove_caches_from_all_fds_opened]
> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/mount/fuse.so(fuse_setxattr_resume+0x148)
> [0x2acd7b862118]
> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/debug/io-stats.so(io_stats_setxattr+0x15f)
> [0x2aaaae8cf71f]
> (-->/usr/local/lib/glusterfs/3.3beta2/xlator/performance/stat-prefetch.so(sp_setxattr+0x6c)
> [0x2aaaae6bc3fc]))) 0-r2-stat-prefetch: invalid argument: inode
> [2012-01-13 12:06:27.164168] I
> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
> operation failed: Permission denied
> [2012-01-13 12:06:27.164211] I
> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
> operation failed: Permission denied
> [2012-01-13 12:06:27.164227] W [dht-rename.c:480:dht_rename_cbk]
> 0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
> denied)
> [2012-01-13 12:06:27.164855] W [fuse-bridge.c:1351:fuse_rename_cbk]
> 0-glusterfs-fuse: 706: /tmp/tmpyhBbAD ->  /test/manual.txt =>  -1
> (Permission denied)
> [2012-01-13 12:06:27.166115] I
> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-8: remote
> operation failed: Permission denied
> [2012-01-13 12:06:27.166142] I
> [client3_1-fops.c:1999:client3_1_rename_cbk] 0-r2-client-9: remote
> operation failed: Permission denied
> [2012-01-13 12:06:27.166156] W [dht-rename.c:480:dht_rename_cbk]
> 0-r2-dht: /tmp/tmpyhBbAD: rename on r2-replicate-4 failed (Permission
> denied)
> [2012-01-13 12:06:27.166763] W [fuse-bridge.c:1351:fuse_rename_cbk]
> 0-glusterfs-fuse: 707: /tmp/tmpyhBbAD ->  /test/manual.txt =>  -1
> (Permission denied)
>
> Successful write:
> [2012-01-13 12:07:02.49562] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /test
> [2012-01-13 12:07:02.50013] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /test
> [2012-01-13 12:07:02.52255] I [afr-common.c:1225:afr_launch_self_heal]
> 0-r2-replicate-4: background  entry self-heal triggered. path: /tmp
> [2012-01-13 12:07:02.52832] I
> [afr-self-heal-common.c:2022:afr_self_heal_completion_cbk]
> 0-r2-replicate-4: background  entry self-heal completed on /tmp
>
