ACL issue with 3.4.0 GA + XFS + native client?

Thanks Kaushal.

A little more information:

I was never able to update that setting, so I deleted the gluster 
volume, updated to the latest CentOS 6.4 kernel 
(2.6.32-358.14.1.el6.x86_64), and reformatted the underlying XFS file 
system.
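
For reference, the reformat was roughly along these lines; the device
and mount point below are placeholders, and the 512-byte inode size is
the setting commonly recommended for Gluster bricks:

    # recreate the brick file system with larger inodes so extended
    # attributes (including the ACLs) are more likely to fit inline
    mkfs.xfs -f -i size=512 /dev/sdb1
    mount /dev/sdb1 /export/brick1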

After that, I ran a few more tests and noticed a caching issue: when I
ran "ls -l", I would sometimes see a '+' appended to the dir/file in
question, and sometimes I would not.  See below.

[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x+ 2 root root 40 Jul 29 16:43 1
[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x 2 root root 40 Jul 29 16:43 1
[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x 2 root root 40 Jul 29 16:43 1
[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x 2 root root 40 Jul 29 16:43 1
[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x 2 root root 40 Jul 29 16:43 1
[root@fs-copo05 tmp]# ls -l
total 0
drwxrwxr-x+ 2 root root 40 Jul 29 16:43 1

After that, I set performance.cache-refresh-timeout to 0, and now I see 
the '+' next to the direntry each time I run 'ls -l' and getfacl.
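
For completeness, that was set with a command along these lines (the
volume name gv0 is assumed from the earlier messages quoted below):

    # make the io-cache translator revalidate cached data on every
    # access instead of serving it for the default 1-second window
    gluster volume set gv0 performance.cache-refresh-timeout 0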

Two things:

*  Is the above behavior expected?
*  What are the performance implications of setting 
performance.cache-refresh-timeout to 0?

Thanks.

On 07/24/2013 10:26 PM, Kaushal M wrote:
> A client which mounts a volume has N connections to the N bricks and
> 1 connection to glusterd. The connection to glusterd is used to fetch
> the volfiles and the ports of the bricks. For an already-mounted
> client this connection will still exist even if the volume is
> stopped. Glusterd checks these connections when trying to set any
> option. This is done to prevent older clients, which don't support
> the newer features, from getting a newer volfile. If this check
> weren't done, the client would fetch the new volfile, fail to
> understand it, and continue using the older volfile.
> We could enhance this by reporting exactly which clients are causing
> the issue. I'll file an RFE for this.
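>
> In the meantime, something like the following should show which
> clients are connected to a volume (a rough sketch; the exact output
> varies by release):
>
> gluster volume status <volname> clients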
>
> But, that said, stat-prefetch was part of earlier releases as well,
> and since it was being disabled, the 'volume set' command should have
> gone through. I'll check this out and reply here.
>
> Thanks,
> Kaushal
>
> On Thu, Jul 25, 2013 at 2:44 AM, Nicholas Majeran
> <nmajeran@suntradingllc.com> wrote:
>> That would make sense, but when I had the volume stopped, nothing should
>> have been connected to that volume, correct?  The volume set operation still
>> failed with the volume stopped.
>>
>>
>>
>> On 07/24/2013 03:49 PM, Joseph Landman wrote:
>>> It says the clients don't support the ops.  Is it possible to
>>> disconnect them one at a time until the command works?  That might
>>> not be possible, but it probably would tell you which client is
>>> problematic.  Also, these are native clients, as per the subject,
>>> right?
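>>>
>>> One rough way to spot lingering client connections (a sketch, not
>>> verified on this setup) is to look on each server for TCP sockets
>>> to glusterd's management port, 24007:
>>>
>>> netstat -tn | grep :24007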
>>>
>>> Sent from my iPad
>>>
>>> On Jul 24, 2013, at 2:52 PM, Nicholas Majeran
>>> <nmajeran@suntradingllc.com> wrote:
>>>
>>>> Even with the volume stopped, I was unable to unset stat-prefetch.
>>>> Anything else I should look at?
>>>>
>>>> On 07/23/2013 02:55 PM, Nicholas Majeran wrote:
>>>>> FWIW, I tried to disable the parameter on a stopped volume, which was
>>>>> successful.  I then started the volume and I could get/set the ACLs
>>>>> normally.
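>>>>>
>>>>> In commands, that sequence was roughly the following (the volume
>>>>> name is a placeholder for the test volume):
>>>>>
>>>>> gluster volume stop <volname>
>>>>> gluster volume set <volname> stat-prefetch off
>>>>> gluster volume start <volname>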
>>>>>
>>>>> I'm going to try the same procedure on the gv0 volume that threw the
>>>>> error previously.
>>>>> Thanks.
>>>>>
>>>>> On 07/23/2013 02:29 PM, Nicholas Majeran wrote:
>>>>>> When I tried to disable that parameter, I received the following:
>>>>>>
>>>>>> gluster> volume set gv0 stat-prefetch off
>>>>>> volume set: failed: One or more connected clients cannot support the
>>>>>> feature being set. These clients need to be upgraded or disconnected before
>>>>>> running this command again
>>>>>>
>>>>>> AFAICT, all the nodes are at the same version, but I did see
>>>>>> this in the logs after I ran the command:
>>>>>>
>>>>>> [2013-07-23 19:26:02.673304] E
>>>>>> [glusterd-op-sm.c:370:glusterd_check_client_op_version_support]
>>>>>> 0-management: One or more clients don't support the required op-version
>>>>>> [2013-07-23 19:26:02.673325] E
>>>>>> [glusterd-syncop.c:767:gd_stage_op_phase] 0-management: Staging of operation
>>>>>> 'Volume Set' failed on localhost : One or more connected clients cannot
>>>>>> support the feature being set. These clients need to be upgraded or
>>>>>> disconnected before running this command again
>>>>>> [2013-07-23 19:26:03.590547] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>> [2013-07-23 19:26:06.591224] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>> [2013-07-23 19:26:09.591912] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>> [2013-07-23 19:26:12.592601] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>> [2013-07-23 19:26:15.593282] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>> [2013-07-23 19:26:18.593946] E [socket.c:2788:socket_connect]
>>>>>> 0-management: connection attempt failed (Connection refused)
>>>>>>
>>>>>>
>>>>>> On 07/23/2013 01:45 PM, Vijay Bellur wrote:
>>>>>>> On 07/23/2013 09:28 PM, Nicholas Majeran wrote:
>>>>>>>> A little more detail... on files, I still can't get this to
>>>>>>>> work, but when I run ls and {get,set}facl on '.', it seems to
>>>>>>>> work:
>>>>>>>>
>>>>>>>> [root@s1 tmp]# pwd
>>>>>>>> /mnt/glusterfs/tmp
>>>>>>>> [root@s1 tmp]# ls -ld .
>>>>>>>> drwxrwxr-x+ 4 nmajeran root 4422 Jul 22 15:29 .
>>>>>>>> [root@suncosmbgw1 tmp]# ls -ld ../tmp
>>>>>>>> drwxrwxr-x 4 nmajeran root 4422 Jul 22 15:29 ../tmp
>>>>>>>> [root@s1 tmp]# getfacl .
>>>>>>>> # file: .
>>>>>>>> # owner: nmajeran
>>>>>>>> # group: root
>>>>>>>> user::rwx
>>>>>>>> user:root:rwx
>>>>>>>> user:user1:rwx
>>>>>>>> group::r-x
>>>>>>>> group:g1:rwx
>>>>>>>> group:g2:r-x
>>>>>>>> group:g3:r-x
>>>>>>>> mask::rwx
>>>>>>>> other::r-x
>>>>>>>>
>>>>>>>> [root@s1 tmp]# getfacl ../tmp
>>>>>>>> # file: ../tmp
>>>>>>>> # owner: nmajeran
>>>>>>>> # group: root
>>>>>>>> user::rwx
>>>>>>>> group::rwx
>>>>>>>> other::r-x
>>>>>>>
>>>>>>> Does disabling stat-prefetch address the problem permanently? You can
>>>>>>> disable stat-prefetch via:
>>>>>>>
>>>>>>> gluster volume set <volname> stat-prefetch off
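>>>>>>>
>>>>>>> If the set goes through, the option should then show up under
>>>>>>> 'Options Reconfigured:' in the output of:
>>>>>>>
>>>>>>> gluster volume info <volname>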
>>>>>>>
>>>>>>> -Vijay
>>>> --
>>>> Nick Majeran
>>>> Sun Trading LLC
>>>> 312-229-9608
>>>>
>>
>> --
>> Nick Majeran
>> Sun Trading LLC
>> 312-229-9608
>>


