Hi,

Many thanks for the in-depth explanation! I understand this as (put
simply): the problem is not in FUSE, nor really in Gluster, but in the
need to support the combination of the native and NFS protocols.

Two thoughts (free ideas to take or throw away at will; I've not yet had
a chance to look into the code):

A) How about an option to set this at the volume level, if the
administrator knows they will use one protocol exclusively (e.g. set a
flag via configuration, or automatically if NFS is disabled)? Seems like
a potential quick fix, but probably not the best long-term solution.

B) How about bumping the protocol version, but making the new server
version compatible with the previous one? I'm thinking of:

- Gluster client C initiates the connection with server A using version
X of the protocol; server A records an internal connection state/flag
noting whether the protocol used is Gluster v.X / NFS / CIFS / ...
- Situation 1: server A detects that the file needed is on a brick on
server B, and therefore forwards the request to server B via protocol
Gluster v.X+1. Server B detects protocol Gluster v.X+1 and reads the
flag describing the protocol that was originally used.
- Situation 2: server A detects that the file needed is on a brick it
controls, and therefore applies the permission model as needed.

I assume the protocol version is embedded in the requests, or at least
in the connection requests. The flag could be based on the protocol
used to access the first server, or could simply be "file permission
check necessary or not".

Once again, thanks for the information and the work of all the devs!

Andrew

On Thu, Jan 20, 2011 at 9:28 PM, Anand Avati <anand.avati at gmail.com> wrote:
> The problem is this -
> FUSE already does the right permission check (including aux groups)
> before forwarding requests to gluster. But it does not forward the list
> of aux groups along with the request to gluster.
> NFS expects gluster to do permission checks (even though the NFS client
> does some obvious checks, it is still up to the server side to
> guarantee permission checks).
> Gluster bricks, today, have no way to differentiate between an NFS
> request and a FUSE request. The access-control translator must kick in
> for NFS requests only. To bring about differentiation in calls, we
> would have to bump up the protocol version. In the meantime we are
> figuring out a non-intrusive way to overload some other field and yet
> remain backward compatible with non-overloaded clients.
> Avati
>
> On Wed, Jan 19, 2011 at 3:19 AM, Andrew Séguin <aseguo at gmail.com> wrote:
>>
>> Hello (again),
>>
>> I write after making "adjustments" to my hardware (from a single
>> Caviar Green to a RAID array of Caviar Blacks). I would still love
>> feedback from users with smaller systems in my hardware range that
>> store user home directories, as to expectations, sizing, etc. (maybe
>> in response to my mail from yesterday?)
>>
>> One concrete problem I'm trying to resolve at the moment is with file
>> system permissions. In short, a user with gid=X and groups=Y,Z can
>> work in folders that are owned by group X with permissions 770, but
>> cannot read/list the contents of a directory owned by group Y with
>> permissions 770.
>>
>> The clients mount glusterfs directly. I double-checked permissions on
>> the bricks' underlying file systems and via the gluster mount, and
>> made sure there are no ACLs (via getfacl); not sure what else to
>> check. After a bit of googling I found only a mail to this mailing
>> list from December, without an answer
>> (http://www.mail-archive.com/gluster-users at gluster.org/msg04478.html).
>>
>> In the end, I managed to find the bug report
>> http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2183, which
>> (per the comments) is still open in 3.1.2. But maybe it can't be
>> fixed by the Gluster devs at all, if it's really FUSE that is the
>> problem, as I understood from the comments?
>>
>> Two questions then:
>> - Is it really a FUSE issue or a Gluster issue that the additional
>> groups aren't read?
>> - Thinking about switching the clients to NFS: does the built-in NFS
>> service create more or less work on the servers for a distributed
>> replicated system (due to a shift in the responsibility for
>> synchronization)? I'd tend to think yes, but if so, relatively
>> speaking, how much? (At peaks during synchronization, I've seen up to
>> 50% overall CPU use of a dual quad-core Xeon system.)
>>
>> Input is welcome!
>>
>> Andrew
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
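The version-flag idea in proposal (B), combined with Avati's point that the access-control translator should kick in only for non-FUSE requests, could be sketched roughly as follows. This is a minimal illustration only; every name here (ConnState, on_handshake, on_forwarded_request) is hypothetical and does not come from the Gluster sources.

```python
# Sketch of proposal (B): a per-connection flag records which protocol the
# client originally used, and a protocol bump (v.X -> v.X+1) lets servers
# forward that flag between themselves. All names are illustrative.
from dataclasses import dataclass

FUSE, NFS, CIFS = "fuse", "nfs", "cifs"

@dataclass
class ConnState:
    proto_version: int = 0
    origin: str = FUSE          # protocol the client first connected with
    need_perm_check: bool = False

def on_handshake(conn, version, origin):
    """Initial client connection: FUSE has already enforced permissions in
    the kernel, so only non-FUSE origins need server-side checks."""
    conn.proto_version = version
    conn.origin = origin
    conn.need_perm_check = origin != FUSE

def on_forwarded_request(conn, peer_version, origin_flag=None):
    """Server-to-server hop. A version X+1 peer carries the origin flag;
    a legacy version X peer does not, so check conservatively."""
    if peer_version >= 2 and origin_flag is not None:
        conn.need_perm_check = origin_flag != FUSE
    else:
        conn.need_perm_check = True

# Situation 1: client reaches server A over NFS; A forwards to B at v.X+1,
# so B knows to run its permission (access-control) check too.
a, b = ConnState(), ConnState()
on_handshake(a, 1, NFS)
on_forwarded_request(b, 2, a.origin)
print(a.need_perm_check, b.need_perm_check)  # True True
```

An old server that never learned the flag simply falls into the conservative branch and keeps checking permissions, which is what makes the bump backward compatible in this sketch.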