I had the same problem after a rebalance (going from 2 bricks to 6). It took about a week to get everything straightened out, and I reported the details of what I did to fix it on this mailing list. I dread the next rebalance (going from 6 bricks to 8)! For this rollout, I still have five rebalances remaining before I can declare the GlusterFS migration complete.

To recap:

1. After a rebalance during which one of my nodes (not the "master", from which I initiated the rebalance) had to be rebooted because the rebalance caused too many open files on the system, many files appeared to clients with 000 or 1000 permissions (---------- or T---------). Many of these files could not be chmodded even by root over NFS; attempts returned error 576.

2. I found that in many cases, the affected files had entries on more than two bricks, even though my replica count is 2. The entries had different permissions, and some were zero-length files. Different clients apparently received different entries at different times, so one client might see a file as inaccessible while another could read it without issue.

3. I wrote a script to remove the zero-length files (and their .glusterfs shadow links) and to set permissions properly on all the files. Luckily, all the files on my volume have uniform permissions (files are all 0644, directories are all 0755). A sketch of the approach is below.

4. I ran a find command every ten minutes to find and correct bad permissions, because the script in (3) didn't appear to catch them all for some reason. That command is also sketched below.

5. No more files have appeared with this problem since August 6th. I'm still running the find every day.

6. After the permissions problems appeared to be resolved, I ran a check (sketched below as well) to verify that every file present on the volume before the rebalance was still present afterward. Thankfully, the data appears to have all survived.

The only feedback I got on this mailing list was that nothing was wrong.
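For anyone hitting the same thing, here is a minimal sketch of what the cleanup in (3) amounted to. This is not my original script: the brick path is an assumption, and it must run on each brick server directly, not on a client mount. Be aware that zero-length mode-1000 entries are normally legitimate DHT link files, so only attempt something like this against stale duplicate entries of the kind described in (2), and test on expendable data first.

    #!/bin/bash
    # Hedged sketch of step (3): run on each brick server, not on a client.
    # BRICK is an assumed path -- substitute your own brick root.
    # WARNING: zero-length mode-1000 files are normally legitimate DHT
    # link files; this only targets the stale duplicates described in (2).
    BRICK=/export/brick1

    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
         -type f -size 0 \( -perm 0000 -o -perm 1000 \) -print |
    while read -r entry; do
        inum=$(stat -c %i "$entry")
        # The .glusterfs shadow link is a hard link to the same inode:
        # remove it first, then remove the visible entry.
        find "$BRICK/.glusterfs" -inum "$inum" -delete
        rm -f "$entry"
    done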
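The periodic permission repair in (4) was essentially the following, run from a client mount (the mount point here is an assumption). It is only safe because my volume uses uniform permissions:

    # Hedged sketch of step (4), run from a client mount (path assumed).
    # Safe only on volumes with uniform permissions like mine
    # (all files 0644, all directories 0755).
    MNT=/mnt/glustervol
    find "$MNT" -type f ! -perm 0644 -exec chmod 0644 {} +
    find "$MNT" -type d ! -perm 0755 -exec chmod 0755 {} +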
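And the survival check in (6) boils down to comparing sorted file listings. A sketch, assuming a listing was captured before the rebalance under the hypothetical name prerebalance-files.txt:

    # Hedged sketch of step (6). prerebalance-files.txt is a hypothetical
    # name for a sorted listing captured before the rebalance, e.g. with:
    #   find /mnt/glustervol -type f | sort > prerebalance-files.txt
    MNT=/mnt/glustervol
    find "$MNT" -type f | sort > postrebalance-files.txt
    # comm -23 prints lines unique to the first (pre-rebalance) listing,
    # i.e. anything that disappeared; empty output means all data survived.
    comm -23 prerebalance-files.txt postrebalance-files.txt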
On Wed, Aug 21, 2013 at 11:21 PM, Vijay Bellur <vbellur at redhat.com> wrote:

> On 08/22/2013 09:12 AM, ??? wrote:
>
>> Hi Joe, thank you, but the sticky permissions are exposed to the client
>> side due to a potential bug related to the glusterfs rebalance.
>
> Can you please provide output of ls -l that shows these files after
> rebalance?
>
> -Vijay
>
>> 2013/8/20 Joe Julian <joe at julianfamily.org>
>>
>> Sticky pointers are normal. See the extended attributes on them to
>> see where they point:
>>
>> getfattr -m trusted.* -d $filename
>>
>> To diagnose your client issue, look in your client log.
>>
>> "???" <yongtaofu at gmail.com> wrote:
>>
>> Dear gluster experts,
>>
>> We're running glusterfs 3.3 and we have run into file permission
>> problems after a gluster volume rebalance. Files got sticky
>> permissions (T---------) after the rebalance, which breaks our
>> clients' normal fops unexpectedly.
>> Has anyone seen this issue?
>> Thank you for your help.
>>
>> --
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>>
>> --
>> ???

--
Justin Dossey
CTO, PodOmatic