Is there anything I can do to repair this once it happens? Renaming the bad directory and recopying those files onto the volume seems to do it, but is not ideal of course.

What's the 3.3 -> 3.4 upgrade path like? Is it possible to upgrade one machine at a time, or will I need to set up an entire new volume and transfer my data?

- brian

On 9/25/13 1:46 AM, Lalatendu Mohanty wrote:
> On 09/25/2013 02:45 AM, Brian Cipriano wrote:
>> Hi all -
>>
>> We're running a 3-node distributed volume, using gluster 3.3.1.
>>
>> We're seeing a rare but repeated issue where files are written to the volume and appear to be written OK, but are not accessible via NFS or the gluster client. These files do appear when we inspect the bricks directly.
>>
>> For example, the path in question is
>> /site/927/Volumes/zero/job/ZY/client_projects/site/58/project.
>>
>> Via NFS:
>>
>> $ ls /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58/project/
>> ls: cannot access /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58/project/: Invalid argument
>> $ ls /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58
>> ls: cannot access /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58/project: Invalid argument
>> project
>> $ ls /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58/project/
>> workspace.mel
>>
>> At first the directory doesn't show up at all; eventually it displays workspace.mel.
>>
>> On gluster01, via the gluster client:
>>
>> $ ls /projects/site/927/Volumes/zero/job/ZY/client_projects/site/58/project/
>> workspace.mel
>>
>> Many more files were written to that directory. When I inspect the bricks I see them (brick0002 is one of the bricks in this volume):
>>
>> $ ls /gluster/brick0002/site/927/Volumes/zero/job/ZY/client_projects/site/58/project/sourceimages/
>> ball  environment  lights  paper_package.jpg  pinbal_upper.jpg  room_v317projection_capture  scratches
>>
>> This has happened for a few (2-5) directories, but most directories are working fine. This is a fairly high-activity volume with thousands of files written per day, so the rate of incidence is not very high, but it is still concerning.
>>
>> Any idea what's going on, any suggestions? Any more info I can provide?
>>
>> Thanks,
>>
> Sounds like a bug. I've never seen this issue with gluster 3.4.
>
> -Lala
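
Regarding the rename-and-recopy workaround mentioned at the top of this message, it looks roughly like the following. This is only a sketch, not an exact recipe: it assumes the unreachable directory is moved aside directly on each brick that still holds it, and that the files are then recopied through a normal client mount; <brick-root> and <dir> are placeholders, not the real paths.

# <brick-root> and <dir> are placeholders; repeat the rename on each brick that holds the bad directory
$ mv /<brick-root>/<dir> /<brick-root>/<dir>.stale
# recreate the directory through the client mount, then recopy the files onto the volume
$ mkdir -p /projects/<dir>
$ cp -a /<brick-root>/<dir>.stale/. /projects/<dir>/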