During the drain failure (described elsewhere; it may not be relevant) one of the other disk arrays unexpectedly died. This system has no replication. I stopped gluster on the host, rebuilt a new array (goodbye data), restarted gluster, and then removed the dead brick. To add it back I had to rename the mount point (even shutting gluster down completely on all hosts didn't help, and yes, I was using the setfattr procedure), but OK, it is back. Empty, but that's life.

However, when I remount the filesystem on one of the clients, I get a mix of success and failure, as shown below:

# ls -l /data/uwa/jvansanten/sim
ls: cannot access /data/uwa/jvansanten/sim/2011: Invalid argument
total 265
drwxrwxr-x 4 jvansanten jvansanten 45056 Jul 19 11:51 2010
?????????? ? ?          ?              ?            ? 2011
drwxrwxr-x 4 jvansanten jvansanten 45056 Sep 15 03:10 dags
drwxrwxr-x 2 jvansanten jvansanten 45056 Sep 15 04:13 dags.cobaltgpu
drwxrwxr-x 2 jvansanten jvansanten 45056 Sep 15 05:06 dags.muongun
drwxrwxr-x 3 jvansanten jvansanten 45056 Jul 18 22:22 generated
drwxrwxr-x 5 jvansanten jvansanten 45056 Jul 19 11:51 logs
-rw-rw-r-- 1 jvansanten jvansanten   153 Jul 26 21:13 sweep.sh

Is this because some state from the old, defunct brick was saved somewhere? (The fact that I had to change the mount point is suggestive.) Is there anything I can do about it?

Thanks,
James Bellinger
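
P.S. In case the exact sequence matters, the remove/clean/re-add steps were roughly the sketch below. The volume name (uwa), host (host1), and brick paths are placeholders, not my real ones, and this is the standard setfattr cleanup for reusing a brick path rather than a verbatim transcript of my session:

  # drop the dead brick from the volume (newer gluster releases
  # want an explicit 'force' here)
  gluster volume remove-brick uwa host1:/data/brick1 force

  # clear the old gluster metadata from the brick directory so the
  # path can (in theory) be reused -- the setfattr procedure
  setfattr -x trusted.glusterfs.volume-id /data/brick1
  setfattr -x trusted.gfid /data/brick1
  rm -rf /data/brick1/.glusterfs

  # in practice, add-brick only succeeded after I renamed the
  # mount point
  gluster volume add-brick uwa host1:/data/brick1a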