Thanks. That makes it understandable. I have trash working now.

For the sake of feedback, I am running a single spec file (both server and
client in one process), with two afr'ed bricks in a unify cluster alongside
another non-afr'ed brick, and the namespace is being afr'ed to all hosts
(3 for now). Very easy with gluster! The idea is to do away with NFS and
create a large unified cluster of afr'ed bricks across our network by using
all hosts as both servers and clients.

Wish list: can gluster be made to reload its spec file rather than needing a
full umount/mount? This would allow "on-the-fly" expansion of a unified
cluster when new hosts are brought in or storage is expanded. Probably not
with FUSE?

Dallas

On 2008-08-27, Amar S. Tumballi wrote:
> Sorry for the lack of documentation regarding that.
>
> Your spec file is fine, except that trash takes a directory relative to the
> glusterfs path (i.e., "option trash-dir /.trashcan" will be
> /exports/mastersd-project/.trashcan in your case).
>
> Regards,
>
> 2008/8/27 Dallas Masters <dallas.masters at gmail.com>
>
> > Can someone give an example of using trash in a spec file? I assumed it
> > should be just below the posix translator (as below), but this is not
> > working as I expect.
> >
> > volume mastersd-project-posix
> >   type storage/posix
> >   option directory /exports/mastersd-project
> > end-volume
> >
> > volume mastersd-project-trashcan
> >   type features/trash
> >   option trash-dir /exports/.mastersd-project-trashcan
> >   subvolumes mastersd-project-posix
> > end-volume
> >
> > volume mastersd-project-posix-locks
> >   type features/posix-locks
> >   option mandatory on
> >   subvolumes mastersd-project-trashcan
> > end-volume
> > ...
> >
> > Dallas
> >
> > _______________________________________________
> > Gluster-users mailing list
> > Gluster-users at gluster.org
> > http://zresearch.com/cgi-bin/mailman/listinfo/gluster-users
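
For reference, a minimal sketch of how the trash stanza might look once the
directory is given relative to the glusterfs export path, per Amar's note
above (volume names taken from the quoted spec; the on-disk location would
then resolve to /exports/mastersd-project/.trashcan):

volume mastersd-project-trashcan
  type features/trash
  # relative to the glusterfs path of the subvolume,
  # i.e. /exports/mastersd-project/.trashcan on disk
  option trash-dir /.trashcan
  subvolumes mastersd-project-posix
end-volume

The rest of the stack (storage/posix below, features/posix-locks above)
would stay exactly as in the quoted spec.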