I'm currently experimenting with Gluster in what may be an unusual use case. We have a single system with 45 cheap SATA drives which we use for disk-to-disk backups only. For the past two years we have been running this system with Linux software RAID, with the drives organized as multiple RAID 5/6/10 sets of 5 drives each. This has worked OK, but we've suffered through enough simultaneous multi-drive failures to prompt me to explore alternatives to RAID. Yes, I know, that's what we get for using cheap drives.

What I'm experimenting with now is creating Gluster distributed-replicated volumes on some of these drives. At this point I am using 10 drives configured as shown here:

    Volume Name: volume1
    Type: Distributed-Replicate
    Status: Started
    Number of Bricks: 5 x 2 = 10
    Transport-type: tcp
    Bricks:
    Brick1: host:/gluster/brick01
    Brick2: host:/gluster/brick06
    Brick3: host:/gluster/brick02
    Brick4: host:/gluster/brick07
    Brick5: host:/gluster/brick03
    Brick6: host:/gluster/brick08
    Brick7: host:/gluster/brick04
    Brick8: host:/gluster/brick09
    Brick9: host:/gluster/brick05
    Brick10: host:/gluster/brick10
    Options Reconfigured:
    auth.allow: 127.0.0.1,10.10.10.10

host is 10.10.10.10, and host is both the Gluster server and client. For the most part this is working fine so far.

The problem I have run into several times now is that when a drive fails and does not come back online after the system is rebooted, the volume comes up without that brick. Gluster then happily writes to the missing brick's mount point (which at that point is just an empty directory on the root filesystem), thereby eventually filling up the root filesystem. Once the root filesystem is full and the processes writing to Gluster space are hung, I have not been able to recover from this state without rebooting.

Is there any way to avoid this problem of Gluster writing to a brick path that isn't actually populated by the intended brick filesystem? Does Gluster not create any sort of signature or metadata that indicates whether or not a path is really a Gluster brick? I realize that ultimately I should get failed bricks replaced as soon as possible, but there may be times when I want to keep running for a while with a "degraded" volume, if you will.

All ideas, suggestions, comments, and criticisms are welcome.

Thanks,
Todd
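
P.S. In case the exact layout matters: reconstructing from the volume info above (I don't have the original command in my history), the volume would have been created with something along these lines, so that brick01/brick06, brick02/brick07, etc. form the replica pairs:

    gluster volume create volume1 replica 2 transport tcp \
        host:/gluster/brick01 host:/gluster/brick06 \
        host:/gluster/brick02 host:/gluster/brick07 \
        host:/gluster/brick03 host:/gluster/brick08 \
        host:/gluster/brick04 host:/gluster/brick09 \
        host:/gluster/brick05 host:/gluster/brick10
    gluster volume set volume1 auth.allow 127.0.0.1,10.10.10.10
    gluster volume start volume1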
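
P.P.S. The crude workaround I've been considering, in the absence of anything built in, is a boot-time sanity check that refuses to leave the volume running unless every brick directory is a real mount point. Something like the untested sketch below (brick paths are my layout; the exact "stop" invocation may need --mode=script or similar to skip the confirmation prompt):

    #!/bin/sh
    # Check that each brick directory is actually a separate mounted
    # filesystem; if any are bare directories on the root filesystem,
    # stop the volume instead of letting Gluster write into them.
    missing=0
    for i in $(seq -w 1 10); do
        b=/gluster/brick$i
        if ! mountpoint -q "$b"; then
            echo "$b is not a mounted filesystem" >&2
            missing=1
        fi
    done
    if [ "$missing" -ne 0 ]; then
        gluster --mode=script volume stop volume1 force
        exit 1
    fi

I'd rather have Gluster itself recognize that the path isn't the intended brick, though, which is really what I'm asking about.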