> Yup, pretty common for us. Once we hit ~90% on either of our two
> production clusters (107 TB usable each), performance takes a beating.
>
> I don't consider this a problem, per se. Most file systems (clustered
> or otherwise) are the same. I consider a high water mark for any
> production file system to be 80% (and I consider that vendor
> agnostic), at which time action should be taken to begin clean up.
> That's good sysadminning 101.

I can't think of a good reason for such a steep drop-off in GlusterFS. Sure, performance should degrade somewhat due to fragmentation, but not suddenly. It's not like Lustre, which would do massive preallocation and fall apart when there was no longer enough space to do that. It might be worth measuring average latency at the local-FS level, to see whether the problem is above or below that line.

_______________________________________________
Gluster-users mailing list
Gluster-users@xxxxxxxxxxx
http://supercolony.gluster.org/mailman/listinfo/gluster-users
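One rough way to do that measurement is to time fsync'd writes directly on a brick's backing filesystem, bypassing Gluster, and compare a nearly-full brick against an emptier one. A minimal sketch (the brick path is an assumption; substitute your own):

```python
# Minimal sketch: time small fsync'd writes on a local path to estimate
# brick-level write latency, independent of the Gluster translator stack.
import os
import statistics
import tempfile
import time


def measure_write_latency(path, samples=100, size=4096):
    """Return (mean, p99) latency in milliseconds for fsync'd writes
    of `size` bytes to a temporary file under `path`."""
    data = os.urandom(size)
    latencies = []
    fd, name = tempfile.mkstemp(dir=path)
    try:
        for _ in range(samples):
            start = time.perf_counter()
            os.write(fd, data)
            os.fsync(fd)  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    finally:
        os.close(fd)
        os.unlink(name)
    latencies.sort()
    p99 = latencies[max(0, int(samples * 0.99) - 1)]
    return statistics.mean(latencies), p99


if __name__ == "__main__":
    # "/tmp" is a placeholder; run this against the brick's local
    # filesystem (e.g. the XFS mount backing the brick) on both a
    # nearly-full and a mostly-empty server to compare.
    mean_ms, p99_ms = measure_write_latency("/tmp")
    print(f"mean={mean_ms:.3f} ms  p99={p99_ms:.3f} ms")
```

If the local numbers degrade sharply as the brick fills, the problem is below Gluster (allocator/fragmentation in the local FS); if they stay flat, look above that line in the Gluster stack.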