Not an issue for us; we're at 92% on an 800TB distributed volume, 16 bricks spread across 4 servers. Lookups can be a bit slow, but raw I/O hasn't changed.

On Tue, 2014-10-07 at 09:16 +1000, Dan Mons wrote:
> On 7 October 2014 08:56, Jeff Darcy <jdarcy@xxxxxxxxxx> wrote:
> > I can't think of a good reason for such a steep drop-off in GlusterFS.
> > Sure, performance should degrade somewhat due to fragmenting, but not
> > suddenly. It's not like Lustre, which would do massive preallocation
> > and fall apart when there was no longer enough space to do that. It
> > might be worth measuring average latency at the local-FS level, to see
> > if the problem is above or below that line.
>
> Happens like clockwork for us. The moment we get alerts saying the
> file system has hit 90%, we get a flood of support tickets about
> performance.
>
> It happens to a lesser degree on standard CentOS NAS units running XFS
> we have around the place. But again, I see the same sort of thing on
> any file system (vendor supplied, self-built, OS and FS agnostic).
> And yes, it's measurable (Munin graphs show it off nicely).
>
> -Dan
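For anyone wanting to act on Jeff's suggestion, one simple approach is to time the same metadata operation directly against a brick's local filesystem and then through the Gluster mount of the same volume; if the brick stays fast past 90% full but the mount does not, the slowdown sits above the local-FS line. Below is a minimal Python sketch of that idea. BRICK_PATH and MOUNT_PATH are placeholder paths, not anything from this thread; substitute your own layout.

#!/usr/bin/env python3
# Sketch: compare average stat() latency on a brick's local filesystem
# versus the FUSE mount of the same Gluster volume, to see whether a
# slowdown is above or below the local-FS layer.
import os
import time

BRICK_PATH = "/data/brick1/vol"   # hypothetical local brick path
MOUNT_PATH = "/mnt/glustervol"    # hypothetical FUSE mount of the volume
SAMPLES = 1000

def avg_stat_latency_ms(path, samples=SAMPLES):
    """Average time (ms) per stat() over the first entries of a directory."""
    entries = os.listdir(path)[:100] or ["."]
    start = time.perf_counter()
    for _ in range(samples):
        for name in entries:
            os.stat(os.path.join(path, name))
    elapsed = time.perf_counter() - start
    return (elapsed / (samples * len(entries))) * 1000.0

if __name__ == "__main__":
    print("local FS (brick): %.3f ms per stat" % avg_stat_latency_ms(BRICK_PATH))
    print("gluster mount:    %.3f ms per stat" % avg_stat_latency_ms(MOUNT_PATH))

Run it on one of the servers hosting a brick, once with the volume below the problem threshold and again above 90%, and the two numbers should show which layer the extra latency is coming from.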