On Tue, 2005-10-04 at 19:51 +0200, Arjan van de Ven wrote:
> Think of it this way: if you have half your disk empty, the filesystem
> can do a proper job of finding non-fragmented space.
> If only 0.0001% is free, it has almost no freedom of choice, resulting
> in "you get it in whatever order some things become free".
> Those are sort of extremes; there's been a bunch of research and the
> outcome was that 5% free seems to be sort of the turning point in this
> respect.
>
> I suspect that research predates the TB-sized volumes, so I don't know
> if it maybe is 1% on such volumes, but then again to some extent the
> freedom needed will scale with the FS size

Obviously I've not done real research on the subject, but wouldn't you
expect it to scale according to the write pattern when you're under low
space pressure?

Or, to put it differently: if I've got a 100MB disk with 5MB free and I'm
trying to write a 4kB file, the fs has a similar degree of freedom to the
case of a 100GB disk with 5GB free where I'm trying to do a 4MB write,
doesn't it? Whereas it obviously has much less freedom on the smaller
disk with a 4MB write, since that single write would consume most of the
free space in one go.

It seems like the necessary free-space percentage varies according to
your data and IO pattern rather than according to the FS size. (Granted,
the requirements of people who have terabytes of storage probably dictate
larger files, larger IOs, and lower relative latencies than those of
people with mere tens of gigs, so I would expect some *correlation* to
filesystem size.)

--
Peter
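
P.S. A minimal sketch putting rough numbers on the three cases above
(Python, and purely illustrative: treating the allocator's freedom as
free space divided by write size is my own simplification, not how any
real filesystem allocator works):

    # Illustrative only: approximate the allocator's "freedom" as the
    # number of distinct places a contiguous write could land, i.e.
    # free space // write size.  Real allocators track extents and
    # locality, not a single ratio like this.

    KB, MB, GB = 1024, 1024 ** 2, 1024 ** 3

    cases = [
        ("100MB disk, 5MB free, 4kB write", 5 * MB, 4 * KB),
        ("100GB disk, 5GB free, 4MB write", 5 * GB, 4 * MB),
        ("100MB disk, 5MB free, 4MB write", 5 * MB, 4 * MB),
    ]

    for label, free, write in cases:
        print("%s: ~%d candidate placements" % (label, free // write))

The first two cases come out identical (~1280 candidate placements)
while the third collapses to a single fit, which is the intuition I was
reaching for.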