2GB of Waste? How can it be?


 




On Tue, Feb 05, 2002 at 01:26:29AM -0000, matt dickinson wrote:
> <snip>
> > That requires changing very sensitive data, and could easily
> > result in data corruption. You'd also increase fragmentation.
> </snip>
>
> Is it possible to defrag(ment)?

Utilities exist for such a purpose (look on http://freshmeat.net), but
little purpose exists for them. Unix filesystems by their very nature
resist fragmentation very strongly. It is not uncommon for even a very full,
very heavily used filesystem to keep its fragmentation under 15%. Also,
because of advanced read-ahead, disk buffers, and file caches, Unix
tends to be much less affected by fragmentation than, say, M$ Windoze of
any variety.
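
If you want a rough number anyway, here is a sketch of one way to check,
assuming an ext2/ext3 filesystem and the stock e2fsprogs tools (the device
name /dev/hda1 is just an example; run the check on an unmounted or
read-only filesystem so nothing is modified):

    # Unmount (or remount read-only) so the check sees a quiet filesystem.
    umount /dev/hda1

    # -f forces a full check, -n opens the device read-only and answers
    # "no" to every prompt; the summary line at the end reports the
    # percentage of non-contiguous (i.e. fragmented) files.
    e2fsck -f -n /dev/hda1

On a typical system that percentage stays low, which is why nobody bothers
defragmenting.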

--
-- Skylar Thompson (skylar@attglobal.net)





