On Sun, 8 Jan 2006, Bill Rugolsky Jr. wrote:
> Shrinking, on the other hand, presents a host of problems, including
> the need to compact the data below the new boundary, and deal with
> current users of the filesystem that may, e.g., have a region of
> some file memory-mapped, or may have I/O in flight.
What about the case where you're shrinking the filesystem to a point
where nothing is, or has ever been, used or mapped, and where no data
needs to be compacted? E.g., a 100GB filesystem where only 50GB has
_ever_ been used.
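For context, the *offline* shrink path already handles both cases: resize2fs will relocate any blocks that sit beyond the new boundary before truncating, which is exactly the compaction step that is hard to do online. A minimal sketch on a file-backed image (stand-in for a real partition; the /tmp path and sizes are illustrative):

```shell
# Create a 100 MiB file-backed ext4 image; -F forces mkfs on a
# regular file, -b 4096 pins the block size so the math below holds.
truncate -s 100M /tmp/shrink-demo.img
mkfs.ext4 -q -F -b 4096 /tmp/shrink-demo.img

# resize2fs refuses to shrink without a clean fsck first.
e2fsck -f -p /tmp/shrink-demo.img

# Offline shrink to 50 MiB; resize2fs moves any in-use blocks
# above the new boundary down before shrinking.
resize2fs /tmp/shrink-demo.img 50M

# Verify: 50 MiB / 4 KiB blocks = 12800 blocks.
dumpe2fs -h /tmp/shrink-demo.img 2>/dev/null | grep 'Block count'
```

If the filesystem really has never used blocks past the new boundary, the relocation pass finds nothing to move; the online problem is only the in-flight I/O and mapped regions mentioned above.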
But nobody ever bothered to write the userland code for an online
defragmenter.
This is a major advantage Microsoft has with NTFS over Linux :-(
-Dan
--
fedora-test-list mailing list
fedora-test-list@xxxxxxxxxx
To unsubscribe:
https://www.redhat.com/mailman/listinfo/fedora-test-list