With RHL 7.3 and later, an NFS install will loop-mount image files rather
than downloading and then unpacking them. I would imagine the same is done
for some of the /RedHat/base image files, which are usually not small. I
know for sure that whole ISO images can be handled this way too.
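As a rough illustration of the kind of loop mount the installer performs
(the ISO path and mount point here are just made-up examples):

    # loop-mount the install ISO read-only and peek at its contents
    mkdir -p /mnt/rhl-iso
    mount -o loop,ro /path/to/disc1.iso /mnt/rhl-iso
    ls /mnt/rhl-iso/RedHat/base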
I mostly do NFS installations as well. Since an NFS mount behaves much like
a local file system, you cut out some of the time spent retrieving each file
before the installer can even decide what to do with it.
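For what it's worth, serving an install tree for this is just an ordinary
read-only NFS export; something along these lines (the path and network are
placeholders, adjust to taste):

    # /etc/exports on the install server
    /export/rhl73   192.168.1.0/255.255.255.0(ro,sync)

    # re-export after editing
    exportfs -ra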
Some general throughput troubleshooting:
When experiencing network slowdowns, first verify that you are actually
getting the speed you should be getting. Create a large file on one of your
boxes, then use wget to download it from another; wget reports a nice
average throughput figure. You might be getting 10 Mbit when you think you
are supposed to be getting 100 Mbit. Then it's time to check the hub/switch,
cables, NIC driver, NIC, and anything else between you and the destination
system, or to repeat the test from a different box to get another
perspective.
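A rough sketch of that test, assuming the file is dropped somewhere an
existing web server already serves (the path and hostname are made up):

    # on the server: create a ~100 MB test file
    dd if=/dev/zero of=/var/www/html/bigfile bs=1M count=100

    # on the client: fetch it, discard the data, and note wget's average rate
    wget -O /dev/null http://server.example.com/bigfile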
If wget shows around 850 KB/s, you are likely on a 10 Mbit connection. If it
shows around 9-12 MB/s, you are likely on a 100 Mbit connection.
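As a quick sanity check on those figures:

    100 Mbit/s / 8 bits per byte = 12.5 MB/s raw, so ~9-12 MB/s after overhead
     10 Mbit/s / 8 bits per byte = 1.25 MB/s raw, so ~850 KB/s after overhead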
Sincerely,
Richard Black
John M Beamon wrote:
I've done most of my installs over NFS. I find that during FTP and
HTTP installs, the installer will first retrieve each package to the
local system before extracting it. I don't find this to be the case
in network-mounted installations. I've done HTTP and FTP, and they're more
cooperative in terms of firewall-friendly ports and simplicity of setup, but
I find they're both slower. If you're going to install a dozen boxes with
software development RPMs, you're going to have all those extra, large
packages moved across the network individually, plus all the portmap and
file locking and whatnot. It's not "bad", but I do tend to find NFS is
fastest.