MDS network utilization

Hi,

I am trying to do some testing of the SPNFS NFS4.1 implementation and I came across the following problem. When copying a new file from a client to the NFS4.1 cluster, the network utilization of the MDS is unexpectedly high. The test case and test bed are as follows:

1 MDS, 3 DSes and 1 client, all running FC13 with the 2.6.34.7-58.pnfs35.2010.09.14.fc13.i686.PAE kernel and nfs-utils-1.2.2-4.pnfs.fc13.i686.

I copy a new 4GB file (dd if=/dev/zero of=/nfs41/zero.file5 bs=4M count=1000) to the cluster and measure the number of bytes transferred on each server. While the DSes each show about 1.4GB of incoming traffic and the client shows 4.2GB of outgoing traffic (which is reasonable), the MDS numbers (which I expected to be negligible) show 5.1G in and 4.5G out -- it seems as if the data flows through the MDS.
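By In/Out I mean bytes received/transmitted on each node's interface over the duration of the copy. A before/after snapshot of the kernel byte counters is one way to obtain these figures; a minimal sketch, with eth0 standing in for the actual interface name:

  IF=eth0
  RX0=$(cat /sys/class/net/$IF/statistics/rx_bytes)
  TX0=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  # ... run the dd on the client, then take a second snapshot on every node:
  RX1=$(cat /sys/class/net/$IF/statistics/rx_bytes)
  TX1=$(cat /sys/class/net/$IF/statistics/tx_bytes)
  echo "In:  $(( (RX1 - RX0) / 1024 / 1024 )) MB"
  echo "Out: $(( (TX1 - TX0) / 1024 / 1024 )) MB"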

Furthermore, when rewriting the same 4GB file with another 4GB of data, the numbers are even worse:
MDS: 8.0G in / 4.7G out
DSes: 1.5G in / 1.3G out
Client: ~0.1G in / 4.2G out
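If it helps to narrow this down, one way to check whether the file payload really passes through the MDS would be to capture the client-to-MDS NFS traffic on the MDS itself while the copy runs; a rough sketch (eth0 and CLIENT_IP are placeholders):

  # on the MDS, started just before the dd on the client:
  tcpdump -i eth0 -s 0 -w /tmp/client-mds.pcap host CLIENT_IP and port 2049 &
  # ... run the copy, then stop the capture:
  kill %1
  ls -lh /tmp/client-mds.pcap   # roughly the NFS bytes exchanged with the client

If the capture grows to roughly the size of the copied data, the payload is going through the MDS; if it stays small, only metadata/layout traffic is on that path.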

I might be doing something wrong, but I don't see it. My goal is to run some preliminary performance tests of NFS4.1, but maybe it is not yet the time for that?

Cheers
Jiri Horky


$ cat /etc/spnfsd.conf
[General]

Verbosity = 1
Stripe-size = 8192
Dense-striping = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
DS-Mount-Directory = /pnfs

[DataServers]
NumDS = 3
....
.....

