NFS server per-mount-point IOPS limit

Hi,

In order to get performance stats for the Linux NFS server and client, I ran 
the following experiments:

Linux NFS server (NFSv3): exporting a directory on an ext3 filesystem.
Linux NFS client: 64 I/O threads doing I/O against the NFS mount point.

I am doing random reads on a 30GB sparse (empty) file, to figure out the 
maximum IOPS this I/O stack supports.
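
Roughly, each I/O process corresponds to a fio job like the following (the 
file name and parameters here are illustrative, not my exact config):

    # create the 30GB sparse test file on the mount (example path)
    truncate -s 30G /mnt/exportMount1/sparse30g

    # one I/O process: 64 threads doing 4k random reads
    fio --name=randread --filename=/mnt/exportMount1/sparse30g \
        --rw=randread --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=1 --numjobs=64 --thread --group_reporting \
        --time_based --runtime=60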

I observed 30K random-read IOPS with a single client I/O process. The NFS 
server is configured to spawn 64 nfsd threads, but not all of them are busy 
in this case.
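
Server-side nfsd thread usage can be checked with something like:

    # on the server: the 'th' line reports the nfsd thread count
    # followed by thread-usage statistics
    grep th /proc/net/rpc/nfsd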

To max out NFS server thread usage, I spawned another I/O process on the 
client doing I/O through the same mount point, but the cumulative read IOPS 
stays the same (15K IOPS per I/O process).

But when I mount the same export at a second mount point on the same client, 
the IOPS do scale, i.e.:

The NFS server exports the /share/export directory.

The NFS client mounts it at two mount points:

    # MountPoint1
    mount -t nfs -o vers=3 server:/share/export /mnt/exportMount1
    # MountPoint2
    mount -t nfs -o vers=3 server:/share/export /mnt/exportMount2

When IOProcess1 does I/O on MountPoint1 while IOProcess2 does I/O on 
MountPoint2 (reads in both cases), the IOPS scale and the NFS server's CPU 
usage doubles.

But if the two I/O processes use the same mount point, the IOPS for each 
process drops to half.

Please advise whether I am hitting some per-mount-point limit in this case.
Also, how can I figure out whether the bottleneck is on the NFS client side 
or the NFS server side?
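
For reference, the counters I know how to sample are the per-mount stats on 
the client and the RPC stats on the server (nfsiostat and nfsstat come from 
nfs-utils; the mount paths are the examples above):

    # client side: per-mount-point read/write ops/s, RTT and exec times,
    # sampled every 5 seconds
    nfsiostat 5 /mnt/exportMount1 /mnt/exportMount2

    # client side: raw per-mount RPC counters
    cat /proc/self/mountstats

    # server side: RPC call and nfsd statistics
    nfsstat -s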

Thanks & Regards,
Deepak
