Re: Tuning NFS client write pagecache

We typically use 100Mb/1GbE networking, and the server storage is
SATA/SCSI. For IOPS, I have not really measured the NFS client
performance, so I can't give you an exact number; we use a write size
of 4k/8k, and the MTU of the link is 1500 bytes.

But we got noticeably uniform throughput (no bursty traffic) and
better overall performance when we hand-coded the NFS RPC operations
(including MOUNT to get the root file handle) and sent them to the
server ourselves, writing all the data at the NFS interface (a sort of
direct NFS from user space), without going through the kernel-mode VFS
interface of the NFS client driver. I was just wondering how to get
the same performance from the native NFS client...
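
For what it's worth, the closest thing I know of on the native client
is opening the file with O_DIRECT, so writes go straight to the server
instead of being staged in the client pagecache. This is only a rough,
untested sketch; the path, block count and sizes below are made-up
placeholders, and O_DIRECT generally wants aligned buffers:

/* Stream aligned 8k writes to a file on an NFS mount, bypassing the
 * client pagecache via O_DIRECT (hypothetical path and sizes). */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const size_t blk = 8192;            /* matches our 4k/8k write size */
    void *buf;
    int fd, i;

    if (posix_memalign(&buf, 4096, blk)) {  /* O_DIRECT wants alignment */
        perror("posix_memalign");
        return 1;
    }
    memset(buf, 'A', blk);

    fd = open("/mnt/nfs/stream.dat",        /* hypothetical NFS path */
              O_WRONLY | O_CREAT | O_DIRECT, 0644);
    if (fd < 0) {
        perror("open(O_DIRECT)");
        return 1;
    }

    for (i = 0; i < 1024; i++) {        /* stream blocks straight out */
        if (write(fd, buf, blk) != (ssize_t)blk) {
            perror("write");
            break;
        }
    }

    close(fd);
    free(buf);
    return 0;
}

This still leaves the write sizing and scheduling to the application
rather than the pagecache, which is exactly the kind of control I was
talking about.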

It's still a matter of opinion how much control we should give to
applications and how much the OS should control!

As we test more, I can send you more test data about this.

In the end, applications will end up re-inventing the wheel to suit
their special needs :-)

How does Oracle's directNFS deal with this?

Thanks, Chuck, for your thoughts!

On Wed, Aug 11, 2010 at 9:35 PM, Chuck Lever <chuck.lever@xxxxxxxxxx> wrote:
> [ Trimming CC: list ]
>
> On Aug 10, 2010, at 8:09 PM, Peter Chacko wrote:
>
>> Chuck,
>>
>> OK, I will then check for the command-line option to request DIO
>> mode for NFS, as you suggested.
>>
>> Yes, otherwise I fully understand the need for client caching for
>> desktop-bound or any general-purpose applications... AFS and CacheFS
>> are all good products in their own right, but the only problem in
>> such cases is cache coherence (I mean that other application clients
>> are not guaranteed to get the latest data on their reads), as NFS
>> honors only open-to-close session semantics.
>>
>> The situation I have is this:
>>
>> We have a data protection product that has agents on individual
>> servers and a storage gateway (which is an NFS-mounted box). The only
>> purpose of this box is to store all data, in a streaming write mode,
>> for all the data coming from tens of agents; essentially this acts
>> like a VTL target. From this node to the NFS server node, there is
>> no data travelling in the reverse path (i.e. from the client cache
>> to the application).
>>
>> This is the only use we put NFS to.
>>
>> For recovery, it's again a streamed read; we never update the data
>> we read, or re-read updated data. This is a special, single-function
>> box.
>>
>> What do you think are the best mount options for this scenario?
>
> What are the data rates (both IOPS and data throughput) of the read and write cases?  How large are application read and write ops, on average?  What kind of networking is deployed?  What are the server and clients (hardware and OS)?
>
> And, I assume you are asking because the environment is not performing as you expect.  Can you detail your performance issues?
>
> --
> Chuck Lever
> chuck[dot]lever[at]oracle[dot]com
>
>
>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

