RE: problem with nfs latency during high IO

Yes, "may have deleterious effects on workload performance", is true.  Especially the "may" part.  Many applications simply open a file, write it out, and then close it.  The aggressive caching really only helps when an application is working within an open file, where the working set fits within the cache, for an extended period.  Otherwise, the client may as well start flushing as soon as it can generate full WRITE requests and as long as it is doing that, it may as well throttle the application to match the rate in which pages can be cleaned.

This will help with overall performance by not tying down memory to hold dirty pages which will not get touched again until they are clean.  It can also greatly help the "ls -l" problem.
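
A quick way to watch this on a client while a big write is running is to poll the dirty-page counters in /proc/meminfo.  This is just an illustration; the field names below are the standard /proc/meminfo counters:

    # Dirty        = file data not yet flushed out
    # Writeback    = data currently being written back
    # NFS_Unstable = NFS WRITEs sent but not yet committed on the server
    watch -n 1 'grep -E "^(Dirty|Writeback|NFS_Unstable):" /proc/meminfo'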

		ps


-----Original Message-----
From: linux-nfs-owner@xxxxxxxxxxxxxxx [mailto:linux-nfs-owner@xxxxxxxxxxxxxxx] On Behalf Of Chuck Lever
Sent: Wednesday, March 16, 2011 9:25 AM
To: Judith Flo Gaya
Cc: linux-nfs@xxxxxxxxxxxxxxx
Subject: Re: problem with nfs latency during high IO


On Mar 16, 2011, at 7:45 AM, Judith Flo Gaya wrote:

> Hello Chuck,
> 
> On 03/15/2011 11:10 PM, Chuck Lever wrote:
>> 
>> On Mar 15, 2011, at 5:58 PM, Judith Flo Gaya wrote:
>> 
>>> 
>>> I saw that the value was 20, but I don't know the impact of changing the number by units or tens... Should I test with 10, or is that too much? I assume that the behavior will change immediately, right?
>> 
>> I believe the dirty ratio is the percentage of physical memory that can be consumed by one file's dirty data before the VM starts flushing its pages asynchronously.  Or it could be the amount of dirty data allowed across all files... one file or many doesn't make any difference if you are writing a single very large file.
>> 
>> If your client memory is large, a small number should work without problem.  One percent of a 16GB client is still quite a bit of memory.  The current setting means you can have 20% of said 16GB client, or 3.2GB, of dirty file data on that client before it will even think about flushing it.  Along comes "ls -l" and you will have to wait for the client to flush 3.2GB before it can send the GETATTR.
>> 
>> I believe this setting does take effect immediately, but you will have to put the setting in /etc/sysctl.conf to make it last across a reboot.
>> 
> 
> I made some tests with a value of 10 for vm.dirty_ratio, and indeed the ls hang time has decreased a lot, from a 3 min average to 1.5 min.
> I was wondering, what is the minimum number that is safe to use? I'm sure that you have already dealt with the side effects/collateral damage of this change; I don't want to fix one problem by creating another.

As I said before, you can set it to 1, and that will mean background flushing kicks in at 1% of your client's physical memory.  I think that's probably safe nearly anywhere, but it may have deleterious effects on workload performance.  You need to test various settings with your workload and your clients to see which one works best in your environment.
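
Concretely, a minimal sketch of how to experiment with this.  The values are illustrative only, and vm.dirty_background_ratio, the companion threshold at which background writeback starts, is shown just for completeness:

    # as root: check the current thresholds
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # lower them at runtime; the change takes effect immediately
    sysctl -w vm.dirty_ratio=10
    sysctl -w vm.dirty_background_ratio=5

    # persist the settings across reboots
    echo "vm.dirty_ratio = 10" >> /etc/sysctl.conf
    echo "vm.dirty_background_ratio = 5" >> /etc/sysctl.conf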

> Regarding the modification of the inode.c file, what do you think the next step will be? And how can I apply it to my system? Should I modify the file myself and recompile the kernel to have the change applied?

I recommend that you file a bug against Fedora 14.  See http://bugzilla.redhat.com/ .

-- 
Chuck Lever
chuck[dot]lever[at]oracle[dot]com




--
To unsubscribe from this list: send the line "unsubscribe linux-nfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


