Re: [PATCH v6 0/7] fs/dcache: Track & limit # of negative dentries

On 07/11/2018 06:21 AM, Michal Hocko wrote:
> On Tue 10-07-18 12:09:17, Waiman Long wrote:
>> On 07/10/2018 10:27 AM, Michal Hocko wrote:
>>> On Mon 09-07-18 12:01:04, Waiman Long wrote:
>>>> On 07/09/2018 04:19 AM, Michal Hocko wrote:
> [...]
>>>>> percentage has turned out to be a really wrong unit for many tunables
>>>>> over time. Even 1% can be just too much on really large machines.
>>>> Yes, that is true. Do you have any suggestion as to what kind of
>>>> unit should be used? I can scale down the unit to 0.1% of the system
>>>> memory. Alternatively, one unit could be 10k per CPU thread, so a
>>>> 20-thread system would correspond to 200k, etc.
>>> I simply think this is a strange user interface. How much is a
>>> reasonable number? How can any admin figure that out?
>> Without the optional enforcement, the limit is essentially just a
>> notification mechanism where the system signals that something is
>> going wrong and the system administrator needs to take a look. So it
>> is perfectly OK if the limit is set high enough that normal operation
>> won't come close to using that many negative dentries. The goal is to
>> prevent negative dentries from consuming a significant portion of
>> system memory.
> So again. How do you tell the right number?

I guess it will be more of a trial-and-error kind of adjustment, as the
right figure will depend on the kind of workloads being run on the
system. Unless the enforcement option is turned on, setting a limit
that is too small won't have much impact other than a slight
performance drop (from the invocation of the slowpaths) and warning
messages in the console. Whenever a non-zero value is written into
"neg-dentry-limit", an informational message will be printed showing
what the actual negative dentry limit will be. That can be compared
against the current negative dentry count (the 5th number in
"dentry-state") to see if there is enough safety margin to avoid
false-positive warnings.
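
For example, a quick userspace margin check could look like the
following (an untested sketch of my own, not part of the patch; the
1,000,000 dentry limit below is just an illustrative value):

	#include <stdio.h>

	int main(void)
	{
		long nr_dentry, nr_unused, age_limit, want_pages, nr_negative;
		long limit = 1000000;	/* illustrative limit, in dentries */
		FILE *f = fopen("/proc/sys/fs/dentry-state", "r");

		if (!f) {
			perror("dentry-state");
			return 1;
		}
		if (fscanf(f, "%ld %ld %ld %ld %ld", &nr_dentry, &nr_unused,
			   &age_limit, &want_pages, &nr_negative) != 5) {
			fprintf(stderr, "unexpected dentry-state format\n");
			return 1;
		}
		fclose(f);

		/* 5th field is the current negative dentry count */
		printf("negative dentries: %ld (%.1f%% of limit %ld)\n",
		       nr_negative, 100.0 * nr_negative / limit, limit);
		return 0;
	}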

>
>> I am going to reduce the granularity of each unit to 1/1000 of the
>> total system memory so that for large systems with TBs of memory, a
>> smaller amount of memory can be specified.
> It is just a matter of time for this to be too coarse as well.

The goal is to keep negative dentries from consuming too much memory
while also making sure the limit won't be reached by regular daily
activities. So a limit in units of 1/1000 of total system memory
should be good enough even on a large-memory system where the absolute
number is really big.
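
To give a concrete sense of scale (my own back-of-the-envelope
arithmetic, assuming a dentry takes roughly 192 bytes on x86-64; the
real size varies with kernel version and config), on a 1 TB machine
one unit works out to about 1 GB, i.e. several million dentries:

	#include <stdio.h>

	int main(void)
	{
		unsigned long long total_mem = 1ULL << 40;	/* 1 TB example */
		unsigned long long unit = total_mem / 1000;	/* one limit unit */
		unsigned long dentry_size = 192;		/* assumed size */

		printf("one unit = %llu MB ~= %llu negative dentries\n",
		       unit >> 20, unit / dentry_size);
		return 0;
	}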

Cheers,
Longman

