Re: [Bug 196157] New: 100+ times slower disk writes on 4.x+/i386/16+RAM, compared to 3.x

On Fri 23-06-17 10:44:36, Alkis Georgopoulos wrote:
> On 23/06/2017 10:13 AM, Michal Hocko wrote:
> >On Thu 22-06-17 12:37:36, Andrew Morton wrote:
> >
> >What is your dirty limit configuration. Is your highmem dirtyable
> >(highmem_is_dirtyable)?
> >
> >>>This issue happens on systems with any 4.x kernel, i386 arch, 16+ GB RAM.
> >>>It doesn't happen if we use 3.x kernels (i.e. it's a regression) or any 64bit
> >>>kernels (i.e. it only affects i386).
> >
> >I remember we've had some changes in the way how the dirty memory is
> >throttled and 32b would be more sensitive to those changes. Anyway, I
> >would _strongly_ discourage you from using 32b kernels with that much of
> >memory. You are going to hit walls constantly and many of those issues
> >will be inherent. Some of them less so but rather non-trivial to fix
> >without regressing somewhere else. You can tune your system somehow but
> >this will be fragile no matter what.
> >
> >Sorry to say that but 32b systems with tons of memory are far from
> >priority of most mm people. Just use 64b kernel. There are more pressing
> >problems to deal with.
> >
> 
> 
> 
> Hi, I'm attaching below all my settings from /proc/sys/vm.
> 
> I think that the regression also affects 4 GB and 8 GB RAM i386 systems,
> just not as dramatically; copies there appear only 2-3 times slower than
> they used to be with 3.x kernels.

If the regression shows up on 4-8GB 32b systems as well, then the priority
for fixing it would certainly be much higher.

> Now I don't know the kernel internals, but if disk copies show up to be 2-3
> times slower, and the regression is in memory management, wouldn't that mean
> that the memory management is *hundreds* of times slower, to show up in disk
> writing benchmarks?

Well, it is hard to judge what the real problem is here, but you have
to realize that a 32b system has some fundamental issues which come from
how memory is split between the kernel (lowmem - 896MB at maximum) and
highmem. The more memory you have, the more lowmem is consumed by kernel
data structures. Just consider that ~160MB of this space is eaten by
struct pages describing 16GB of memory. There are other data structures
which can only live in low memory.
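The ~160MB figure above can be sanity-checked with a quick back-of-the-envelope
calculation; this sketch assumes a 4KB page size and sizeof(struct page) == 40
bytes on i386 (the exact size varies with kernel config):

```shell
# How much lowmem do struct pages need to describe 16 GB of RAM?
page_size=4096                                # 4 KB pages on i386
pages=$(( (16 << 30) / page_size ))           # number of page frames
echo "page frames: $pages"                    # -> 4194304
echo "struct page overhead: $(( pages * 40 >> 20 )) MB"   # -> ~160 MB
```

All of that overhead must come out of the 896MB lowmem region, which is why
the pressure grows with total RAM even though userspace never touches it.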

> I.e. I'm afraid that this regression doesn't affect 16+ GB RAM systems only;
> it just happens that it's clearly visible there.
> 
> And it might even affect 64bit systems with even more RAM; but I don't have
> any such system to test with.

Not really. 64b systems do not need the kernel/userspace split because the
address space is large enough. If there are any regressions since 3.0 then
we are certainly interested in hearing about them.
 
> root@pc:/proc/sys/vm# grep . *
> dirty_ratio:20
> highmem_is_dirtyable:0

This means that highmem is not dirtyable, so only 20% of the free
lowmem (plus the page cache in that region) is considered, and writers
might get throttled quite early (this can be a really low number when
lowmem is already congested). Do you see the same problem when enabling
highmem_is_dirtyable = 1?
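To get a feel for how low that limit can be, here is a rough sketch of the
effective dirty threshold with highmem excluded; the 600 MB free-lowmem
figure is purely an assumption for illustration (the real number would come
from /proc/zoneinfo), and the sysctl line is how one would test the
suggestion above:

```shell
# Effective dirty limit when only lowmem is dirtyable, with
# vm.dirty_ratio = 20 as in the settings quoted above.
dirty_ratio=20
free_lowmem_mb=600            # assumed for illustration; check /proc/zoneinfo
echo "dirty limit: $(( free_lowmem_mb * dirty_ratio / 100 )) MB"   # -> 120 MB

# To try the suggestion (needs root, takes effect immediately):
#   sysctl -w vm.highmem_is_dirtyable=1
```

120MB of dirty data against a 16GB machine explains why writers hit the
throttle almost immediately during a large copy.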
-- 
Michal Hocko
SUSE Labs
