Re: [PATCH RFC 7/7] mm: better document PG_reserved

On 05.12.18 19:13, David Hildenbrand wrote:
> On 05.12.18 18:32, Matthew Wilcox wrote:
>> On Wed, Dec 05, 2018 at 04:05:12PM +0100, David Hildenbrand wrote:
>>> On 05.12.18 15:35, Matthew Wilcox wrote:
>>>> On Wed, Dec 05, 2018 at 01:28:51PM +0100, David Hildenbrand wrote:
>>>>> I don't see a reason why we have to document "Some of them might not even
>>>>> exist". If there is a user, we should document it. E.g. for balloon
>>>>> drivers we now use PG_offline to indicate that a page might currently
>>>>> not be backed by memory in the hypervisor. And that is independent from
>>>>> PG_reserved.
>>>>
>>>> I think you're confused by the meaning of "some of them might not even
>>>> exist".  What this means is that there might not be memory there; maybe
>>>> writes to that memory will be discarded, or maybe they'll cause a machine
>>>> check.  Maybe reads will return ~0, or 0, or cause a machine check.
>>>> We just don't know what's there, and we shouldn't try touching the memory.
>>>
>>> If there are users, let's document it. And I need more details for that :)
>>>
>>> 1. machine check: if there is a HW error, we set PG_hwpoison (except
>>> ia64 MCA, see the list)
>>>
>>> 2. Writes to that memory will be discarded
>>>
>>> Who is the user of that? When will we have such pages right now?
>>>
>>> 3. Reads will return ~0 / 0?
>>>
>>> I think this is a special case of e.g. x86? But where do we have that,
>>> are there any users?
>>
>> When there are gaps in the physical memory.  As in, if you put that
>> physical address on the bus (or in a packet), no device will respond
>> to it.  Look:
>>
>> 00000000-00000fff : Reserved
>> 00001000-00057fff : System RAM
>> 00058000-00058fff : Reserved
>> 00059000-0009dfff : System RAM
>> 0009e000-000fffff : Reserved
>>
>> Those examples I gave are examples of how various different architectures
>> respond to "no device responded to this memory access".
>>
> 
> Okay, so for this memory we will have
> a) vmmaps
> b) Memory block devices
> c) A section that is online
> 
> So essentially "gaps in physical memory" which are part of an online section.
> 
> This might be a candidate for PG_offline as well.
> 
> Thanks for the info, I'll try to find out how such things are handled.
> In general I assume this memory has to be readable, because otherwise
> kdump and friends would crash already when trying to dump?
> 

So I finally understood how physical memory holes in online sections are
handled when dumping: they won't be dumped, because the list of dumpable
chunks (exposed via /proc/kcore and, after a crash, /proc/vmcore) is
built using walk_system_ram_range(). Anything not registered as System
RAM is therefore ignored.
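
Just to illustrate (a minimal sketch, not the actual fs/proc/kcore.c
code; the function names below are made up): walk_system_ram_range()
only invokes the callback for ranges registered as IORESOURCE_SYSTEM_RAM,
so a dumper built on top of it never even sees the "Reserved" holes from
the /proc/iomem excerpt above.

#include <linux/ioport.h>
#include <linux/kernel.h>

/* Callback invoked once per System RAM range; holes are never visited. */
static int note_ram_range(unsigned long start_pfn, unsigned long nr_pages,
                          void *arg)
{
        unsigned long *total_pages = arg;

        pr_info("dumpable: PFN %#lx-%#lx\n", start_pfn,
                start_pfn + nr_pages - 1);
        *total_pages += nr_pages;
        return 0;       /* 0 == keep walking */
}

/* Count the dumpable pages in [start_pfn, start_pfn + nr_pages). */
static unsigned long count_dumpable_ram(unsigned long start_pfn,
                                        unsigned long nr_pages)
{
        unsigned long total_pages = 0;

        walk_system_ram_range(start_pfn, nr_pages, &total_pages,
                              note_ram_range);
        return total_pages;
}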

I will update the documentation to describe that if an online section is
not completely IORESOURCE_SYSTEM_RAM, the physical memory gaps within it
will also be marked PG_reserved.
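
For reference, the expectation I'd document, expressed as a (hypothetical,
untested) helper -- pfn_to_online_page() hands us the memmap only for
online sections, and page_is_ram() tells us whether the PFN is actually
backed by System RAM:

#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/page-flags.h>

/*
 * A PFN that falls into a hole of an online section has a memmap, but
 * no System RAM behind it; its struct page is expected to be PG_reserved.
 */
static bool pfn_is_reserved_hole(unsigned long pfn)
{
        struct page *page = pfn_to_online_page(pfn);

        if (!page)
                return false;   /* section offline or no memmap at all */
        if (page_is_ram(pfn))
                return false;   /* backed by System RAM, not a hole */

        return PageReserved(page);
}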

Trying to touch this memory is indeed dangerous; luckily, dumping handles
it properly.

I'll think about whether marking these ranges as PG_offline might make
sense (and whether it can be added easily). Then, just from seeing that
page type, we would know directly that we should not touch it. Ever.
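
Roughly what I have in mind (just a sketch, assuming the PG_offline page
type from this series; whether it can coexist with PG_reserved here and
with the balloon users is exactly what I still have to check):

#include <linux/memory_hotplug.h>
#include <linux/mm.h>
#include <linux/page-flags.h>

/* Mark the memmap of memory holes in an online PFN range as PG_offline. */
static void mark_hole_pages_offline(unsigned long start_pfn,
                                    unsigned long nr_pages)
{
        unsigned long pfn;

        for (pfn = start_pfn; pfn < start_pfn + nr_pages; pfn++) {
                struct page *page = pfn_to_online_page(pfn);

                if (!page || page_is_ram(pfn))
                        continue;       /* only touch the memmap of holes */

                /* "Never touch the memory backing this page." */
                __SetPageOffline(page);
        }
}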

That hint was really helpful :)

-- 

Thanks,

David / dhildenb


