----- "Dave Anderson" <anderson@xxxxxxxxxx> wrote: > ----- "Adrien Kunysz" <adk@xxxxxxxxxx> wrote: > > > > When the vmcore is created by "snap", it first looks at /proc/iomem for > > > the regions of physical memory that are dumpable. Therefore it rejected > > > any physical address above f7feffff, which is the problem at hand. > > > > > > > f7ff0000-f7ffefff : ACPI Tables > > f7fff000-f7ffffff : ACPI Non-volatile Storage > > > > > Just for sanity's sake, what does "cat /proc/iomem" show? > > > > 00000000-00099bff : System RAMcat > > 00099c00-0009ffff : reserved > > 000a0000-000bffff : Video RAM area > > 000c0000-000c9fff : Video ROM > > 000ca000-000cafff : Adapter ROM > > 000cb000-000cbfff : Adapter ROM > > 000cc000-000ccfff : Adapter ROM > > 000cd000-000cdfff : Adapter ROM > > 000ce000-000cefff : Adapter ROM > > 000cf000-000d4bff : Adapter ROM > > 000f0000-000fffff : System ROM > > 00100000-f7feffff : System RAM > > 00100000-00318afa : Kernel code > > 00318afb-00469ec0 : Kernel data > > f7ff0000-f7ffefff : ACPI Tables > > f7fff000-f7ffffff : ACPI Non-volatile Storage > > fbf00000-fbffffff : PCI Bus #01 > > fbf80000-fbf9ffff : 0000:01:02.1 > > fbf80000-fbf9ffff : e1000 > > fbfa0000-fbfbffff : 0000:01:02.0 > > fbfa0000-fbfbffff : e1000 > > fbfc0000-fbfdffff : 0000:01:01.1 > > fbfc0000-fbfdffff : e1000 > > fbfe0000-fbffffff : 0000:01:01.0 > > fbfe0000-fbffffff : e1000 > > fc000000-fc4fffff : PCI Bus #02 > > fc4e0000-fc4effff : 0000:02:03.0 > > fc4fc000-fc4fffff : 0000:02:03.0 > > fc500000-fe5fffff : PCI Bus #03 > > fd000000-fdffffff : 0000:03:03.0 > > fe5fd000-fe5fdfff : 0000:03:00.1 > > fe5fd000-fe5fdfff : ohci_hcd > > fe5fe000-fe5fefff : 0000:03:00.0 > > fe5fe000-fe5fefff : ohci_hcd > > fe5ff000-fe5fffff : 0000:03:03.0 > > fe6fe000-fe6fefff : 0000:00:02.1 > > fe6ff000-fe6fffff : 0000:00:01.1 > > fe700000-fe7fffff : PCI Bus #01 > > fe800000-fe8fffff : PCI Bus #02 > > feafe000-feafefff : 0000:04:02.1 > > feaff000-feafffff : 0000:04:01.1 > > ff700000-ffffffff : reserved > > Apparently I don't understand /proc/iomem -- I was under the > assumption that it would show all "System RAM" segments. Is > that not the case? Damn. In RHEL4, when creating the "System RAM" resource segments, it does this: void __init e820_reserve_resources(void) { int i; for (i = 0; i < e820.nr_map; i++) { struct resource *res; if (e820.map[i].addr + e820.map[i].size > 0x100000000ULL) continue; res = alloc_bootmem_low(sizeof(struct resource)); switch (e820.map[i].type) { case E820_RAM: res->name = "System RAM"; break; case E820_ACPI: res->name = "ACPI Tables"; break; case E820_NVS: res->name = "ACPI Non-volatile Storage"; break; default: res->name = "reserved"; } ... and so doesn't bother with anything above 4GB. RHEL5 seems to accept anything: void __init e820_reserve_resources(void) { int i; for (i = 0; i < e820.nr_map; i++) { struct resource *res; res = alloc_bootmem_low(sizeof(struct resource)); switch (e820.map[i].type) { case E820_RAM: res->name = "System RAM"; break; case E820_ACPI: res->name = "ACPI Tables"; break; case E820_NVS: res->name = "ACPI Non-volatile Storage"; break; case E820_RUNTIME_CODE: res->name = "EFI runtime code"; break; default: res->name = "reserved"; } ... 
And upstream makes it configurable based upon CONFIG_PHYS_ADDR_T_64BIT,
so it should work OK:

#ifdef CONFIG_PHYS_ADDR_T_64BIT
typedef u64 phys_addr_t;
#else
typedef u32 phys_addr_t;
#endif

typedef phys_addr_t resource_size_t;

static struct resource __initdata *e820_res;
void __init e820_reserve_resources(void)
{
	int i;
	struct resource *res;
	u64 end;

	res = alloc_bootmem_low(sizeof(struct resource) * e820.nr_map);
	e820_res = res;
	for (i = 0; i < e820.nr_map; i++) {
		end = e820.map[i].addr + e820.map[i].size - 1;
		if (end != (resource_size_t)end) {
			res++;
			continue;
		}
		...

So this does appear (unless anybody can show me evidence otherwise)
to be a RHEL4 (2.6.9-ish) issue only?

Dave

--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility
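As a quick sanity check on that upstream test, a standalone program --
a minimal sketch, using stdint types to stand in for the kernel's
u32/u64 phys_addr_t -- shows the "end != (resource_size_t)end"
comparison keeping a >4GB e820 entry only when the type is 64 bits wide:

/*
 * Demonstration (stdint types standing in for the kernel's u32/u64):
 * the cast in "end != (resource_size_t)end" truncates only when
 * resource_size_t cannot hold the entry's last byte, i.e. on a 32-bit
 * phys_addr_t without CONFIG_PHYS_ADDR_T_64BIT.
 */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t end = 0x12fffffffULL;	/* e820 entry ending above 4GB */

	/* 32-bit resource_size_t: the cast truncates to 0x2fffffff,
	 * the values differ, and the entry would be skipped */
	printf("u32 resource_size_t: %s\n",
	       end != (uint32_t)end ? "skipped" : "registered");

	/* 64-bit resource_size_t: the cast is lossless, entry is kept */
	printf("u64 resource_size_t: %s\n",
	       end != (uint64_t)end ? "skipped" : "registered");

	return 0;
}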