----- Original Message -----
> Hi,
>
> Thank you for your response.
> Do you have an estimated time for when you will implement a command to
> simplify this flow? It would help a lot!
>
> Along the way, I ran into another problem -
> As I wrote, the vmcore_add_device_dump API lets me store a device's data
> (registers/memory) in the vmcore file in dedicated "sections".
> Currently my goal is to be able to load a vmcore with the crash tool and
> see the stored device data in a "native" manner, as if it were stored in
> RAM...
> That means I want to be able to access memory in the device just as I
> access memory in RAM.
> I'm trying to find the proper way to allow that kind of memory exploration.
> Since the vmcore contains only RAM content, I am currently considering
> adding these extra regions to the memory map of the vmcore...
> Maybe I have to do that in two steps:
> 1. Extract the data - following your explanation, get the raw data for
>    each device.
> 2. Push it into the vmcore as a different section (MEMORY type).

That's something that's beyond the scope of this mailing list, i.e., changing
the format of the vmcore.  We're pretty much a slave to whatever the dumpfile
creators come up with.

> Is there a better way to do that?
> Suggestions?

I'm not sure if it applies to your needs, but you can use the "rd -f" option
to read and display data from the dumpfile.  It's similar to the normal "rd"
command, except that the "address" argument is taken as a dumpfile offset.
Whether any of the command's display-type options allow you to make sense of
the raw data is another question.

There are also other crash commands that take a "-f" option, such as the
"struct" command, where again the address argument is taken as a file offset
value.  So if the device dump contains raw dumps of data structures, you can
display them as if you were supplying a virtual address.
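For illustration only, tying it to the example quoted later in this message,
where offset_note is 4200 and the first device dump's data would presumably
begin 64 bytes further in (right after its vmcoredd_header), something along
these lines could be tried; the offset, count, and struct name here are
placeholders, not values taken from your dump:

   crash> rd -f 4264 32
   crash> struct <struct-name> -f <file-offset>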
Not sure if that helps.

Dave

> Thanks,
> Tirtsah.
>
>
>
> -----Original Message-----
> From: crash-utility-bounces@xxxxxxxxxx <crash-utility-bounces@xxxxxxxxxx>
> On Behalf Of Rahul Lakkireddy
> Sent: Wednesday, March 27, 2019 17:59
> To: Dave Anderson <anderson@xxxxxxxxxx>
> Cc: nirranjan@xxxxxxxxxxx; indranil@xxxxxxxxxxx; Discussion list for crash
> utility usage, maintenance and development <crash-utility@xxxxxxxxxx>
> Subject: Re: CONFIG_PROC_VMCORE_DEVICE_DUMP
>
> On Tuesday, March 03/26/19, 2019 at 20:07:09 +0530, Dave Anderson wrote:
> >
> >
> > ----- Original Message -----
> > >
> > >
> > > Hi,
> > > I'm enabling a core dump flow on my Linux system, using the Linux kexec
> > > mechanism. This flow ends with a vmcore ELF image reflecting the full
> > > memory state at the moment of the crash. The vmcore driver in the
> > > kernel that outputs the vmcore ELF image has a vmcore_add_device_dump
> > > API that's used for pushing some extra hardware memory sections into
> > > the ELF.
> > >
> > > However, after the vmcore file is generated, I want to see this data,
> > > but I don't know how to do that. I'm using the crash utility for the
> > > core analysis, so I wonder - maybe there is an option to also see this
> > > extra data using the crash utility?
> > >
> > >
> > >
> > > It's very important to me,
> > >
> > > Thanks,
> > >
> > > Tirtsah.
> >
> > I found an old email thread (September 2018) from Rahul Lakkireddy
> > with respect to his proposal to add a new "help" option to dump the
> > contents of the ELF note.
> > I have added his email to the cc: list.
> >
> > Rahul -- are you still planning to post a crash utility patch?
> >
>
> Yes, I still have plans to implement a new command to simplify extracting
> the device dumps from a vmcore using the crash utility.
>
> Meanwhile, I've been using crash, xxd, and dd to extract vmcore device
> dumps. They are the first ELF notes immediately after the kdump sub header.
>
> 1. Get the offset_note, notes_buf, and the first NT_PRSTATUS starting
>    offset (i.e. pr_status_notes[0]).
>
>    # crash vmlinux vmcore
>
>    crash> help -D
>    [...]
>    offset_note: 4200 (0x1068)
>    size_note: 67121852 (0x40032bc)
>    notes_buf: 7f77e112d010
>    num_prstatus_notes: 8
>    notes[0]: 7f77e512f010 (NT_PRSTATUS)
>    [...]
>
> 2. The vmcore device dumps are located between the start of the notes
>    (file offset offset_note) and the first NT_PRSTATUS note. So, in the
>    above case, the vmcore device dumps are at offset 4200 and are of size
>    (0x7f77e512f010 - 0x7f77e112d010) = 67117056 bytes.
>
> 3. Extract the vmcore device dumps using dd:
>
>    # dd if=vmcore of=device.dump skip=4200 bs=1 count=67117056
>
> 4. Parse through all the notes having note type 0x700 in device.dump to
>    extract the individual dumps. The first 64 bytes are the
>    vmcoredd_header, as follows:
>
>    /* copied from include/uapi/linux/vmcore.h */
>    #define VMCOREDD_NOTE_NAME "LINUX"
>    #define VMCOREDD_MAX_NAME_BYTES 44
>
>    struct vmcoredd_header {
>        __u32 n_namesz; /* Name size */
>        __u32 n_descsz; /* Content size */
>        __u32 n_type;   /* NT_VMCOREDD */
>        __u8 name[8];   /* LINUX\0\0\0 */
>        __u8 dump_name[VMCOREDD_MAX_NAME_BYTES]; /* Device dump's name */
>    };
>
>    By using n_namesz, n_descsz, and the fixed 12-byte ELF note header,
>    we can find the next vmcore device dump.
>
>    For example, the first 64 bytes of device.dump are as follows:
>
>    # xxd -l 64 device.dump
>    0000000: 0800 0000 ec0f 0002 0007 0000 4c49 4e55  ............LINU
>    0000010: 5800 0000 6378 6762 345f 3030 3030 3a30  X...cxgb4_0000:0
>    0000020: 323a 3030 2e34 0000 0000 0000 0000 0000  2:00.4..........
>    0000030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
>
>    Converting the above to a vmcoredd_header, we get:
>
>    vmcoredd_header:
>        n_namesz  = 0x00000008;
>        n_descsz  = 0x02000fec;
>        n_type    = 0x00000700;
>        name      = "LINUX";
>        dump_name = "cxgb4_0000:02:00.4"
>
>    Note that the size of this dump, including the 12-byte ELF note header
>    (i.e. sizeof(n_namesz) + sizeof(n_descsz) + sizeof(n_type)) plus the
>    values in n_namesz and n_descsz, is only 12 + 8 + 33558508 = 33558528
>    out of the total 67117056 calculated in step 2.
>
>    So, the next device dump is at:
>
>    # xxd -l 64 -s 33558528 device.dump
>    2001000: 0800 0000 ec0f 0002 0007 0000 4c49 4e55  ............LINU
>    2001010: 5800 0000 6378 6762 345f 3030 3030 3a30  X...cxgb4_0000:0
>    2001020: 333a 3030 2e34 0000 0000 0000 0000 0000  3:00.4..........
>    2001030: 0000 0000 0000 0000 0000 0000 0000 0000  ................
>
>    And so on...
>
>
> All of the above will be simplified with the new command.
>
> Thanks,
> Rahul
>
> --
> Crash-utility mailing list
> Crash-utility@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/crash-utility
> ---------------------------------------------------------------------
> Intel Israel (74) Limited
>
> This e-mail and any attachments may contain confidential material for
> the sole use of the intended recipient(s). Any review or distribution
> by others is strictly prohibited. If you are not the intended
> recipient, please contact the sender and delete all copies.

--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility
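As a rough, untested sketch of step 4 in the walkthrough quoted above: the
small C program below walks a blob extracted with the dd command from step 3
and lists each NT_VMCOREDD note. It assumes only the vmcoredd_header layout
shown in that message, a dump whose byte order matches the host, and an input
file named device.dump; the program name and output format are illustrative.

  /* vmcoredd_walk.c - list the device dumps in a blob produced by
   * "dd if=vmcore of=device.dump skip=<offset_note> bs=1 count=<size>".
   * Build: cc -o vmcoredd_walk vmcoredd_walk.c
   * Run:   ./vmcoredd_walk device.dump
   */
  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>

  #define NT_VMCOREDD             0x700
  #define VMCOREDD_MAX_NAME_BYTES 44

  /* Same layout as struct vmcoredd_header in include/uapi/linux/vmcore.h:
   * the 12-byte ELF note header, the 8-byte "LINUX" name, and the first
   * 44 bytes of the note descriptor (the dump's name). */
  struct vmcoredd_header {
      uint32_t n_namesz;                           /* name size (8) */
      uint32_t n_descsz;                           /* dump_name + device data */
      uint32_t n_type;                             /* NT_VMCOREDD */
      uint8_t  name[8];                            /* "LINUX\0\0\0" */
      uint8_t  dump_name[VMCOREDD_MAX_NAME_BYTES]; /* e.g. "cxgb4_0000:02:00.4" */
  };

  static uint32_t align4(uint32_t v) { return (v + 3) & ~(uint32_t)3; }

  int main(int argc, char **argv)
  {
      struct vmcoredd_header hdr;
      long offset = 0;
      FILE *fp;

      if (argc != 2) {
          fprintf(stderr, "usage: %s device.dump\n", argv[0]);
          return 1;
      }
      fp = fopen(argv[1], "rb");
      if (!fp) {
          perror(argv[1]);
          return 1;
      }

      /* Read one 64-byte vmcoredd_header per iteration. */
      while (fread(&hdr, sizeof(hdr), 1, fp) == 1) {
          char name[VMCOREDD_MAX_NAME_BYTES + 1];
          unsigned long data_size;

          if (hdr.n_type != NT_VMCOREDD ||
              hdr.n_descsz < VMCOREDD_MAX_NAME_BYTES)
              break;                  /* past the device dump notes */

          memcpy(name, hdr.dump_name, VMCOREDD_MAX_NAME_BYTES);
          name[VMCOREDD_MAX_NAME_BYTES] = '\0';

          /* The device data follows dump_name inside the descriptor. */
          data_size = hdr.n_descsz - VMCOREDD_MAX_NAME_BYTES;
          printf("offset %#lx: %s, %lu bytes of device data\n",
                 (unsigned long)offset, name, data_size);

          /* Next note: 12-byte header plus 4-byte-aligned name and
           * descriptor, as in the arithmetic shown in step 4. */
          offset += 12 + align4(hdr.n_namesz) + align4(hdr.n_descsz);
          if (fseek(fp, offset, SEEK_SET) != 0)
              break;
      }

      fclose(fp);
      return 0;
  }

Against the quoted example this would be expected to print one line per cxgb4
device dump (two of them, roughly 33 MB of device data each); writing each
payload out to a per-device file is a straightforward extension of the same
loop.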