[RFC v2] gcore: process core dump feature for crash utility

Hello.

This is RFC version 2 of the gcore sub-command, which provides a
process core dump feature for the crash utility.

Since RFC v1, I have investigated how to restore user-mode register
values. This patch reflects that investigation.

Any comments or suggestions are welcome.


Changes in short
================

The changes include:

  1) collect user-space register values more appropriately, though
     not yet ideally

  2) re-design gcore sub-command as an extension module


Thanks to (1), GDB's bt command now displays backtraces normally.

diffstat output:

 Makefile                                  |    6 +-
 defs.h                                    |    2 +
 extensions/gcore.c                        |   21 +
 extensions/gcore.mk                       |   48 +
 extensions/libgcore/2.6.34/x86_64/gcore.c | 2033 +++++++++++++++++++++++++++++
 extensions/libgcore/2.6.34/x86_64/gcore.h |  651 +++++++++
 netdump.c                                 |   27 +
 tools.c                                   |    1 -
 8 files changed, 2787 insertions(+), 2 deletions(-)

Current Status
==============

  I've continued developing the gcore sub-command, but this version is
  still under development.

  Ultimately, I'm going to implement gcore as I described in RFC v1
  and as I will explain in ``Detailed Changes and Issues'' below.


How to build and use
====================

  I've attached the patchset to this mail.

    - crash-gcore-RFCv2.patch

  Please use crash version 5.0.5 on x86_64.

  Follow these steps:

    $ tar xf crash-5.0.5.tar.gz
    $ cd crash-5.0.5/
    $ patch -p1 < crash-gcore-RFCv2.patch
    $ make
    $ make extensions
    $ crash <debuginfo> <vmcore> .... (*)
    crash> extend gcore.so

  At step (*), note that gcore.so has already been generated under the
  extensions/ directory by `make extensions'.


Detailed Changes and Issues
===========================

1) collect user-space register values more appropriately, though not
   yet ideally

  The previous version didn't retrieve appropriate register values
  because it didn't consider the save/restore operations performed at
  interrupts in the kernel at all.

  I've added restore operations according to the kind of interrupt
  through which the target task entered kernel mode. See fill_pr_reg()
  in gcore.c.

  But unfortunately, the current version is still not ideal, since the
  full solution would take some time to implement.

  More precisely, not all user-mode registers are always restored. The
  full set is saved only at exceptions, NMIs and some kinds of system
  calls. At other kinds of interrupts, all register values are saved
  except the 6 callee-saved registers: rbp, rbx, r12, r13, r14 and
  r15.

  In theory, these can be restored using the Call Frame Information
  generated by the compiler as part of the debugging information (the
  .debug_frame section), which tells us the stack offsets at which the
  respective callee-saved registers are saved.

  But I don't do this yet, since I haven't found any useful library
  for it. I could, of course, implement it manually, but that would
  take some time. I did find unwind_x86_32_64.c, which provides
  related routines, but it looks unfinished to me.
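  In the meantime, the mechanics can be sketched as follows. This is
  only an illustration of how a .debug_frame rule would be applied once
  a CFI reader exists; the struct and function names are invented for
  the example and are not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustration only (names are made up): a DWARF CFI rule from
 * .debug_frame says "register R is saved at CFA + offset" for a given
 * PC range.  Applying such a rule to a copy of the kernel stack
 * recovers one callee-saved register.
 */
struct cfi_reg_rule {
	int dwarf_regnum;	/* e.g. 6 = rbp, 12 = r12 on x86_64 */
	long cfa_offset;	/* offset from the CFA, typically negative */
};

static uint64_t
cfi_restore_reg(const uint64_t *stack, uint64_t stack_base,
		uint64_t cfa, const struct cfi_reg_rule *rule)
{
	uint64_t addr = cfa + rule->cfa_offset;	/* address of the saved slot */
	return stack[(addr - stack_base) / sizeof(uint64_t)];
}
```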

  On the other hand, the frame pointer, rbp, can be restored by
  unwinding it repeatedly until its value reaches a user-space
  address.
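  The rbp case can be sketched like this: a self-contained simulation
  over an in-memory copy of the kernel stack. The function name and
  the TASK_SIZE_MAX boundary are assumptions for illustration, not
  code from the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TASK_SIZE_MAX 0x00007fffffffffffUL	/* assumed x86_64 user ceiling */

/*
 * Hypothetical helper: starting from the saved kernel-mode rbp, follow
 * the frame-pointer chain (each frame stores the previous rbp at [rbp])
 * until the value falls into the user-space address range.  Here the
 * kernel stack is passed in as an array copied out of the dump.
 */
static uint64_t
restore_user_rbp(const uint64_t *stack, size_t nwords,
		 uint64_t stack_base, uint64_t rbp)
{
	while (rbp > TASK_SIZE_MAX) {	/* still a kernel address */
		size_t idx = (rbp - stack_base) / sizeof(uint64_t);
		if (idx >= nwords)
			break;		/* chain left the captured stack */
		rbp = stack[idx];	/* saved rbp of the caller */
	}
	return rbp;
}
```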


2) re-design gcore sub-command as an extension module

In response to my previous post, Dave suggested that the gcore
subcommand be provided as an extension module per kernel version and
architecture type, since the process core dump feature inherently
depends on kernel data structures.

I agreed with the suggestion and have tried to redesign the patchset.

Although the current patchset merely moves the gcore files into the
./extensions directory, I've also considered a better design. That is,

  (1) architecture- or kernel-version independent part is provided
      just under ./extensions

  (2) only architecture- or kernel-version specific part is provided as
      certain extension module.

The following directory structure depicts this briefly:

  crash-5.0.5/
    extensions/
      gcore.mk
      gcore.c  ... (1)
      libgcore/ ... (2)
        2.6.34/
          x86_64/
            gcore_note.h
            gcore_note.c

I think this is relatively easily feasible by porting the kernel's
regset interface, which is used to implement the ptrace feature and
hides implementation details across many architectures.
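As a rough sketch of what such a port might look like (names modeled
loosely on the kernel's struct user_regset; nothing below is from the
actual patch): each architecture module exports a table of register
sets, and the generic ELF-note writer iterates over it without knowing
the per-architecture details.

```c
#include <assert.h>
#include <string.h>

/* Sketch only: a crash-side analogue of the kernel's regset interface. */
struct user_regset {
	unsigned int core_note_type;			/* NT_PRSTATUS, ... */
	unsigned int size;				/* bytes per set */
	int (*active)(unsigned long task);		/* present for task? */
	int (*get)(unsigned long task, void *buf);	/* fill register data */
};

static int demo_active(unsigned long task) { (void)task; return 1; }

static int demo_get(unsigned long task, void *buf)
{
	(void)task;
	memset(buf, 0x5a, 16);	/* stand-in for readmem() of saved registers */
	return 1;
}

/* A per-architecture table the generic note writer would walk. */
static const struct user_regset demo_regsets[] = {
	{ 1 /* NT_PRSTATUS */, 16, demo_active, demo_get },
};
```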

It also helps to port kernel code into gcore and to maintain the
source code uniformly across a variety of kernel versions on multiple
architectures.

I'm going to re-implement it this way in the next version. From that
version on, I won't change the gcore source code dramatically; it will
change only when adding new extension modules.


Thanks
--
HATAYAMA Daisuke
diff --git a/Makefile b/Makefile
index cb59d8a..887b457 100644
--- a/Makefile
+++ b/Makefile
@@ -137,7 +137,11 @@ EXTENSION_SOURCE_FILES=${EXTENSIONS}/Makefile ${EXTENSIONS}/echo.c ${EXTENSIONS}
         ${EXTENSIONS}/libsial/sial_var.c \
         ${EXTENSIONS}/libsial/sial.y \
         ${EXTENSIONS}/sial.c \
-        ${EXTENSIONS}/sial.mk
+        ${EXTENSIONS}/sial.mk \
+        ${EXTENSIONS}/gcore.c \
+        ${EXTENSIONS}/gcore.mk \
+        ${EXTENSIONS}/libgcore/2.6.34/x86_64/gcore.c \
+        ${EXTENSIONS}/libgcore/2.6.34/x86_64/gcore.h
 
 DAEMON_OBJECT_FILES=remote_daemon.o va_server.o va_server_v1.o \
 	lkcd_common.o lkcd_v1.o lkcd_v2_v3.o lkcd_v5.o lkcd_v7.o lkcd_v8.o \
diff --git a/defs.h b/defs.h
index bd8d492..ef31691 100755
--- a/defs.h
+++ b/defs.h
@@ -4255,6 +4255,7 @@ int xen_minor_version(void);
 int get_netdump_arch(void);
 void *get_regs_from_elf_notes(struct task_context *);
 void map_cpus_to_prstatus(void);
+int get_x86_64_user_regs_struct_from_elf_notes(ulong task, ulong **regs);
 
 /*
  *  diskdump.c
@@ -4349,6 +4350,7 @@ int remote_memory_read(int, char *, int, physaddr_t);
  *  gnu_binutils.c
  */
 
+
 /* NO LONGER IN USE */
 
 /*
diff --git a/extensions/gcore.c b/extensions/gcore.c
new file mode 100644
index 0000000..6f8b7f3
--- /dev/null
+++ b/extensions/gcore.c
@@ -0,0 +1,21 @@
+/* gcore.c -- core analysis suite
+ *
+ * Copyright (C) 2010 FUJITSU LIMITED
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+/*
+ * This is a dummy file.
+ *
+ * See ./extensions/Makefile to see why this file is needed.
+ *
+ */
diff --git a/extensions/gcore.mk b/extensions/gcore.mk
new file mode 100644
index 0000000..dd88510
--- /dev/null
+++ b/extensions/gcore.mk
@@ -0,0 +1,48 @@
+#
+# Copyright (C) 2010 FUJITSU LIMITED
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+
+ifeq ($(shell arch), i686)
+  TARGET=X86
+  TARGET_CFLAGS=-D_FILE_OFFSET_BITS=64
+endif
+ifeq ($(shell arch), ppc64)
+  TARGET=PPC64
+  TARGET_CFLAGS=-m64
+endif
+ifeq ($(shell arch), ia64)
+  TARGET=IA64
+  TARGET_CFLAGS=
+endif
+ifeq ($(shell arch), x86_64)
+  TARGET=X86_64
+  TARGET_CFLAGS=
+endif
+
+ifeq ($(shell /bin/ls /usr/include/crash/defs.h 2>/dev/null), /usr/include/crash/defs.h)
+  INCDIR=/usr/include/crash
+endif
+ifeq ($(shell /bin/ls ../defs.h 2> /dev/null), ../defs.h)
+  INCDIR=..
+endif
+ifeq ($(shell /bin/ls ./defs.h 2> /dev/null), ./defs.h)
+  INCDIR=.
+endif
+
+KERNEL_VERSION=2.6.34
+GCORE_CFILE=./libgcore/$(KERNEL_VERSION)/$(shell arch)/gcore.c
+
+all: gcore.so
+	
+gcore.so: $(INCDIR)/defs.h $(GCORE_CFILE)
+	gcc -Wall -I$(INCDIR) -nostartfiles -shared -rdynamic -o gcore.so $(GCORE_CFILE) -fPIC -D$(TARGET) $(TARGET_CFLAGS)
diff --git a/extensions/libgcore/2.6.34/x86_64/gcore.c b/extensions/libgcore/2.6.34/x86_64/gcore.c
new file mode 100644
index 0000000..3eacdb5
--- /dev/null
+++ b/extensions/libgcore/2.6.34/x86_64/gcore.c
@@ -0,0 +1,2033 @@
+/* gcore.c -- core analysis suite
+ *
+ * Copyright (C) 2010 FUJITSU LIMITED
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#ifdef X86_64
+#include "defs.h"
+#include "gcore.h"
+#include <elf.h>
+
+int _init(void);
+int _fini(void);
+
+void cmd_gcore(void);
+char *help_gcore[];
+
+static struct command_table_entry command_table[] = {
+	{ "gcore", cmd_gcore, help_gcore, REFRESH_TASK_TABLE },    
+	{ NULL }                               
+};
+
+static int thread_group_cputime(struct task_cputime *times, struct thread_group_list *tglist);
+static int pid_nr_ns(ulong pid, ulong ns, pid_t *nr);
+static int pid_alive(ulong task, int *alive);
+static int __task_pid_nr_ns(ulong task, enum pid_type type, pid_t *nr);
+static inline int task_pid_vnr(ulong task, pid_t *nr);
+static inline int task_pgrp_vnr(ulong task, pid_t *nr);
+static inline int task_session_vnr(ulong task, pid_t *nr);
+static int fill_prstatus(struct elf_prstatus *prstatus, ulong task, ulong signr, struct thread_group_list *tglist);
+static int fill_auxv_note(struct memelfnote *note, ulong task);
+#ifdef X86_64
+static int ioperm_active(ulong task, int *active);
+static int fill_ioperm(ulong task, void *data);
+static int tsk_used_math(ulong task, int *onbit);
+static int boot_cpu_has(int feature, int *has);
+static inline int cpu_has_xsave(int *has);
+static inline int cpu_has_fxsr(int *has);
+static inline int cpu_has_xmm(int *has);
+static inline int have_hwfp(int *have);
+static int task_used_fpu(ulong task, int *used);
+static int init_fpu(ulong task, union thread_xstate *xstate);
+static int fill_xstate(ulong task, union thread_xstate *xstate);
+static int fpregs_active(ulong task, int *active);
+static int xfpregs_active(ulong task, int *active);
+static int fill_xfpregs(ulong task, union thread_xstate *xstate);
+static inline int xstateregs_active(ulong task, int *active);
+#endif /* X86_64 */
+static int task_nice(ulong task, int *nice);
+static int fill_psinfo(struct elf_prpsinfo *psinfo, ulong task);
+static int notesize(struct memelfnote *en);
+static void fill_note(struct memelfnote *note, const char *name, int type, unsigned int sz, void *data);
+static int alignfile(int fd, off_t *foffset);
+static int writenote(struct memelfnote *men, int fd, off_t *foffset);
+static int test_tsk_thread_flag(ulong task, int bit, int *bool);
+static int get_desc_base(ulong desc, ulong *base);
+static int get_pt_regs_from_stacktop(ulong task, struct pt_regs *regs);
+static int fill_pr_reg(ulong task, elf_gregset_t pr_reg);
+static int fill_thread_core_info(struct elf_thread_core_info *t, long signr, size_t *total, struct thread_group_list *tglist);
+static int fill_note_info(struct elf_note_info *info, long signr, struct thread_group_list *tglist);
+static int write_note_info(int fd, struct elf_note_info *info, off_t *foffset);
+static size_t get_note_info_size(struct elf_note_info *info);
+static void free_note_info(struct elf_note_info *info);
+static int vma_dump_size(ulong vma, ulong *size);
+static int fill_thread_group(struct thread_group_list **tglist, const struct task_context *tc);
+static void free_thread_group(struct thread_group_list *tglist);
+static int format_corename(char *corename, struct task_context *tc);
+static void fill_headers(Elf_Ehdr *elf, Elf_Shdr *shdr0, int segs);
+static ulong next_vma(ulong this_vma);
+static int write_elf_note_phdr(int fd, size_t size, off_t *offset);
+static void do_gcore(struct task_context *tc);
+
+int 
+_init(void) /* Register the command set. */
+{ 
+        register_extension(command_table);
+	return 1;
+}
+ 
+int 
+_fini(void) 
+{ 
+	return 1;
+}
+
+char *help_gcore[] = {
+"gcore",
+"gcore - retrieve a process image as a core dump",
+"[-v vlevel] [-d] [-f filter] [pid | taskp]",
+"  This command retrieves a process image as a core dump.",
+"  ",
+"    -v displays verbose information according to vlevel. Its kinds are as ",
+"       follows:",
+"       (vlevel 1) Progress information",
+"       (vlevel 2) Progress information and access information during gcore",
+"                  command's process.",
+"  ",
+"    -d displays the current filter value.",
+"  ",
+"    -f controls kinds of memory maps written into generated core dumps. The",
+"       number means in bitwise:",
+"  ",
+"           AP  AS  FP  FS  ELF HP  HS",
+"       ------------------------------",
+"         0",
+"         1  x",
+"         2      x",
+"         4          x",
+"         8              x",
+"        16          x       x",
+"        32                      x",
+"        64                          x",
+"       127  x   x   x   x   x   x   x",
+" ",
+"        AP  Anonymous Private Memory",
+"        AS  Anonymous Shared Memory",
+"        FP  File-Backed Private Memory",
+"        FS  File-Backed Shared Memory",
+"        ELF ELF header pages in file-backed private memory areas",
+"        HP  Hugetlb Private Memory",
+"        HS  Hugetlb Shared Memory",
+"  ",
+"  If no pid or taskp is specified, gcore tries to retrieve the process image",
+"  of the current task context.",
+"  ",
+"  The file name of a generated core dump is core.<pid> where pid is PID of",
+"  the specified process.",
+"  ",
+"  For a multi-thread process, gcore generates a core dump containing"
+"  information for all threads, which is similar to a behaviour of the ELF"
+"  core dumper in Linux kernel.",
+"  ",
+"  Notice the difference of PID on between crash and linux that ps command in",
+"  crash utility displays LWP, while ps command in Linux thread group tid,",
+"  precisely PID of the thread group leader.",
+"  ",
+"  gcore provides core dump filtering facility to allow users to select what",
+"  kinds of memory maps to be included in the resulting core dump. There are",
+"  7 kinds memory maps in total, and you can set it up with set command.",
+"  For more detailed information, please see a help command message.",
+"  ",
+"EXAMPLES",
+"  Specify the process you want to retrieve as a core dump. Here assume the",
+"  process with PID 12345.",
+"  ",
+"    crash> gcore 12345",
+"    Saved core.12345",
+"  ",
+"  Next, specify by TASK. Here assume the process placing at the address",
+"  f9d7000 with PID 32323.",
+"  ",
+"    crash> gcore f9d78000",
+"    Saved core.32323",
+"  ",
+"  If no argument is given, gcore tries to retrieve the process of the current",
+"  task context.",
+"  ",
+"    crash> set",
+"         PID: 54321",
+"     COMMAND: \"bash\"",
+"        TASK: e0000040f80c0000",
+"         CPU: 0",
+"       STATE: TASK_INTERRUPTIBLE",
+"    crash> gcore",
+"    Saved core.54321",
+"  ",
+"  When a multi-thread process is specified, the generated core file name has",
+"  the thread leader's PID.",
+"  ",
+"    crash> gcore 12345",
+"    Saved core.12340",
+NULL,
+};
+
+static int gcore_filter = GCORE_FILTER_DEFAULT;
+
+int
+gcore_filter_set(int filter)
+{
+	if (filter < 0 || filter > 127)
+		return 0;
+
+	gcore_filter = filter;
+
+	return 1;
+}
+
+int
+gcore_filter_get(void)
+{
+	return gcore_filter;
+}
+
+enum verbose_level
+{
+	VERBOSE_DEFAULT,
+	VERBOSE_PROGRESS,
+	VERBOSE_NONQUIET,
+};
+
+/*
+ * verbose and gcore_error_handle are defined as global variables
+ * because the functions that handle them are spread across the gcore
+ * implementation; passing them to every function as an argument would
+ * be considerably awkward.
+static enum verbose_level verbose;
+
+static ulong gcore_error_handle;
+
+#define verbose23f(...)							\
+	({								\
+		int b = 0;						\
+		if (verbose != VERBOSE_DEFAULT) {			\
+			b = error(INFO, __VA_ARGS__);			\
+		}							\
+		b;							\
+	})
+
+#define verbose3f(...)							\
+	({								\
+		int b = 0;						\
+		if (verbose == VERBOSE_NONQUIET) {			\
+			b = error(INFO, __VA_ARGS__);			\
+		}							\
+		b;							\
+	})
+
+void
+cmd_gcore(void)
+{
+	int c;
+
+	verbose = VERBOSE_DEFAULT;
+
+	while ((c = getopt(argcnt, args, "df:v:")) != EOF) {
+		switch (c) {
+		case 'd':
+			fprintf(fp, "%d\n", gcore_filter_get());
+			return;
+
+		case 'f': {
+			ulong value;
+
+			if (!decimal(optarg, 0))
+				goto invalid_filter_value;
+
+			value = stol(optarg, FAULT_ON_ERROR, NULL);
+			if (!gcore_filter_set(value))
+				goto invalid_filter_value;
+
+			break;
+
+			invalid_filter_value:
+			error(INFO, "invalid filter value: %s.\n", optarg);
+			goto inc_argerrs;
+
+		}
+		case 'v': {
+			ulong value;
+
+			if (!decimal(optarg, 0) ||
+			    (value = stol(optarg, FAULT_ON_ERROR, NULL)) > 2)
+				goto invalid_verbose_level;
+
+			switch (value) {
+			case 0: verbose = VERBOSE_DEFAULT; break;
+			case 1: verbose = VERBOSE_PROGRESS; break;
+			case 2: verbose = VERBOSE_NONQUIET; break;
+			}
+
+			break;
+		}
+			invalid_verbose_level:
+			error(INFO, "invalid verbose level: %s.\n", optarg);
+		default:
+		inc_argerrs:
+			argerrs++;
+			break;
+		}
+	}
+
+	if (argerrs) {
+		cmd_usage(pc->curcmd, SYNOPSIS);
+	}
+
+	switch (verbose) {
+	case VERBOSE_DEFAULT:
+	case VERBOSE_PROGRESS:
+		gcore_error_handle = RETURN_ON_ERROR | QUIET;
+		break;
+	case VERBOSE_NONQUIET:
+		gcore_error_handle = RETURN_ON_ERROR;
+		break;
+	}
+
+	if (!args[optind]) {
+		do_gcore(CURRENT_CONTEXT());
+		return;
+	}
+
+	if (decimal(args[optind], 0)) {
+		struct task_context *tc;
+		ulong value;
+
+		switch (str_to_context(args[optind], &value, &tc)) {
+		case STR_PID:
+			tc = pid_to_context(value);
+			do_gcore(tc);
+			break;
+		case STR_TASK:
+			tc = task_to_context(value);
+			do_gcore(tc);
+			break;
+		case STR_INVALID:
+			error(INFO, "invalid task or pid: %s\n\n",
+			      args[optind]);
+			break;
+		}
+	} else {
+		error(INFO, "invalid task or pid: %s\n\n",
+		      args[optind]);
+	}
+}
+
+static int
+thread_group_cputime(struct task_cputime *times,
+		     struct thread_group_list *tglist)
+{
+	ulong group_leader, signal, utime, signal_utime, stime,
+		signal_stime;
+
+	group_leader = tglist->task;
+
+	if (!readmem(group_leader + OFFSET(task_struct_signal), KVADDR,
+		     &signal, sizeof(signal),
+		     "thread_group_cputime: signal", gcore_error_handle))
+		return 0;
+
+	if (!readmem(group_leader + OFFSET(task_struct_utime), KVADDR,
+		     &utime, sizeof(utime),
+		     "thread_group_cputime: utime", gcore_error_handle))
+		return 0;
+
+	if (!readmem(group_leader + OFFSET(task_struct_stime), KVADDR,
+		     &stime, sizeof(stime),
+		     "thread_group_cputime: stime", gcore_error_handle))
+		return 0;
+
+	if (!readmem(signal + MEMBER_OFFSET("signal_struct", "utime"),
+		     KVADDR, &signal_utime, sizeof(signal_utime),
+		     "thread_group_cputime: signal_utime",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(signal + MEMBER_OFFSET("signal_struct", "stime"),
+		     KVADDR, &signal_stime, sizeof(signal_stime),
+		     "thread_group_cputime: signal_stime",
+		     gcore_error_handle))
+		return 0;
+
+	times->utime = cputime_add(utime, signal_utime);
+	times->stime = cputime_add(stime, signal_stime);
+	times->sum_exec_runtime = 0;
+
+	return 1;
+}
+
+static int
+pid_nr_ns(ulong pid, ulong ns, pid_t *nr)
+{
+	ulong upid;
+	unsigned int ns_level, pid_level;
+	pid_t ret_nr = 0;
+
+	if (!readmem(ns + MEMBER_OFFSET("pid_namespace", "level"), KVADDR,
+		     &ns_level, sizeof(ns_level), "pid_nr_ns: ns_level",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(pid + MEMBER_OFFSET("pid", "level"), KVADDR,
+		     &pid_level, sizeof(pid_level), "pid_nr_ns: pid_level",
+		     gcore_error_handle))
+		return 0;
+
+        if (pid && ns_level <= pid_level) {
+		ulong upid_ns;
+
+		upid = pid + OFFSET(pid_numbers) + SIZE(upid) * ns_level;
+
+		if (!readmem(upid + OFFSET(upid_ns), KVADDR, &upid_ns,
+			     sizeof(upid_ns), "pid_nr_ns: upid_ns",
+			     gcore_error_handle))
+			return 0;
+
+		if (upid_ns == ns) {
+			ulong upid_nr;
+
+			if (!readmem(upid + OFFSET(upid_nr), KVADDR, &upid_nr,
+				     sizeof(upid_nr), "pid_nr_ns: upid_nr",
+				     gcore_error_handle))
+				return 0;
+			ret_nr = upid_nr;
+		}
+        }
+
+	*nr = ret_nr;
+        return 1;
+}
+
+/**
+ * pid_alive - check that a task structure is not stale
+ * @p: Task structure to be checked.
+ *
+ * Test if a process is not yet dead (at most zombie state)
+ * If pid_alive fails, then pointers within the task structure
+ * can be stale and must not be dereferenced.
+ */
+static int
+pid_alive(ulong task, int *alive)
+{
+	pid_t pid;
+
+	if (!readmem(task + OFFSET(task_struct_pids) +
+		     PIDTYPE_PID * SIZE(pid_link) + OFFSET(pid_link_pid),
+		     KVADDR, &pid, sizeof(pid), "pid_alive", gcore_error_handle))
+		return 0;
+
+	*alive = !!pid;
+
+        return 1;
+}
+
+static int
+__task_pid_nr_ns(ulong task, enum pid_type type, pid_t *nr)
+{
+	ulong nsproxy, ns;
+	pid_t ret_nr = 0;
+	int alive;
+
+	if (!readmem(task + OFFSET(task_struct_nsproxy), KVADDR, &nsproxy,
+		     sizeof(nsproxy), "__task_pid_nr_ns: nsproxy",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(nsproxy + MEMBER_OFFSET("nsproxy", "pid_ns"), KVADDR, &ns,
+		     sizeof(ns), "__task_pid_nr_ns: ns", gcore_error_handle))
+		return 0;
+
+	if (!pid_alive(task, &alive))
+		return 0;
+
+	else if (alive) {
+		ulong pids_type_pid;
+
+                if (type != PIDTYPE_PID) {
+			ulong group_leader;
+
+			if (!readmem(task + MEMBER_OFFSET("task_struct",
+							  "group_leader"),
+				     KVADDR, &group_leader, sizeof(group_leader),
+				     "__task_pid_nr_ns: group_leader",
+				     gcore_error_handle))
+				return 0;
+
+                        task = group_leader;
+		}
+
+		if (!readmem(task + OFFSET(task_struct_pids) +
+			     type * SIZE(pid_link) + OFFSET(pid_link_pid),
+			     KVADDR, &pids_type_pid, sizeof(pids_type_pid),
+			     "__task_pid_nr_ns: pids_type_pid",
+			     gcore_error_handle))
+			return 0;
+
+		if (!pid_nr_ns(pids_type_pid, ns, &ret_nr))
+			return 0;
+        }
+
+	*nr = ret_nr;
+        return 1;
+}
+
+static inline int
+task_pid_vnr(ulong task, pid_t *nr)
+{
+	return __task_pid_nr_ns(task, PIDTYPE_PID, nr);
+}
+
+static inline int
+task_pgrp_vnr(ulong task, pid_t *nr)
+{
+        return __task_pid_nr_ns(task, PIDTYPE_PGID, nr);
+}
+
+static inline int
+task_session_vnr(ulong task, pid_t *nr)
+{
+        return __task_pid_nr_ns(task, PIDTYPE_SID, nr);
+}
+
+static int
+fill_prstatus(struct elf_prstatus *prstatus, ulong task, ulong signr,
+	      struct thread_group_list *tglist)
+{
+	ulong pending_signal_sig0, blocked_sig0, real_parent, group_leader,
+		signal, cutime,	cstime;
+	pid_t ppid, pid, pgrp, sid;
+
+        /* The type of (sig[0]) is unsigned long. */
+	if (!readmem(task + OFFSET(task_struct_pending) +
+		     OFFSET(sigpending_signal), KVADDR, &pending_signal_sig0,
+		     sizeof(unsigned long),
+		     "fill_prstatus: sigpending_signal_sig", gcore_error_handle))
+		return 0;
+
+	if (!readmem(task + OFFSET(task_struct_blocked), KVADDR, &blocked_sig0,
+		     sizeof(unsigned long), "fill_prstatus: blocked_sig0",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(task + OFFSET(task_struct_parent), KVADDR, &real_parent,
+		     sizeof(real_parent), "fill_prstatus: real_parent",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(task + MEMBER_OFFSET("task_struct", "group_leader"),
+		     KVADDR, &group_leader, sizeof(group_leader),
+		     "fill_prstatus: group_leader", gcore_error_handle))
+		return 0;
+
+	if (!task_pid_vnr(real_parent, &ppid))
+		return 0;
+
+	if (!task_pid_vnr(task, &pid))
+		return 0;
+
+	if (!task_pgrp_vnr(task, &pgrp))
+		return 0;
+
+	if (!task_session_vnr(task, &sid))
+		return 0;
+
+	prstatus->pr_info.si_signo = prstatus->pr_cursig = signr;
+        prstatus->pr_sigpend = pending_signal_sig0;
+        prstatus->pr_sighold = blocked_sig0;
+        prstatus->pr_ppid = ppid;
+        prstatus->pr_pid = pid;
+        prstatus->pr_pgrp = pgrp;
+        prstatus->pr_sid = sid;
+        if (task == group_leader) {
+                struct task_cputime cputime;
+
+                /*
+                 * This is the record for the group leader.  It shows the
+                 * group-wide total, not its individual thread total.
+                 */
+                thread_group_cputime(&cputime, tglist);
+                cputime_to_timeval(cputime.utime, &prstatus->pr_utime);
+                cputime_to_timeval(cputime.stime, &prstatus->pr_stime);
+        } else {
+		cputime_t utime, stime;
+
+		if (!readmem(task + OFFSET(task_struct_utime), KVADDR, &utime,
+			     sizeof(utime), "task_struct utime",
+			     gcore_error_handle))
+			return 0;
+
+		if (!readmem(task + OFFSET(task_struct_stime), KVADDR, &stime,
+			     sizeof(stime), "task_struct stime",
+			     gcore_error_handle))
+			return 0;
+
+                cputime_to_timeval(utime, &prstatus->pr_utime);
+                cputime_to_timeval(stime, &prstatus->pr_stime);
+        }
+
+	if (!readmem(task + OFFSET(task_struct_signal), KVADDR, &signal,
+		     sizeof(signal), "task_struct signal", gcore_error_handle))
+		return 0;
+
+	if (!readmem(signal + MEMBER_OFFSET("signal_struct", "cutime"), KVADDR,
+		     &cutime, sizeof(cutime), "signal_struct cutime",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(signal + MEMBER_OFFSET("signal_struct", "cstime"), KVADDR,
+		     &cstime, sizeof(cstime), "signal_struct cstime",
+		     gcore_error_handle))
+		return 0;
+
+        cputime_to_timeval(cutime, &prstatus->pr_cutime);
+        cputime_to_timeval(cstime, &prstatus->pr_cstime);
+
+	return 1;
+}
+
+static int
+fill_auxv_note(struct memelfnote *note, ulong task)
+{
+	ulong *auxv;
+	int i;
+
+	auxv = malloc(MEMBER_SIZE("mm_struct", "saved_auxv"));
+	if (!auxv)
+		return 0;
+
+	if (!readmem(task_mm(task, FALSE) +
+		     MEMBER_OFFSET("mm_struct", "saved_auxv"), KVADDR, auxv,
+		     MEMBER_SIZE("mm_struct", "saved_auxv"), "fill_auxv_note",
+		     gcore_error_handle)) {
+		free(auxv);
+		return 0;
+	}
+
+	i = 0;
+	do
+		i += 2;
+	while (auxv[i - 2] != AT_NULL);
+
+	fill_note(note, "CORE", NT_AUXV, i * sizeof(ulong), auxv);
+
+	return 1;
+}
+
+#ifdef X86_64
+static int
+ioperm_active(ulong task, int *active)
+{
+	ulong addr, io_bitmap_max;
+
+	addr = task + OFFSET(task_struct_thread) +
+		MEMBER_OFFSET("thread_struct", "io_bitmap_max");
+
+	if (!readmem(addr, KVADDR, &io_bitmap_max, sizeof(io_bitmap_max),
+		     "readmem_io_bitmap_max", gcore_error_handle))
+		return 0;
+
+	*active = io_bitmap_max / sizeof(long);
+
+	return 1;
+}
+
+static int
+fill_ioperm(ulong task, void *data)
+{
+	ulong addr, io_bitmap_ptr;
+
+	addr = task + OFFSET(task_struct_thread) +
+		MEMBER_OFFSET("thread_struct", "io_bitmap_ptr");
+
+	if (!readmem(addr, KVADDR, &io_bitmap_ptr, sizeof(io_bitmap_ptr),
+		     "fill_ioperm: io_bitmap_ptr", gcore_error_handle))
+		return 0;
+
+	if (!io_bitmap_ptr) {
+		verbose3f("I/O bitmap is missing; ioperm() may never have been called.\n");
+		return 0;
+	}
+
+	if (!readmem(io_bitmap_ptr, KVADDR, data,
+		     IO_BITMAP_LONGS * sizeof(long),
+		     "fill_ioperm: dereference io_bitmap_ptr",
+		     gcore_error_handle))
+		return 0;
+
+	return 1;
+}
+
+/* NOTE: *onbit is set to 0 or PF_USED_MATH; it is never set to 1 */
+static int
+tsk_used_math(ulong task, int *onbit)
+{
+	ulong flags;
+
+	if (!readmem(task + OFFSET(task_struct_flags), KVADDR, &flags,
+		     sizeof(flags), "tsk_used_math", gcore_error_handle))
+		return 0;
+
+	*onbit = flags & PF_USED_MATH;
+
+	return 1;
+}
+
+static int
+boot_cpu_has(int feature, int *has)
+{
+	u32 x86_capability[NCAPINTS];
+
+	if (!symbol_exists("boot_cpu_data"))
+		return 0;
+
+	if (!readmem(symbol_value("boot_cpu_data") +
+		     MEMBER_OFFSET("cpuinfo_x86", "x86_capability"), KVADDR,
+		     &x86_capability, sizeof(x86_capability), "cpu_has_xsave",
+		     gcore_error_handle))
+		return 0;
+
+	*has = ((1UL << (feature % 32)) & x86_capability[feature / 32]) != 0;
+
+	return 1;
+}
+
+static inline int
+cpu_has_xsave(int *has)
+{
+	return boot_cpu_has(X86_FEATURE_XSAVE, has);
+}
+
+static inline int
+cpu_has_fxsr(int *has)
+{
+	return boot_cpu_has(X86_FEATURE_FXSR, has);
+}
+
+static inline int
+cpu_has_xmm(int *has)
+{
+	return boot_cpu_has(X86_FEATURE_XMM, has);
+}
+
+static inline int
+have_hwfp(int *have)
+{
+	*have = 1;
+
+	return 1;
+}
+
+static int
+task_used_fpu(ulong task, int *used)
+{
+	u32 status;
+
+	if (!readmem(task_to_context(task)->thread_info +
+		     MEMBER_OFFSET("thread_info", "status"), KVADDR,
+		     &status, sizeof(u32), "task_used_fpu: status",
+		     gcore_error_handle))
+		return 0;
+
+	*used = status & TS_USEDFPU;
+
+	return 1;
+}
+
+/*
+ * The _current_ task is using the FPU for the first time
+ * so initialize it and set the mxcsr to its default
+ * value at reset if we support XMM instructions and then
+ * remember the current task has used the FPU.
+ */
+static int
+init_fpu(ulong task, union thread_xstate *xstate)
+{
+	int used_math, has;
+	size_t xstate_size;
+
+        if (!tsk_used_math(task, &used_math))
+		return 0;
+	else if (used_math) {
+		if (is_task_active(task)) {
+			int used_fpu;
+
+			if (!task_used_fpu(task, &used_fpu))
+				return 0;
+			else if (used_fpu) {
+				/*
+				 * The FPU values contained within
+				 * thread->xstate may differ from what
+				 * was contained at crash timing, but
+				 * crash dump cannot restore the
+				 * runtime FPU state, here I only warn
+				 * that.
+				 */
+				error(INFO, "FPU state may not be the latest.\n");
+			}
+		}
+		return 1;
+        }
+
+	xstate_size = symbol_value("xstate_size");
+
+        if (!cpu_has_fxsr(&has))
+		return 0;
+	else if (has) {
+                struct i387_fxsave_struct *fx = &xstate->fxsave;
+
+                memset(fx, 0, xstate_size);
+                fx->cwd = 0x37f;
+		if (!cpu_has_xmm(&has))
+			return 0;
+		else if (has)
+                        fx->mxcsr = MXCSR_DEFAULT;
+        } else {
+                struct i387_fsave_struct *fp = &xstate->fsave;
+                memset(fp, 0, xstate_size);
+                fp->cwd = 0xffff037fu;
+                fp->swd = 0xffff0000u;
+                fp->twd = 0xffffffffu;
+                fp->fos = 0xffff0000u;
+        }
+
+        return 1;
+}
+
+static int
+fill_xstate(ulong task, union thread_xstate *xstate)
+{
+        int has;
+	ulong xstate_fx_sw_bytes;
+
+        if (!cpu_has_xsave(&has))
+		return 0;
+	else if (!has)
+		return 1;
+
+	if (!readmem(task + OFFSET(task_struct_thread) +
+		     MEMBER_OFFSET("thread_struct", "xstate"), KVADDR,
+		     xstate, sizeof(union thread_xstate), "fill_xstate: thread",
+		     gcore_error_handle))
+		return 0;
+
+        if (!init_fpu(task, xstate))
+		return 0;
+
+	/*
+	 * XXX:
+	 *
+	 * I assume that cpu_has_xsave() implies the existence of
+	 * xstate_fx_sw_bytes. Is that right?
+	 */
+	xstate_fx_sw_bytes = symbol_value("xstate_fx_sw_bytes");
+
+        /*
+         * Copy the 48 bytes defined by software first into the xstate
+         * memory layout in the thread struct, so that we can copy the
+         * entire xstateregs to the user using one user_regset_copyout().
+         */
+	if (!readmem(xstate_fx_sw_bytes, KVADDR, &xstate->fxsave.sw_reserved,
+		     USER_XSTATE_FX_SW_WORDS * sizeof(u64),
+		     "fill_xstate: sw_reserved", gcore_error_handle))
+		return 0;
+
+        return 1;
+}
+
+/*
+ * fpregs_active() and xfpregs_active() behave differently from their
+ * counterparts in the Linux kernel: the functions here return a
+ * boolean, while the kernel counterparts return the number of
+ * register sets.
+ *
+ * Here we don't need the actual number; it suffices to guarantee
+ * existence.
+ */
+static int
+fpregs_active(ulong task, int *active)
+{
+	int used;
+
+	if (!tsk_used_math(task, &used))
+		return 0;
+
+	*active = used;
+
+	return 1;
+}
+
+static int
+xfpregs_active(ulong task, int *active)
+{
+	int has, used;
+
+	if (!cpu_has_fxsr(&has))
+		return 0;
+
+	if (!tsk_used_math(task, &used))
+		return 0;
+
+	*active = has && used;
+
+	return 1;
+}
+
+static int
+fill_xfpregs(ulong task, union thread_xstate *xstate)
+{
+	if (!readmem(task + OFFSET(task_struct_thread) +
+		     MEMBER_OFFSET("thread_struct", "xstate"), KVADDR,
+		     xstate, sizeof(union thread_xstate), "fill_xfpregs: thread",
+		     gcore_error_handle))
+		return 0;
+
+        if (!init_fpu(task, xstate))
+		return 0;
+
+        return 1;
+}
+
+static inline int
+xstateregs_active(ulong task, int *active)
+{
+	return fpregs_active(task, active);
+}
+#endif /* X86_64 */
+
+static int
+task_nice(ulong task, int *nice)
+{
+	int static_prio;
+
+	if (!readmem(task + MEMBER_OFFSET("task_struct", "static_prio"),
+		     KVADDR, &static_prio, sizeof(static_prio),
+		     "task_nice: static_prio", gcore_error_handle))
+		return 0;
+
+	*nice = PRIO_TO_NICE(static_prio);
+
+	return 1;
+}
+
+static int
+fill_psinfo(struct elf_prpsinfo *psinfo, ulong task)
+{
+	ulong arg_start, arg_end, real_parent, cred;
+	pid_t ppid, pid, pgrp, sid;
+	int nice;
+	long state;
+        unsigned int i, len, flags;
+	uid_t uid, cred_uid;
+	gid_t gid, cred_gid;
+	char *mm_cache;
+
+        /* first copy the parameters from user space */
+        memset(psinfo, 0, sizeof(struct elf_prpsinfo));
+
+	mm_cache = fill_mm_struct(task_mm(task, FALSE));
+	if (!mm_cache)
+		return 0;
+
+	arg_start = ULONG(mm_cache + MEMBER_OFFSET("mm_struct", "arg_start"));
+	arg_end = ULONG(mm_cache + MEMBER_OFFSET("mm_struct", "arg_end"));
+
+        len = arg_end - arg_start;
+        if (len >= ELF_PRARGSZ)
+                len = ELF_PRARGSZ-1;
+	if (!readmem(arg_start, UVADDR, &psinfo->pr_psargs, len,
+		     "fill_psinfo: pr_psargs", gcore_error_handle))
+		return 0;
+        for(i = 0; i < len; i++)
+                if (psinfo->pr_psargs[i] == 0)
+                        psinfo->pr_psargs[i] = ' ';
+        psinfo->pr_psargs[len] = 0;
+
+	if (!readmem(task + MEMBER_OFFSET("task_struct", "real_parent"), KVADDR,
+		     &real_parent, sizeof(real_parent),
+		     "fill_psinfo: real_parent", gcore_error_handle))
+		return 0;
+
+	if (!task_pid_vnr(real_parent, &ppid))
+		return 0;
+
+	if (!task_pid_vnr(task, &pid))
+		return 0;
+
+	if (!task_pgrp_vnr(task, &pgrp))
+		return 0;
+
+	if (!task_session_vnr(task, &sid))
+		return 0;
+
+        psinfo->pr_ppid = ppid;
+        psinfo->pr_pid = pid;
+        psinfo->pr_pgrp = pgrp;
+        psinfo->pr_sid = sid;
+
+	if (!readmem(task + OFFSET(task_struct_state), KVADDR, &state,
+		     sizeof(state), "fill_psinfo: state", gcore_error_handle))
+		return 0;
+
+        i = state ? ffz(~state) + 1 : 0;
+        psinfo->pr_state = i;
+        psinfo->pr_sname = (i > 5) ? '.' : "RSDTZW"[i];
+        psinfo->pr_zomb = psinfo->pr_sname == 'Z';
+
+	if (!task_nice(task, &nice))
+		return 0;
+	else
+		psinfo->pr_nice = nice;
+
+	if (!readmem(task + OFFSET(task_struct_flags), KVADDR, &flags,
+		     sizeof(flags), "fill_psinfo: flags", gcore_error_handle))
+		return 0;
+
+        psinfo->pr_flag = flags;
+
+	if (!readmem(task + MEMBER_OFFSET("task_struct", "real_cred"),
+		     KVADDR, &cred, sizeof(cred),
+		     "fill_psinfo: real_cred", gcore_error_handle))
+		return 0;
+
+	if (!readmem(cred + MEMBER_OFFSET("cred", "uid"), KVADDR,
+		     &cred_uid, sizeof(cred_uid),
+		     "fill_psinfo: cred_uid", gcore_error_handle))
+		return 0;
+
+	if (!readmem(cred + MEMBER_OFFSET("cred", "gid"), KVADDR,
+		     &cred_gid, sizeof(cred_gid),
+		     "fill_psinfo: cred_gid", gcore_error_handle))
+		return 0;
+
+	uid = cred_uid;
+	gid = cred_gid;
+
+	SET_UID(psinfo->pr_uid, uid);
+	SET_GID(psinfo->pr_gid, gid);
+
+	if (!readmem(task + OFFSET(task_struct_comm), KVADDR, &psinfo->pr_fname,
+		     TASK_COMM_LEN, "fill_psinfo: comm", gcore_error_handle))
+		return 0;
+
+        return 1;
+}
+
+static int
+notesize(struct memelfnote *en)
+{
+        int sz;
+
+        sz = sizeof(Elf_Nhdr);
+        sz += roundup(strlen(en->name) + 1, 4);
+        sz += roundup(en->datasz, 4);
+
+        return sz;
+}
+
+static void
+fill_note(struct memelfnote *note, const char *name, int type, unsigned int sz,
+	  void *data)
+{
+        note->name = name;
+        note->type = type;
+	note->datasz = sz;
+        note->data = data;
+        return;
+}
+
+static int
+alignfile(int fd, off_t *foffset)
+{
+        static const char buffer[4] = {};
+	const size_t len = roundup(*foffset, 4) - *foffset;
+
+	if ((size_t)write(fd, buffer, len) != len)
+		return 0;
+	*foffset += (off_t)len;
+
+        return 1;
+}
+
+static int
+writenote(struct memelfnote *men, int fd, off_t *foffset)
+{
+        const Elf_Nhdr en = {
+		.n_namesz = strlen(men->name) + 1,
+		.n_descsz = men->datasz,
+		.n_type   = men->type,
+	};
+
+	if (write(fd, &en, sizeof(en)) != sizeof(en))
+		return 0;
+	*foffset += sizeof(en);
+
+	if (write(fd, men->name, en.n_namesz) != en.n_namesz)
+		return 0;
+	*foffset += en.n_namesz;
+
+        if (!alignfile(fd, foffset))
+                return 0;
+
+	if (write(fd, men->data, men->datasz) != men->datasz)
+		return 0;
+	*foffset += men->datasz;
+
+        if (!alignfile(fd, foffset))
+                return 0;
+
+        return 1;
+}
+
+static int
+test_tsk_thread_flag(ulong task, int bit, int *bool)
+{
+	ulong thread_info, flags;
+
+	thread_info = task_to_thread_info(task);
+
+	if (!readmem(thread_info + OFFSET(thread_info_flags), KVADDR, &flags,
+		     sizeof(flags), "test_tsk_thread_flag: flags",
+		     gcore_error_handle))
+		return 0;
+
+	*bool = (1UL << bit) & flags;
+
+	return 1;
+}
+
+static int
+get_desc_base(ulong desc, ulong *base)
+{
+	u16 base0;
+	unsigned int base1, base2;
+
+	if (!readmem(desc + MEMBER_OFFSET("desc_struct", "base0"), KVADDR,
+		     &base0, sizeof(base0), "get_desc_base: base0",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(desc + MEMBER_OFFSET("desc_struct", "base1"), KVADDR,
+		     &base1, sizeof(base1), "get_desc_base: base1",
+		     gcore_error_handle))
+		return 0;
+
+	if (!readmem(desc + MEMBER_OFFSET("desc_struct", "base2"), KVADDR,
+		     &base2, sizeof(base2), "get_desc_base: base2",
+		     gcore_error_handle))
+		return 0;
+
+	*base = base0 | ((base1 & 0xff) << 16) | ((base2 & 0xff) << 24);
+
+        return 1;
+}
+
+static int
+get_pt_regs_from_stacktop(ulong task, struct pt_regs *regs)
+{
+	if (!readmem(machdep->get_stacktop(task) - sizeof(struct pt_regs),
+		     KVADDR, regs, sizeof(struct pt_regs),
+		     "get_pt_regs: pt_regs", gcore_error_handle))
+		return 0;
+
+	return 1;
+}
+
+/*
+ * bx, r12, r13, r14 and r15 would need restoring here, but only bp
+ * can be restored without resorting to DWARF CFI.
+ */
+static inline int
+restore_rest(struct pt_regs *regs)
+{
+	verbose23f("bx, r12, r13, r14 and r15 are bogus.\n");
+	return 1;
+}
+
+static int
+fill_pr_reg(ulong task, elf_gregset_t pr_reg)
+{
+	unsigned long usersp, rsp;
+	struct user_regs_struct *regs = (struct user_regs_struct *)pr_reg;
+
+	/*
+	 * vmcore generated by kdump contains NT_PRSTATUS including
+	 * general register values for active tasks.
+	 */
+	if (is_task_active(task) && KDUMP_DUMPFILE()) {
+		if (!get_x86_64_user_regs_struct_from_elf_notes(task,
+								(ulong **)
+								&regs))
+			goto error;
+		/*
+		 * If the task was in kernel mode at the time of the
+		 * crash, the note information is not what we want.
+		 */
+		if (regs->cs & 0x3)
+			goto end;
+	}
+
+	if (!get_pt_regs_from_stacktop(task, (struct pt_regs *)regs))
+		goto error;
+
+	/*
+	 * regs->orig_ax holds a system call number if >= 0, or the
+	 * bitwise NOT of an IRQ vector if < 0.
+	 */
+	if ((int)regs->orig_ax >= 0) {
+		int nr_syscall = (int)regs->orig_ax;
+
+		/*
+		 * On X86_64, a user-mode stack pointer is saved in
+		 * per-CPU old_rsp variable, which is again saved at
+		 * __switch_to() in thread->usersp.
+		 */
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "usersp"), KVADDR,
+			     &usersp, sizeof(usersp),
+			     "fill_pr_reg: usersp", gcore_error_handle))
+			goto error;
+
+		regs->sp = usersp;
+
+		/*
+		 * clone(), fork(), vfork(), sigaltstack(), iopl(),
+		 * execve() and rt_sigreturn() save the full set of
+		 * register values.
+		 */
+		if (nr_syscall == __NR_clone
+		    || nr_syscall == __NR_fork
+		    || nr_syscall == __NR_vfork
+		    || nr_syscall == __NR_sigaltstack
+		    || nr_syscall == __NR_iopl
+		    || nr_syscall == __NR_execve
+		    || nr_syscall == __NR_rt_sigreturn) {
+			goto end;
+
+		} else if (!restore_rest((struct pt_regs *)regs)) {
+			goto error;
+		}
+
+	} else {
+		int vector = (int)~regs->orig_ax;
+
+		if (vector < 0 || vector > 255)
+			verbose23f("unexpected IRQ number: %d.\n", vector);
+
+                /* Exceptions and NMI */
+		else if (vector < 20)
+			goto end;
+
+                /* reserved by Intel */
+		else if (vector < 32)
+			verbose23f("IRQ number %d is reserved by Intel\n", vector);
+
+                /* Maskable interrupts */
+		else if (vector < 256) {
+			if (!restore_rest((struct pt_regs *)regs))
+				goto error;
+
+		}
+
+	}
+
+	/*
+	 * rsp is saved in task->thread.sp during switch_to().
+	 */
+	if (!readmem(task + OFFSET(task_struct_thread) +
+		     OFFSET(thread_struct_rsp), KVADDR, &rsp, sizeof(rsp),
+		     "fill_pr_reg: rsp", gcore_error_handle))
+		goto error;
+
+	if (!readmem(rsp, KVADDR, &regs->bp, sizeof(regs->bp),
+		     "fill_pr_reg: regs->bp", gcore_error_handle))
+		goto error;
+
+	/*
+	 * Walk back through saved frame pointers to the last rbp in
+	 * user mode.
+	 */
+	while (IS_KVADDR(regs->bp)) {
+		if (!readmem(regs->bp, KVADDR, &regs->bp, sizeof(regs->bp),
+			     "fill_pr_reg: regs->bp", gcore_error_handle))
+			goto error;
+	}
+
+	{
+		unsigned int seg;
+		ulong fs;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "fs"), KVADDR, &fs,
+			     MEMBER_SIZE("thread_struct", "fs"),
+			     "fill_pr_reg: fs", gcore_error_handle))
+			goto error;
+
+		if (fs != 0) {
+			regs->fs_base = fs;
+			goto end_fs_base;
+		}
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "fsindex"), KVADDR,
+			     &seg, MEMBER_SIZE("thread_struct", "fsindex"),
+			     "fill_pr_reg: fsindex", gcore_error_handle))
+			goto error;
+
+		if (seg != FS_TLS_SEL) {
+			regs->fs_base = 0;
+			goto end_fs_base;
+		}
+
+		if (!get_desc_base(task + OFFSET(task_struct_thread) +
+				   FS_TLS * SIZE(desc_struct), &regs->fs_base))
+			goto error;
+	}
+end_fs_base:
+
+	{
+		unsigned int seg;
+		ulong gs;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "gsindex"), KVADDR,
+			     &seg, MEMBER_SIZE("thread_struct", "gsindex"),
+			     "fill_pr_reg: gsindex", gcore_error_handle))
+			goto error;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "gs"), KVADDR,
+			     &gs, MEMBER_SIZE("thread_struct", "gs"),
+			     "fill_pr_reg: gs", gcore_error_handle))
+			goto error;
+
+		if (gs) {
+			regs->gs_base = gs;
+			goto end_gs_base;
+		}
+
+                if (seg != GS_TLS_SEL) {
+			regs->gs_base = 0;
+			goto end_gs_base;
+		}
+
+		if (!get_desc_base(task + OFFSET(task_struct_thread) +
+				   GS_TLS * SIZE(desc_struct), &regs->gs_base))
+			goto error;
+        }
+end_gs_base:
+
+	{
+		int onbit;
+
+		if (!test_tsk_thread_flag(task, TIF_FORCED_TF, &onbit))
+			goto error;
+
+		if (onbit)
+			regs->flags &= ~X86_EFLAGS_TF;
+	}
+
+	{
+		unsigned short seg;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "fsindex"), KVADDR,
+			     &seg, sizeof(seg), "fill_pr_reg: fsindex",
+			     gcore_error_handle))
+			goto error;
+
+		regs->fs = seg;
+	}
+
+	{
+		unsigned short seg;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "gsindex"), KVADDR,
+			     &seg, sizeof(seg), "fill_pr_reg: gsindex",
+			     gcore_error_handle))
+			goto error;
+
+		regs->gs = seg;
+	}
+
+	{
+		unsigned short seg;
+
+		if (!readmem(task + OFFSET(task_struct_thread) +
+			     MEMBER_OFFSET("thread_struct", "es"), KVADDR,
+			     &seg, sizeof(seg), "fill_pr_reg: es",
+			     gcore_error_handle))
+			goto error;
+
+		regs->es = seg;
+	}
+
+end:
+	return 1;
+
+error:
+	return 0;
+}
+
+static int
+fill_thread_core_info(struct elf_thread_core_info *t, long signr, size_t *total,
+		      struct thread_group_list *tglist)
+{
+	int active;
+
+        /*
+         * NT_PRSTATUS is the one special case, because the regset
+         * data goes into the pr_reg field inside the note contents,
+         * rather than being the whole note contents.  We fill the
+         * rest in here.  We assume that regset 0 is NT_PRSTATUS.
+         */
+        if (!fill_prstatus(&t->prstatus, t->task, signr, tglist))
+		return 0;
+	if (!fill_pr_reg(t->task, t->prstatus.pr_reg))
+		return 0;
+	fill_note(&t->notes[0], "CORE", NT_PRSTATUS, sizeof(t->prstatus),
+		  &t->prstatus);
+	*total += notesize(&t->notes[0]);
+
+	if (!xfpregs_active(t->task, &active))
+		return 0;
+	else if (active) {
+		union thread_xstate *xstate;
+
+		xstate = malloc(sizeof(union thread_xstate));
+		if (!xstate)
+			return 0;
+		if (!fill_xfpregs(t->task, xstate)) {
+			free(xstate);
+			return 0;
+		}
+		fill_note(&t->notes[1], "CORE", NT_PRFPREG,
+			  sizeof(xstate->fsave), &xstate->fsave);
+		*total += notesize(&t->notes[1]);
+	}
+
+	if (!xstateregs_active(t->task, &active))
+		return 0;
+	else if (active) {
+		union thread_xstate *xstate;
+
+		xstate = malloc(sizeof(*xstate));
+		if (!xstate)
+			return 0;
+		if (!fill_xstate(t->task, xstate)) {
+			free(xstate);
+			return 0;
+		}
+		fill_note(&t->notes[2], "LINUX", NT_X86_XSTATE,
+			  sizeof(xstate->xsave), &xstate->xsave);
+		*total += notesize(&t->notes[2]);
+	}
+
+	if (!ioperm_active(t->task, &active))
+		return 0;
+	else if (active) {
+		void *data;
+
+		data = malloc(IO_BITMAP_LONGS * sizeof(long));
+		if (!data)
+			return 0;
+		if (!fill_ioperm(t->task, data)) {
+			free(data);
+			return 0;
+		}
+		fill_note(&t->notes[3], "LINUX", NT_386_IOPERM,
+			  IO_BITMAP_LONGS * sizeof(long), data);
+		*total += notesize(&t->notes[3]);
+	}
+
+	return 1;
+}
+
+static int
+fill_note_info(struct elf_note_info *info, long signr,
+	       struct thread_group_list *tglist)
+{
+	struct thread_group_list *l;
+	struct elf_thread_core_info *t;
+	struct elf_prpsinfo *psinfo = NULL;
+	ulong dump_task;
+
+	info->size = 0;
+	info->thread = NULL;
+
+	psinfo = malloc(sizeof(struct elf_prpsinfo));
+        if (!psinfo)
+                return 0;
+
+        fill_note(&info->psinfo, "CORE", NT_PRPSINFO, sizeof(struct elf_prpsinfo), psinfo);
+
+	/* head task is always a dump target */
+	dump_task = tglist->task;
+
+	/* The number of per-thread notes is fixed here and must be
+	 * kept consistent with the hard-coded list below.
+	 *
+	 * Currently, notes[] includes the note information below, in
+	 * the order the list indicates:
+         *  [0] NT_PRSTATUS
+         *  [1] NT_PRFPREG
+	 *  [2] NT_X86_XSTATE
+         *  [3] NT_386_IOPERM
+	 */
+	info->thread_notes = 4;
+
+	/* allocate data structures for each thread's information */
+	for (l = tglist; l; l = l->next) {
+		struct elf_thread_core_info *new;
+		size_t entry_size;
+
+		entry_size = offsetof(struct elf_thread_core_info,
+				notes[info->thread_notes]);
+		new = malloc(entry_size);
+		if (!new)
+			return 0;
+		memset(new, 0, entry_size);
+		new->task = l->task;
+		if (!info->thread || l->task == dump_task) {
+			new->next = info->thread;
+			info->thread = new;
+		} else {
+			/* keep dump_task in the head position */
+			new->next = info->thread->next;
+			info->thread->next = new;
+		}
+	}
+
+	for (t = info->thread; t; t = t->next)
+                if (!fill_thread_core_info(t, signr, &info->size, tglist))
+                        return 0;
+
+        /*
+	 * Fill in the two process-wide notes.
+         */
+        if (!fill_psinfo(psinfo, dump_task))
+		return 0;
+        info->size += notesize(&info->psinfo);
+
+	if (!fill_auxv_note(&info->auxv, dump_task))
+		return 0;
+	info->size += notesize(&info->auxv);
+
+	return 1;
+}
+
+static int
+write_note_info(int fd, struct elf_note_info *info, off_t *foffset)
+{
+        int first = 1;
+        struct elf_thread_core_info *t = info->thread;
+
+        do {
+                int i;
+
+                if (!writenote(&t->notes[0], fd, foffset))
+                        return 0;
+
+                if (first && !writenote(&info->psinfo, fd, foffset))
+                        return 0;
+
+		if (first && !writenote(&info->auxv, fd, foffset))
+			return 0;
+
+                for (i = 1; i < info->thread_notes; ++i)
+                        if (t->notes[i].data &&
+                            !writenote(&t->notes[i], fd, foffset))
+                                return 0;
+
+                first = 0;
+                t = t->next;
+        } while (t);
+
+        return 1;
+}
+
+static size_t
+get_note_info_size(struct elf_note_info *info)
+{
+	return info->size;
+}
+
+static void
+free_note_info(struct elf_note_info *info)
+{
+	struct elf_thread_core_info *t, *prev;
+	const int thread_notes = info->thread_notes;
+	int i;
+
+	t = info->thread;
+	while (t) {
+		prev = t;
+		t = t->next;
+		for (i = 1; i < thread_notes; i++) {
+			free(prev->notes[i].data);
+		}
+		free(prev);
+	}
+}
+
+static int vma_file_i_nlink(ulong vm_file, ulong *i_nlink)
+{
+	ulong dentry, d_inode;
+
+	if (!readmem(vm_file + OFFSET(file_f_path) + OFFSET(path_dentry),
+		     KVADDR, &dentry, sizeof(dentry),
+		     "vma_file_i_nlink: dentry", gcore_error_handle))
+		return 0;
+
+	if (!readmem(dentry + OFFSET(dentry_d_inode), KVADDR, &d_inode,
+		     sizeof(d_inode), "vma_file_i_nlink: d_inode",
+		     gcore_error_handle))
+		return 0;
+
+	*i_nlink = 0;
+	if (!readmem(d_inode + MEMBER_OFFSET("inode", "i_nlink"), KVADDR,
+		     i_nlink, MEMBER_SIZE("inode", "i_nlink"),
+		     "vma_file_i_nlink: i_nlink",
+		     gcore_error_handle))
+		return 0;
+
+	return 1;
+}
+
+
+/*
+ * Decide what to dump of a segment, part, all or none.
+ */
+static int
+vma_dump_size(ulong vma, ulong *size)
+{
+#define FILTER(type)    (gcore_filter & (1UL << MMF_DUMP_##type))
+
+	char *vma_cache;
+	ulong vm_start, vm_end, vm_flags, vm_file, vm_pgoff, anon_vma;
+
+	vma_cache = fill_vma_cache(vma);
+	vm_start = ULONG(vma_cache + OFFSET(vm_area_struct_vm_start));
+	vm_end = ULONG(vma_cache + OFFSET(vm_area_struct_vm_end));
+	vm_flags = ULONG(vma_cache + OFFSET(vm_area_struct_vm_flags));
+	vm_file = ULONG(vma_cache + OFFSET(vm_area_struct_vm_file));
+	vm_pgoff = ULONG(vma_cache + OFFSET(vm_area_struct_vm_pgoff));
+	anon_vma = ULONG(vma_cache +
+			 MEMBER_OFFSET("vm_area_struct", "anon_vma"));
+
+        /* The vma can be set up to tell us the answer directly.  */
+        if (vm_flags & VM_ALWAYSDUMP)
+                goto whole;
+
+        /* Hugetlb memory check */
+	if (vm_flags & VM_HUGETLB)
+		if ((vm_flags & VM_SHARED)
+		    ? FILTER(HUGETLB_SHARED)
+		    : FILTER(HUGETLB_PRIVATE))
+			goto whole;
+
+        /* Do not dump I/O mapped devices or special mappings */
+        if (vm_flags & (VM_IO | VM_RESERVED))
+		goto nothing;
+
+        /* By default, dump shared memory if mapped from an anonymous file. */
+        if (vm_flags & VM_SHARED) {
+		ulong i_nlink;
+
+		if (!vma_file_i_nlink(vm_file, &i_nlink))
+			goto error;
+
+                if (i_nlink == 0 ? FILTER(ANON_SHARED) : FILTER(MAPPED_SHARED))
+                        goto whole;
+
+		goto nothing;
+        }
+
+        /* Dump segments that have been written to.  */
+        if (anon_vma && FILTER(ANON_PRIVATE))
+                goto whole;
+        if (!vm_file)
+		goto nothing;
+
+        if (FILTER(MAPPED_PRIVATE))
+                goto whole;
+
+        /*
+         * If this looks like the beginning of a DSO or executable mapping,
+         * check for an ELF header.  If we find one, dump the first page to
+         * aid in determining what was mapped here.
+         */
+        if (FILTER(ELF_HEADERS) &&
+            vm_pgoff == 0 && (vm_flags & VM_READ)) {
+		ulong header = vm_start;
+		uint32_t word;
+                /*
+                 * Doing it this way gets the constant folded by GCC.
+                 */
+                union {
+                        uint32_t cmp;
+                        char elfmag[SELFMAG];
+                } magic;
+                magic.elfmag[EI_MAG0] = ELFMAG0;
+                magic.elfmag[EI_MAG1] = ELFMAG1;
+                magic.elfmag[EI_MAG2] = ELFMAG2;
+                magic.elfmag[EI_MAG3] = ELFMAG3;
+		if (!readmem(header, UVADDR, &word, sizeof(magic.elfmag),
+			     "read ELF page", gcore_error_handle))
+			goto error;
+                if (word == magic.cmp)
+			goto pagesize;
+        }
+
+#undef  FILTER
+
+nothing:
+	*size = 0;
+        return 1;
+
+whole:
+	*size = vm_end - vm_start;
+        return 1;
+
+pagesize:
+	*size = PAGE_SIZE;
+	return 1;
+
+error:
+	return 0;
+}
+
+static int
+fill_thread_group(struct thread_group_list **tglist,
+		  const struct task_context *tc)
+{
+	ulong i;
+	struct task_context *t;
+	struct thread_group_list *l;
+	const uint tgid = task_tgid(tc->task);
+	const ulong lead_pid = tc->pid;
+
+	t = FIRST_CONTEXT();
+	l = NULL;
+	for (i = 0; i < RUNNING_TASKS(); i++, t++) {
+		if (task_tgid(t->task) == tgid) {
+			struct thread_group_list *new;
+
+			new = malloc(sizeof(struct thread_group_list));
+			if (!new)
+				return 0;
+
+			if (t->pid == lead_pid || !l) {
+				new->task = t->task;
+				new->next = l;
+				l = new;
+			} else if (l) {
+				new->task = t->task;
+				new->next = l->next;
+				l->next = new;
+			}
+		}
+	}
+	*tglist = l;
+
+	return 1;
+}
+
+static void
+free_thread_group(struct thread_group_list *tglist)
+{
+	struct thread_group_list *next, *l;
+
+	if (!tglist)
+		return;
+
+	for (l = tglist; l; l = next) {
+		next = l->next;
+		free(l);
+	}
+}
+
+static int
+format_corename(char *corename, struct task_context *tc)
+{
+	int n;
+
+	n = snprintf(corename, CORENAME_MAX_SIZE + 1, "core.%lu",
+		     task_tgid(tc->task));
+
+	return n >= 0;
+}
+
+static void
+fill_headers(Elf_Ehdr *elf, Elf_Shdr *shdr0, int segs)
+{
+	memset(elf, 0, sizeof(Elf_Ehdr));
+	memcpy(elf->e_ident, ELFMAG, SELFMAG);
+	elf->e_ident[EI_CLASS] = ELF_CLASS;
+	elf->e_ident[EI_DATA] = ELF_DATA;
+	elf->e_ident[EI_VERSION] = EV_CURRENT;
+	elf->e_ident[EI_OSABI] = ELF_OSABI;
+	elf->e_ehsize = sizeof(Elf_Ehdr);
+	elf->e_phentsize = sizeof(Elf_Phdr);
+	elf->e_phnum = segs >= PN_XNUM ? PN_XNUM : segs;
+	if (elf->e_phnum == PN_XNUM) {
+		elf->e_shoff = elf->e_phentsize;
+		elf->e_shentsize = sizeof(Elf_Shdr);
+		elf->e_shnum = 1;
+		elf->e_shstrndx = SHN_UNDEF;
+	}
+	elf->e_type = ET_CORE;
+	elf->e_machine = ELF_MACHINE;
+	elf->e_version = EV_CURRENT;
+	elf->e_phoff = sizeof(Elf_Ehdr) + elf->e_shentsize * elf->e_shnum;
+	elf->e_flags = ELF_CORE_EFLAGS;
+
+	if (elf->e_phnum == PN_XNUM) {
+		memset(shdr0, 0, sizeof(Elf_Shdr));
+		shdr0->sh_type = SHT_NULL;
+		shdr0->sh_size = elf->e_shnum;
+		shdr0->sh_link = elf->e_shstrndx;
+		shdr0->sh_info = segs;
+	}
+}
+
+static ulong next_vma(ulong this_vma)
+{
+	return ULONG(fill_vma_cache(this_vma) + OFFSET(vm_area_struct_vm_next));
+}
+
+static int
+write_elf_note_phdr(int fd, size_t size, off_t *offset)
+{
+	Elf_Phdr phdr;
+
+	memset(&phdr, 0, sizeof(phdr));
+
+        phdr.p_type = PT_NOTE;
+        phdr.p_offset = *offset;
+        phdr.p_filesz = size;
+
+	*offset += size;
+
+	return write(fd, &phdr, sizeof(phdr)) == sizeof(phdr);
+}
+
+void
+do_gcore(struct task_context *tc)
+{
+	const long signr = 0;
+	char corename[CORENAME_MAX_SIZE + 1] = {};
+	struct thread_group_list *tglist = NULL;
+	struct elf_note_info info;
+	Elf_Ehdr elf;
+	Elf_Shdr shdr0;
+	int fd, map_count, segs;
+	char *mm_cache;
+	ulong mm, mmap, vma;
+	off_t offset, foffset, dataoff;
+	struct task_context *orig_tc = NULL;
+	char *zero_page_buffer = NULL, *buffer = NULL;
+
+	memset(&info, 0, sizeof(info));
+
+	if (tc != CURRENT_CONTEXT()) {
+		orig_tc = CURRENT_CONTEXT();
+		if (!set_context(tc->task, tc->pid))
+			goto fail;
+	}
+
+	tt->last_mm_read = 1;
+	mm = task_mm(tc->task, TRUE);
+	if (!mm) {
+		if (!IS_LAST_MM_READ(0))
+			verbose3f("The user memory space does not exist.\n");
+		goto fail;
+	}
+	mm_cache = fill_mm_struct(mm);
+
+	verbose23f("Restoring the thread group ... ");
+	if (!fill_thread_group(&tglist, tc)) {
+		verbose23f("failed.\n");
+		goto fail;
+	}
+	verbose23f("done.\n");
+
+	verbose23f("Retrieving note information ... ");
+	if (!fill_note_info(&info, signr, tglist)) {
+		verbose23f("failed.\n");
+		goto fail;
+	}
+	verbose23f("done.\n");
+
+	mmap = ULONG(mm_cache + OFFSET(mm_struct_mmap));
+	map_count = INT(mm_cache + MEMBER_OFFSET("mm_struct", "map_count"));
+
+	segs = NR_NOTES + map_count;
+
+	fill_headers(&elf, &shdr0, segs);
+
+	if (!format_corename(corename, tc))
+		goto fail;
+
+	verbose23f("Opening file %s ... ", corename);
+	fd = open(corename, O_WRONLY|O_TRUNC|O_CREAT, S_IRUSR|S_IWUSR);
+	if (fd < 0) {
+		verbose23f("failed.\n");
+		goto fail;
+	}
+	verbose23f("done.\n");
+
+	verbose23f("Writing ELF header ... ");
+	if (write(fd, &elf, sizeof(elf)) != sizeof(elf)) {
+		verbose23f(" failed.\n");
+		goto fail_close;
+	}
+	verbose23f(" done.\n");
+
+	if (elf.e_phnum == PN_XNUM) {
+		verbose23f("Writing section header table ... ");
+		if (write(fd, &shdr0, sizeof(shdr0)) != sizeof(shdr0)) {
+			verbose23f("failed.\n");
+			goto fail_close;
+		}
+		verbose23f("done.\n");
+	}
+
+	offset = elf.e_ehsize +
+		(elf.e_phnum == PN_XNUM ? elf.e_shnum * elf.e_shentsize : 0) +
+		segs * elf.e_phentsize;
+	foffset = offset;
+
+	verbose23f("Writing a program header for PT_NOTE ... ");
+	if (!write_elf_note_phdr(fd, get_note_info_size(&info), &offset)) {
+		verbose23f("failed.\n");
+		goto fail_close;
+	}
+	verbose23f("done.\n");
+
+	dataoff = offset = roundup(offset, ELF_EXEC_PAGESIZE);
+
+	verbose23f("Writing program headers for PT_LOAD ... ");
+	for (vma = mmap; vma; vma = next_vma(vma)) {
+		char *vma_cache;
+		ulong vm_start, vm_end, vm_flags, dump_size;
+		Elf_Phdr phdr;
+
+		vma_cache = fill_vma_cache(vma);
+		vm_start = ULONG(vma_cache + OFFSET(vm_area_struct_vm_start));
+		vm_end = ULONG(vma_cache + OFFSET(vm_area_struct_vm_end));
+		vm_flags = ULONG(vma_cache + OFFSET(vm_area_struct_vm_flags));
+
+		if (!vma_dump_size(vma, &dump_size))
+			goto fail_close;
+
+		phdr.p_type = PT_LOAD;
+		phdr.p_offset = offset;
+		phdr.p_vaddr = vm_start;
+		phdr.p_paddr = 0;
+		phdr.p_filesz = dump_size;
+		phdr.p_memsz = vm_end - vm_start;
+		phdr.p_flags = vm_flags & VM_READ ? PF_R : 0;
+		if (vm_flags & VM_WRITE)
+			phdr.p_flags |= PF_W;
+		if (vm_flags & VM_EXEC)
+			phdr.p_flags |= PF_X;
+		phdr.p_align = ELF_EXEC_PAGESIZE;
+
+		offset += phdr.p_filesz;
+
+		if (write(fd, &phdr, sizeof(phdr)) != sizeof(phdr)) {
+			verbose23f("failed.\n");
+			goto fail_close;
+		}
+	}
+	verbose23f("done.\n");
+
+	verbose23f("Writing segment corresponding to PT_NOTE ... ");
+	if (!write_note_info(fd, &info, &foffset)) {
+		verbose23f("failed.\n");
+		goto fail_close;
+	}
+	verbose23f("done.\n");
+
+	zero_page_buffer = malloc(PAGE_SIZE);
+	if (!zero_page_buffer)
+		goto fail_close;
+	memset(zero_page_buffer, 0, PAGE_SIZE);
+
+	{
+		size_t len;
+
+		len = dataoff - foffset;
+		if ((size_t)write(fd, zero_page_buffer, len) != len)
+			goto fail_close;
+	}
+
+	buffer = malloc(PAGE_SIZE);
+	if (!buffer)
+		goto fail_close;
+
+	verbose23f("Writing segment corresponding to PT_LOAD ... ");
+	for (vma = mmap; vma; vma = next_vma(vma)) {
+		char *vma_cache;
+		ulong addr, end, vm_start, dump_size;
+
+		vma_cache = fill_vma_cache(vma);
+		vm_start = ULONG(vma_cache + OFFSET(vm_area_struct_vm_start));
+
+		if (!vma_dump_size(vma, &dump_size))
+			goto fail_close;
+
+		end = vm_start + dump_size;
+
+		for (addr = vm_start; addr < end; addr += PAGE_SIZE) {
+			physaddr_t paddr;
+
+			if (uvtop(tc, addr, &paddr, FALSE)) {
+				if (!readmem(paddr, PHYSADDR, buffer,
+					     PAGE_SIZE, "readmem vma list",
+					     gcore_error_handle)) {
+					verbose23f("failed.\n");
+					goto fail_close;
+				}
+			} else {
+				verbose3f("address translation failed at 0x%lx;"
+					  " filling in with zeroes instead.\n",
+					  addr);
+				memset(buffer, 0, PAGE_SIZE);
+			}
+
+			if (write(fd, buffer, PAGE_SIZE) != PAGE_SIZE) {
+				verbose23f("failed.\n");
+				goto fail_close;
+			}
+		}
+	}
+	verbose23f("done.\n");
+
+fail_close:
+	close(fd);
+
+fail:
+	free(buffer);
+	free(zero_page_buffer);
+	free_thread_group(tglist);
+	free_note_info(&info);
+	if (orig_tc && !set_context(orig_tc->task, orig_tc->pid))
+		error(WARNING, "failed to resume the original task\n");
+
+	if (corename[0])
+		fprintf(fp, "Saved %s\n", corename);
+}
+#endif /* X86_64 */
diff --git a/extensions/libgcore/2.6.34/x86_64/gcore.h b/extensions/libgcore/2.6.34/x86_64/gcore.h
new file mode 100644
index 0000000..cb59815
--- /dev/null
+++ b/extensions/libgcore/2.6.34/x86_64/gcore.h
@@ -0,0 +1,651 @@
+/* gcore.h -- core analysis suite
+ *
+ * Copyright (C) 2010 FUJITSU LIMITED
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#ifndef GCORE_H_
+#define GCORE_H_
+
+#include <elf.h>
+#include <unistd.h>
+
+#ifdef X86_64
+enum { BITS_PER_LONGS = 64 };
+#endif
+
+/*
+ * Notes used in ET_CORE. Architectures export some of the arch register sets
+ * using the corresponding note types via the PTRACE_GETREGSET and
+ * PTRACE_SETREGSET requests.
+ */
+#define NT_PRSTATUS     1
+#define NT_PRFPREG      2
+#define NT_PRPSINFO     3
+#define NT_TASKSTRUCT   4
+#define NT_AUXV         6
+#define NT_PRXFPREG     0x46e62b7f      /* copied from gdb5.1/include/elf/common.h */
+#ifdef X86_64
+#define NT_386_TLS      0x200           /* i386 TLS slots (struct user_desc) */
+#define NT_386_IOPERM   0x201           /* x86 io permission bitmap (1=deny) */
+#define NT_X86_XSTATE   0x202           /* x86 extended state using xsave */
+#endif /* X86_64 */
+
+#ifdef X86_64
+typedef unsigned char      u8;
+typedef unsigned short int u16;
+typedef unsigned int       u32;
+typedef unsigned long      u64;
+#endif /* X86_64 */
+
+enum {NR_NOTES = 1};
+
+#define PN_XNUM 0xffff
+
+#define ELF_CORE_EFLAGS 0
+
+#ifdef X86_64
+#define ELF_EXEC_PAGESIZE 4096
+
+#define ELF_MACHINE EM_X86_64
+#define ELF_OSABI ELFOSABI_NONE
+
+#define ELF_CLASS ELFCLASS64
+#define ELF_DATA ELFDATA2LSB
+#define ELF_ARCH EM_X86_64
+
+#define Elf_Half Elf64_Half
+#define Elf_Word Elf64_Word
+#define Elf_Off Elf64_Off
+
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Phdr Elf64_Phdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Nhdr Elf64_Nhdr
+#endif /* X86_64 */
+
+/* Task command name length */
+#define TASK_COMM_LEN 16
+
+#define CORENAME_MAX_SIZE 128
+
+#ifdef X86_64
+
+/*
+ * Size of io_bitmap.
+ */
+#define IO_BITMAP_BITS  65536
+#define IO_BITMAP_BYTES (IO_BITMAP_BITS/8)
+#define IO_BITMAP_LONGS (IO_BITMAP_BYTES/sizeof(long))
+#define IO_BITMAP_OFFSET offsetof(struct tss_struct,io_bitmap)
+
+#define NCAPINTS	9	/* N 32-bit words worth of info */
+
+#define X86_FEATURE_FXSR        (0*32+24) /* FXSAVE/FXRSTOR, CR4.OSFXSR */
+#define X86_FEATURE_XMM         (0*32+25) /* "sse" */
+#define X86_FEATURE_XSAVE       (4*32+26) /* XSAVE/XRSTOR/XSETBV/XGETBV */
+
+/*
+ * Per process flags
+ */
+#define PF_USED_MATH    0x00002000      /* if unset the fpu must be initialized before use */
+
+/* Symbolic values for the entries in the auxiliary table
+   put on the initial stack */
+#define AT_NULL   0	/* end of vector */
+
+#define AT_VECTOR_SIZE  44 /* Size of auxiliary table.  */
+
+/*
+ * we cannot use the same code segment descriptor for user and kernel
+ * -- not even in the long flat mode, because of different DPL /kkeil
+ * The segment offset needs to contain a RPL. Grr. -AK
+ * GDT layout to get 64bit syscall right (sysret hardcodes gdt offsets)
+ */
+#define GDT_ENTRY_TLS_MIN 12
+#define GDT_ENTRY_TLS_ENTRIES 3
+
+/* TLS indexes for 64bit - hardcoded in arch_prctl */
+#define FS_TLS 0
+#define GS_TLS 1
+
+#define GS_TLS_SEL ((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
+#define FS_TLS_SEL ((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
+
+/*
+ * EFLAGS bits
+ */
+#define X86_EFLAGS_TF   0x00000100 /* Trap Flag */
+
+/*
+ * thread information flags
+ * - these are process state flags that various assembly files
+ *   may need to access
+ * - pending work-to-be-done flags are in LSW
+ * - other flags in MSW
+ * Warning: layout of LSW is hardcoded in entry.S
+ */
+#define TIF_FORCED_TF           24      /* true if TF in eflags artificially */
+
+/*
+ * FIXME: Accessing the desc_struct through its fields is more elegant,
+ * and should be the one valid thing to do. However, a lot of open code
+ * still touches the a and b accessors, and doing this allows us to do it
+ * incrementally. We keep the signature as a struct, rather than a union,
+ * so we can get rid of it transparently in the future -- glommer
+ */
+/* 8 byte segment descriptor */
+struct desc_struct {
+        union {
+                struct {
+                        unsigned int a;
+                        unsigned int b;
+                };
+                struct {
+                        u16 limit0;
+                        u16 base0;
+                        unsigned base1: 8, type: 4, s: 1, dpl: 2, p: 1;
+                        unsigned limit: 4, avl: 1, l: 1, d: 1, g: 1, base2: 8;
+                };
+        };
+} __attribute__((packed));
+
+#endif /* X86_64 */
+
+enum pid_type
+{
+        PIDTYPE_PID,
+        PIDTYPE_PGID,
+        PIDTYPE_SID,
+        PIDTYPE_MAX
+};
+
+struct elf_siginfo
+{
+        int     si_signo;                       /* signal number */
+        int     si_code;                        /* extra code */
+        int     si_errno;                       /* errno */
+};
+
+#ifdef X86_64
+
+#define USER_XSTATE_FX_SW_WORDS 6
+
+#define MXCSR_DEFAULT           0x1f80
+
+/* This matches the 64bit FXSAVE format as defined by AMD. It is the same
+   as the 32bit format defined by Intel, except that the selector:offset pairs for
+   data and eip are replaced with flat 64bit pointers. */ 
+struct user_i387_struct {
+	unsigned short	cwd;
+	unsigned short	swd;
+	unsigned short	twd; /* Note this is not the same as the 32bit/x87/FSAVE twd */
+	unsigned short	fop;
+	u64	rip;
+	u64	rdp;
+	u32	mxcsr;
+	u32	mxcsr_mask;
+	u32	st_space[32];	/* 8*16 bytes for each FP-reg = 128 bytes */
+	u32	xmm_space[64];	/* 16*16 bytes for each XMM-reg = 256 bytes */
+	u32	padding[24];
+};
+
+struct i387_fsave_struct {
+        u32                     cwd;    /* FPU Control Word             */
+        u32                     swd;    /* FPU Status Word              */
+        u32                     twd;    /* FPU Tag Word                 */
+        u32                     fip;    /* FPU IP Offset                */
+        u32                     fcs;    /* FPU IP Selector              */
+        u32                     foo;    /* FPU Operand Pointer Offset   */
+        u32                     fos;    /* FPU Operand Pointer Selector */
+
+        /* 8*10 bytes for each FP-reg = 80 bytes:                       */
+        u32                     st_space[20];
+
+        /* Software status information [not touched by FSAVE ]:         */
+        u32                     status;
+};
+
+struct i387_fxsave_struct {
+        u16                     cwd; /* Control Word                    */
+        u16                     swd; /* Status Word                     */
+        u16                     twd; /* Tag Word                        */
+        u16                     fop; /* Last Instruction Opcode         */
+        union {
+                struct {
+                        u64     rip; /* Instruction Pointer             */
+                        u64     rdp; /* Data Pointer                    */
+                };
+                struct {
+                        u32     fip; /* FPU IP Offset                   */
+                        u32     fcs; /* FPU IP Selector                 */
+                        u32     foo; /* FPU Operand Offset              */
+                        u32     fos; /* FPU Operand Selector            */
+                };
+        };
+        u32                     mxcsr;          /* MXCSR Register State */
+        u32                     mxcsr_mask;     /* MXCSR Mask           */
+
+        /* 8*16 bytes for each FP-reg = 128 bytes:                      */
+        u32                     st_space[32];
+
+        /* 16*16 bytes for each XMM-reg = 256 bytes:                    */
+        u32                     xmm_space[64];
+
+        u32                     padding[12];
+
+        union {
+                u32             padding1[12];
+                u32             sw_reserved[12];
+        };
+
+} __attribute__((aligned(16)));
+
+struct i387_soft_struct {
+        u32                     cwd;
+        u32                     swd;
+        u32                     twd;
+        u32                     fip;
+        u32                     fcs;
+        u32                     foo;
+        u32                     fos;
+        /* 8*10 bytes for each FP-reg = 80 bytes: */
+        u32                     st_space[20];
+        u8                      ftop;
+        u8                      changed;
+        u8                      lookahead;
+        u8                      no_update;
+        u8                      rm;
+        u8                      alimit;
+        struct math_emu_info    *info;
+        u32                     entry_eip;
+};
+
+struct ymmh_struct {
+        /* 16 * 16 bytes for each YMMH-reg = 256 bytes */
+        u32 ymmh_space[64];
+};
+
+struct xsave_hdr_struct {
+        u64 xstate_bv;
+        u64 reserved1[2];
+        u64 reserved2[5];
+} __attribute__((packed));
+
+struct xsave_struct {
+        struct i387_fxsave_struct i387;
+        struct xsave_hdr_struct xsave_hdr;
+        struct ymmh_struct ymmh;
+        /* new processor state extensions will go here */
+} __attribute__ ((packed, aligned (64)));
+
+union thread_xstate {
+        struct i387_fsave_struct        fsave;
+        struct i387_fxsave_struct       fxsave;
+        struct i387_soft_struct         soft;
+        struct xsave_struct             xsave;
+};
+
+struct pt_regs {
+        unsigned long r15;
+        unsigned long r14;
+        unsigned long r13;
+        unsigned long r12;
+        unsigned long bp;
+        unsigned long bx;
+/* arguments: non-interrupt/non-tracing syscalls only save up to here */
+        unsigned long r11;
+        unsigned long r10;
+        unsigned long r9;
+        unsigned long r8;
+        unsigned long ax;
+        unsigned long cx;
+        unsigned long dx;
+        unsigned long si;
+        unsigned long di;
+        unsigned long orig_ax;
+/* end of arguments */
+/* cpu exception frame or undefined */
+        unsigned long ip;
+        unsigned long cs;
+        unsigned long flags;
+        unsigned long sp;
+        unsigned long ss;
+/* top of stack page */
+};
+
+/*
+ * User-mode register layout in core dumps.
+ */
+struct user_regs_struct {
+	unsigned long	r15;
+	unsigned long	r14;
+	unsigned long	r13;
+	unsigned long	r12;
+	unsigned long	bp;
+	unsigned long	bx;
+	unsigned long	r11;
+	unsigned long	r10;
+	unsigned long	r9;
+	unsigned long	r8;
+	unsigned long	ax;
+	unsigned long	cx;
+	unsigned long	dx;
+	unsigned long	si;
+	unsigned long	di;
+	unsigned long	orig_ax;
+	unsigned long	ip;
+	unsigned long	cs;
+	unsigned long	flags;
+	unsigned long	sp;
+	unsigned long	ss;
+	unsigned long	fs_base;
+	unsigned long	gs_base;
+	unsigned long	ds;
+	unsigned long	es;
+	unsigned long	fs;
+	unsigned long	gs;
+};
+
+#define REGNUM(reg)					\
+	((offsetof(struct user_regs_struct, reg)) / sizeof(unsigned long))
+
+#endif /* X86_64 */
+
+typedef ulong elf_greg_t;
+
+#define ELF_NGREG (sizeof(struct user_regs_struct) / sizeof(elf_greg_t))
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+/* Parameters used to convert the timespec values: */
+#define NSEC_PER_USEC   1000L
+#define NSEC_PER_SEC    1000000000L
+
+/* The clock frequency of the i8253/i8254 PIT */
+#define PIT_TICK_RATE 1193182ul
+
+/* Assume we use the PIT time source for the clock tick */
+#define CLOCK_TICK_RATE         PIT_TICK_RATE
+
+/* LATCH is used in the interval timer and ftape setup. */
+#define LATCH  ((CLOCK_TICK_RATE + HZ/2) / HZ)  /* For divider */
+
+/* Suppose we want to divide two numbers NOM and DEN: NOM/DEN, then we can
+ * improve accuracy by shifting LSH bits, hence calculating:
+ *     (NOM << LSH) / DEN
+ * This however means trouble for large NOM, because (NOM << LSH) may no
+ * longer fit in 32 bits. The following way of calculating this gives us
+ * some slack, under the following conditions:
+ *   - (NOM / DEN) fits in (32 - LSH) bits.
+ *   - (NOM % DEN) fits in (32 - LSH) bits.
+ */
+#define SH_DIV(NOM,DEN,LSH) (   (((NOM) / (DEN)) << (LSH))              \
+				+ ((((NOM) % (DEN)) << (LSH)) + (DEN) / 2) / (DEN))
+
+/* HZ is the requested value. ACTHZ is actual HZ ("<< 8" is for accuracy) */
+#define ACTHZ (SH_DIV (CLOCK_TICK_RATE, LATCH, 8))
+
+/* TICK_NSEC is the time between ticks in nsec assuming real ACTHZ */
+#define TICK_NSEC (SH_DIV (1000000UL * 1000, ACTHZ, 8))
+
+#define cputime_add(__a, __b)           ((__a) +  (__b))
+#define cputime_sub(__a, __b)           ((__a) -  (__b))
+
+typedef unsigned long cputime_t;
+
+#define cputime_zero                    (0UL)
+
+/**
+ * struct task_cputime - collected CPU time counts
+ * @utime:              time spent in user mode, in &cputime_t units
+ * @stime:              time spent in kernel mode, in &cputime_t units
+ * @sum_exec_runtime:   total time spent on the CPU, in nanoseconds
+ *
+ * This structure groups together three kinds of CPU time that are
+ * tracked for threads and thread groups.  Most things considering
+ * CPU time want to group these counts together and treat all three
+ * of them in parallel.
+ */
+struct task_cputime {
+        cputime_t utime;
+        cputime_t stime;
+        unsigned long long sum_exec_runtime;
+};
+
+#define INIT_CPUTIME    \
+        (struct task_cputime) {                                 \
+                .utime = cputime_zero,                          \
+                .stime = cputime_zero,                          \
+                .sum_exec_runtime = 0,                          \
+        }
+
+/**
+ * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder
+ *
+ * This is commonly provided by 32bit archs to provide an optimized 64bit
+ * divide.
+ */
+static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
+{
+        *remainder = dividend % divisor;
+        return dividend / divisor;
+}
+
+static inline void
+jiffies_to_timeval(const unsigned long jiffies, struct timeval *value)
+{
+        /*
+         * Convert jiffies to nanoseconds and separate with
+         * one divide.
+         */
+        u32 rem;
+
+        value->tv_sec = div_u64_rem((u64)jiffies * TICK_NSEC,
+                                    NSEC_PER_SEC, &rem);
+        value->tv_usec = rem / NSEC_PER_USEC;
+}
+
+#define cputime_to_timeval(__ct,__val)  jiffies_to_timeval(__ct,__val)
+
+/*
+ * Definitions to generate Intel SVR4-like core files.
+ * These mostly have the same names as the SVR4 types with "elf_"
+ * tacked on the front to prevent clashes with linux definitions,
+ * and the typedef forms have been avoided.  This is mostly like
+ * the SVR4 structure, but more Linuxy, with things that Linux does
+ * not support and which gdb doesn't really use excluded.
+ * Fields present but not used are marked with "XXX".
+ */
+struct elf_prstatus
+{
+	struct elf_siginfo pr_info;	/* Info associated with signal */
+	short	pr_cursig;		/* Current signal */
+	unsigned long pr_sigpend;	/* Set of pending signals */
+	unsigned long pr_sighold;	/* Set of held signals */
+	int	pr_pid;
+	int	pr_ppid;
+	int	pr_pgrp;
+	int	pr_sid;
+	struct timeval pr_utime;	/* User time */
+	struct timeval pr_stime;	/* System time */
+	struct timeval pr_cutime;	/* Cumulative user time */
+	struct timeval pr_cstime;	/* Cumulative system time */
+	elf_gregset_t pr_reg;	/* GP registers */
+	int pr_fpvalid;		/* True if math co-processor being used.  */
+};
+
+#ifdef X86_64
+typedef unsigned short __kernel_old_uid_t;
+typedef unsigned short __kernel_old_gid_t;
+
+typedef __kernel_old_uid_t      old_uid_t;
+typedef __kernel_old_gid_t      old_gid_t;
+#endif /* X86_64 */
+
+#define overflowuid (symbol_exists("overflowuid"))
+#define overflowgid (symbol_exists("overflowgid"))
+
+/* prevent uid mod 65536 effect by returning a default value for high UIDs */
+#define high2lowuid(uid) ((uid) & ~0xFFFF ? (old_uid_t)overflowuid : (old_uid_t)(uid))
+#define high2lowgid(gid) ((gid) & ~0xFFFF ? (old_gid_t)overflowgid : (old_gid_t)(gid))
+
+#define __convert_uid(size, uid) \
+        (size >= sizeof(uid) ? (uid) : high2lowuid(uid))
+#define __convert_gid(size, gid) \
+        (size >= sizeof(gid) ? (gid) : high2lowgid(gid))
+
+/* uid/gid input should be always 32bit uid_t */
+#define SET_UID(var, uid) do { (var) = __convert_uid(sizeof(var), (uid)); } while (0)
+#define SET_GID(var, gid) do { (var) = __convert_gid(sizeof(var), (gid)); } while (0)
+
+/*
+ * Priority of a process goes from 0..MAX_PRIO-1, valid RT
+ * priority is 0..MAX_RT_PRIO-1, and SCHED_NORMAL/SCHED_BATCH
+ * tasks are in the range MAX_RT_PRIO..MAX_PRIO-1. Priority
+ * values are inverted: lower p->prio value means higher priority.
+ *
+ * The MAX_USER_RT_PRIO value allows the actual maximum RT priority to
+ * be separate from the value exported to user-space.  This allows
+ * kernel threads to set their priority to a value higher than any
+ * user task. Note: MAX_RT_PRIO must not be smaller than
+ * MAX_USER_RT_PRIO.
+ */
+
+#define MAX_USER_RT_PRIO        100
+#define MAX_RT_PRIO             MAX_USER_RT_PRIO
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
+ * and back.
+ */
+#define PRIO_TO_NICE(prio)      ((prio) - MAX_RT_PRIO - 20)
+#define TASK_NICE(p)            PRIO_TO_NICE((p)->static_prio)
+
+#ifdef X86_64
+/**
+ * ffz - find first zero bit in word
+ * @word: The word to search
+ *
+ * Undefined if no zero exists, so code should check against ~0UL first.
+ */
+static inline unsigned long
+ffz(unsigned long word)
+{
+        asm("bsf %1,%0"
+            : "=r" (word)
+            : "r" (~word));
+        return word;
+}
+#endif /* X86_64 */
+
+#ifdef X86_64
+typedef unsigned int __kernel_uid_t;
+typedef unsigned int __kernel_gid_t;
+#endif /* X86_64 */
+
+#define ELF_PRARGSZ     (80)    /* Number of chars for args */
+
+struct elf_prpsinfo
+{
+        char    pr_state;       /* numeric process state */
+        char    pr_sname;       /* char for pr_state */
+        char    pr_zomb;        /* zombie */
+        char    pr_nice;        /* nice val */
+        unsigned long pr_flag;  /* flags */
+        __kernel_uid_t  pr_uid;
+        __kernel_gid_t  pr_gid;
+        pid_t   pr_pid, pr_ppid, pr_pgrp, pr_sid;
+        /* Lots missing */
+        char    pr_fname[16];   /* filename of executable */
+        char    pr_psargs[ELF_PRARGSZ]; /* initial part of arg list */
+};
+
+/* An ELF note in memory */
+struct memelfnote
+{
+	const char *name;
+	int type;
+	unsigned int datasz;
+	void *data;
+};
+
+struct thread_group_list {
+	struct thread_group_list *next;
+	ulong task;
+};
+
+struct elf_thread_core_info {
+	struct elf_thread_core_info *next;
+	ulong task;
+	struct elf_prstatus prstatus;
+	struct memelfnote notes[0];
+};
+
+struct elf_note_info {
+	struct elf_thread_core_info *thread;
+	struct memelfnote psinfo;
+	struct memelfnote auxv;
+	size_t size;
+	int thread_notes;
+};
+
+/*
+ * gcore filter bits
+ *
+ * Originally derived from the coredump filter bits in the Linux kernel.
+ */
+#define MMF_DUMP_ANON_PRIVATE    0
+#define MMF_DUMP_ANON_SHARED     1
+#define MMF_DUMP_MAPPED_PRIVATE  2
+#define MMF_DUMP_MAPPED_SHARED   3
+#define MMF_DUMP_ELF_HEADERS     4
+#define MMF_DUMP_HUGETLB_PRIVATE 5
+#define MMF_DUMP_HUGETLB_SHARED  6
+
+/*
+ * vm_flags in vm_area_struct, see mm_types.h.
+ */
+#define VM_READ		0x00000001	/* currently active flags */
+#define VM_WRITE	0x00000002
+#define VM_EXEC		0x00000004
+#define VM_SHARED	0x00000008
+
+#define VM_IO           0x00004000      /* Memory mapped I/O or similar */
+
+#define VM_RESERVED     0x00080000      /* Count as reserved_vm like IO */
+#define VM_HUGETLB      0x00400000      /* Huge TLB Page VM */
+#define VM_ALWAYSDUMP   0x04000000      /* Always include in core dumps */
+
+#define GCORE_FILTER_DEFAULT 0x23
+
+/*
+ * Thread-synchronous status.
+ *
+ * This is different from the flags in that nobody else
+ * ever touches our thread-synchronous status, so we don't
+ * have to worry about atomic accesses.
+ */
+#define TS_USEDFPU		0x0001	/* FPU was used by this task
+					   this quantum (SMP) */
+
+#ifdef X86_64
+
+/* x86_64 native syscall numbers */
+#define __NR_fork		 57
+#define __NR_execve		 59
+#define __NR_iopl		172
+#define __NR_clone		 56
+#define __NR_rt_sigreturn	 15
+#define __NR_sigaltstack	131
+#define __NR_vfork		 58
+
+#endif /* X86_64 */
+
+#endif /* GCORE_H_ */
diff --git a/netdump.c b/netdump.c
index 3a5db13..af40474 100644
--- a/netdump.c
+++ b/netdump.c
@@ -2786,3 +2786,30 @@ get_ppc64_regs_from_elf_notes(struct task_context *tc)
 	
 	return pt_regs;
 }
+
+int
+get_x86_64_user_regs_struct_from_elf_notes(ulong task, ulong **regs)
+{
+	Elf64_Nhdr *note;
+	size_t len;
+	extern struct vmcore_data *nd;
+	struct task_context *tc;
+
+	tc = task_to_context(task);
+
+	if (tc->processor >= nd->num_prstatus_notes)
+		return 0;
+
+	if (nd->num_prstatus_notes > 1) {
+		note = (Elf64_Nhdr *)nd->nt_prstatus_percpu[tc->processor];
+	} else {
+		note = (Elf64_Nhdr *)nd->nt_prstatus;
+	}
+
+	len = sizeof(Elf64_Nhdr);
+	len = roundup(len + note->n_namesz, 4);
+	*regs = (void *)((char *)note + len +
+			 MEMBER_OFFSET("elf_prstatus", "pr_reg"));
+
+	return 1;
+}
diff --git a/tools.c b/tools.c
index 22f8bbd..93fd98a 100755
--- a/tools.c
+++ b/tools.c
@@ -2234,7 +2234,6 @@ cmd_set(void)
                                	    *diskdump_flags & ZERO_EXCLUDED ? 
 					"on" : "off");
 			return;
-
 		} else if (XEN_HYPER_MODE()) {
 			error(FATAL, "invalid argument for the Xen hypervisor\n");
 		} else if (runtime) {
--
Crash-utility mailing list
Crash-utility@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/crash-utility
