[RFC,v3,3/8] Add basic Linux kernel support

Message ID 20170316165739.88524-4-prudo@linux.vnet.ibm.com
State New, archived
Headers

Commit Message

Philipp Rudo March 16, 2017, 4:57 p.m. UTC
  This patch implements a basic target_ops for Linux kernel support.  In
particular it models Linux tasks as GDB threads, so that you are able to
switch to a given thread, get backtraces, disassemble the current frame,
etc.

Currently the target_ops is designed to work only with static targets, i.e.
dumps.  Thus it lacks implementations of hooks like to_wait, to_resume or
to_store_registers.  Furthermore the mapping between a CPU and the
task_struct of the running task is only done once at initialization.  See
the cover letter for a detailed discussion.

Nevertheless I made some design decisions differently from Peter [1] which
are worth discussing.  In particular, storing the private data in a htab
(or std::unordered_map if I had the time...) instead of in global variables
makes the code much nicer and less memory hungry.

[1] https://sourceware.org/ml/gdb-patches/2016-12/msg00382.html

gdb/ChangeLog:

    * gdbarch.sh (lk_init_private): New hook.
    * gdbarch.h: Regenerated.
    * gdbarch.c: Regenerated.
    * lk-low.h: New file.
    * lk-low.c: New file.
    * lk-lists.h: New file.
    * lk-lists.c: New file.
    * Makefile.in (SFILES, ALLDEPFILES): Add lk-low.c and lk-lists.c.
    (HFILES_NO_SRCDIR): Add lk-low.h and lk-lists.h.
    (ALL_TARGET_OBS): Add lk-low.o and lk-lists.o.
    * configure.tgt (lk_target_obs): New variable with object files for Linux
      kernel support.
      (s390*-*-linux*): Add lk_target_obs.
---
 gdb/Makefile.in   |   8 +
 gdb/configure.tgt |   6 +-
 gdb/gdbarch.c     |  31 ++
 gdb/gdbarch.h     |   7 +
 gdb/gdbarch.sh    |   4 +
 gdb/lk-lists.c    |  47 +++
 gdb/lk-lists.h    |  56 ++++
 gdb/lk-low.c      | 833 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 gdb/lk-low.h      | 310 ++++++++++++++++++++
 9 files changed, 1301 insertions(+), 1 deletion(-)
 create mode 100644 gdb/lk-lists.c
 create mode 100644 gdb/lk-lists.h
 create mode 100644 gdb/lk-low.c
 create mode 100644 gdb/lk-low.h
  

Comments

Omair Javaid April 16, 2017, 10:58 p.m. UTC | #1
Hi Philipp,

I like your handling of Linux kernel data structures, though I haven't
been able to get your code working on ARM yet.

There are some challenges with regard to live debugging support which
I am still trying to figure out.  There is no reliable way to distinguish
between a kernel direct-mapped address, a vmalloc address and a module
address when a user-space address may also be in play.

Also, there is no way to switch between strata, which we would need if
we want to support switching between user space and kernel space.

As far as this patch is concerned, there are no major issues that would
block me from making further progress towards live debugging support.

I have compiled this patch with ARM support on top and overall it
looks good.  See some minor inline comments.

Yao: Kindly check if there are any coding convention or styling issues here.

PS: I have not looked at module support or s390 target specific code in detail.

Thanks!

--
Omair


On 16 March 2017 at 21:57, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> This patch implements a basic target_ops for Linux kernel support.  In
> particular it models Linux tasks as GDB threads, so that you are able to
> switch to a given thread, get backtraces, disassemble the current frame,
> etc.
>
> Currently the target_ops is designed to work only with static targets, i.e.
> dumps.  Thus it lacks implementations of hooks like to_wait, to_resume or
> to_store_registers.  Furthermore the mapping between a CPU and the
> task_struct of the running task is only done once at initialization.  See
> the cover letter for a detailed discussion.
>
> Nevertheless I made some design decisions differently from Peter [1] which
> are worth discussing.  In particular, storing the private data in a htab
> (or std::unordered_map if I had the time...) instead of in global variables
> makes the code much nicer and less memory hungry.
>
> [1] https://sourceware.org/ml/gdb-patches/2016-12/msg00382.html
>
> gdb/ChangeLog:
>
>     * gdbarch.sh (lk_init_private): New hook.
>     * gdbarch.h: Regenerated.
>     * gdbarch.c: Regenerated.
>     * lk-low.h: New file.
>     * lk-low.c: New file.
>     * lk-lists.h: New file.
>     * lk-lists.c: New file.
>     * Makefile.in (SFILES, ALLDEPFILES): Add lk-low.c and lk-lists.c.
>     (HFILES_NO_SRCDIR): Add lk-low.h and lk-lists.h.
>     (ALL_TARGET_OBS): Add lk-low.o and lk-lists.o.
>     * configure.tgt (lk_target_obs): New variable with object files for Linux
>       kernel support.
>       (s390*-*-linux*): Add lk_target_obs.
> ---
>  gdb/Makefile.in   |   8 +
>  gdb/configure.tgt |   6 +-
>  gdb/gdbarch.c     |  31 ++
>  gdb/gdbarch.h     |   7 +
>  gdb/gdbarch.sh    |   4 +
>  gdb/lk-lists.c    |  47 +++
>  gdb/lk-lists.h    |  56 ++++
>  gdb/lk-low.c      | 833 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  gdb/lk-low.h      | 310 ++++++++++++++++++++
>  9 files changed, 1301 insertions(+), 1 deletion(-)
>  create mode 100644 gdb/lk-lists.c
>  create mode 100644 gdb/lk-lists.h
>  create mode 100644 gdb/lk-low.c
>  create mode 100644 gdb/lk-low.h
>
> diff --git a/gdb/Makefile.in b/gdb/Makefile.in
> index 0818742..9387c66 100644
> --- a/gdb/Makefile.in
> +++ b/gdb/Makefile.in
> @@ -817,6 +817,8 @@ ALL_TARGET_OBS = \
>         iq2000-tdep.o \
>         linux-record.o \
>         linux-tdep.o \
> +       lk-lists.o \
> +       lk-low.o \
>         lm32-tdep.o \
>         m32c-tdep.o \
>         m32r-linux-tdep.o \
> @@ -1103,6 +1105,8 @@ SFILES = \
>         jit.c \
>         language.c \
>         linespec.c \
> +       lk-lists.c \
> +       lk-low.c \
>         location.c \
>         m2-exp.y \
>         m2-lang.c \
> @@ -1350,6 +1354,8 @@ HFILES_NO_SRCDIR = \
>         linux-nat.h \
>         linux-record.h \
>         linux-tdep.h \
> +       lk-lists.h \
> +       lk-low.h \
>         location.h \
>         m2-lang.h \
>         m32r-tdep.h \
> @@ -2547,6 +2553,8 @@ ALLDEPFILES = \
>         linux-fork.c \
>         linux-record.c \
>         linux-tdep.c \
> +       lk-lists.c \
> +       lk-low.c \
>         lm32-tdep.c \
>         m32r-linux-nat.c \
>         m32r-linux-tdep.c \
> diff --git a/gdb/configure.tgt b/gdb/configure.tgt
> index cb909e7..8d87fea 100644
> --- a/gdb/configure.tgt
> +++ b/gdb/configure.tgt
> @@ -34,6 +34,10 @@ case $targ in
>      ;;
>  esac
>
> +# List of object files for Linux kernel support.  To be included into *-linux*
> +# targets which support Linux kernel debugging.
> +lk_target_obs="lk-lists.o lk-low.o"
> +
>  # map target info into gdb names.
>
>  case "${targ}" in
> @@ -479,7 +483,7 @@ powerpc*-*-*)
>  s390*-*-linux*)
>         # Target: S390 running Linux
>         gdb_target_obs="s390-linux-tdep.o solib-svr4.o linux-tdep.o \
> -                       linux-record.o"
> +                       linux-record.o ${lk_target_obs}"
>         build_gdbserver=yes
>         ;;
>
> diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
> index 87eafb2..5509a6c 100644
> --- a/gdb/gdbarch.c
> +++ b/gdb/gdbarch.c
> @@ -349,6 +349,7 @@ struct gdbarch
>    gdbarch_addressable_memory_unit_size_ftype *addressable_memory_unit_size;
>    char ** disassembler_options;
>    const disasm_options_t * valid_disassembler_options;
> +  gdbarch_lk_init_private_ftype *lk_init_private;
>  };
>
>  /* Create a new ``struct gdbarch'' based on information provided by
> @@ -1139,6 +1140,12 @@ gdbarch_dump (struct gdbarch *gdbarch, struct ui_file *file)
>                        "gdbarch_dump: iterate_over_regset_sections = <%s>\n",
>                        host_address_to_string (gdbarch->iterate_over_regset_sections));
>    fprintf_unfiltered (file,
> +                      "gdbarch_dump: gdbarch_lk_init_private_p() = %d\n",
> +                      gdbarch_lk_init_private_p (gdbarch));
> +  fprintf_unfiltered (file,
> +                      "gdbarch_dump: lk_init_private = <%s>\n",
> +                      host_address_to_string (gdbarch->lk_init_private));
> +  fprintf_unfiltered (file,
>                        "gdbarch_dump: long_bit = %s\n",
>                        plongest (gdbarch->long_bit));
>    fprintf_unfiltered (file,
> @@ -5008,6 +5015,30 @@ set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch,
>    gdbarch->valid_disassembler_options = valid_disassembler_options;
>  }
>
> +int
> +gdbarch_lk_init_private_p (struct gdbarch *gdbarch)
> +{
> +  gdb_assert (gdbarch != NULL);
> +  return gdbarch->lk_init_private != NULL;
> +}
> +
> +void
> +gdbarch_lk_init_private (struct gdbarch *gdbarch)
> +{
> +  gdb_assert (gdbarch != NULL);
> +  gdb_assert (gdbarch->lk_init_private != NULL);
> +  if (gdbarch_debug >= 2)
> +    fprintf_unfiltered (gdb_stdlog, "gdbarch_lk_init_private called\n");
> +  gdbarch->lk_init_private (gdbarch);
> +}
> +
> +void
> +set_gdbarch_lk_init_private (struct gdbarch *gdbarch,
> +                             gdbarch_lk_init_private_ftype lk_init_private)
> +{
> +  gdbarch->lk_init_private = lk_init_private;
> +}
> +
>
>  /* Keep a registry of per-architecture data-pointers required by GDB
>     modules.  */
> diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
> index 34f82a7..c03bf00 100644
> --- a/gdb/gdbarch.h
> +++ b/gdb/gdbarch.h
> @@ -1553,6 +1553,13 @@ extern void set_gdbarch_disassembler_options (struct gdbarch *gdbarch, char ** d
>
>  extern const disasm_options_t * gdbarch_valid_disassembler_options (struct gdbarch *gdbarch);
>  extern void set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch, const disasm_options_t * valid_disassembler_options);
> +/* Initialize architecture dependent private data for the linux-kernel target.  */
> +
> +extern int gdbarch_lk_init_private_p (struct gdbarch *gdbarch);
> +
> +typedef void (gdbarch_lk_init_private_ftype) (struct gdbarch *gdbarch);
> +extern void gdbarch_lk_init_private (struct gdbarch *gdbarch);
> +extern void set_gdbarch_lk_init_private (struct gdbarch *gdbarch, gdbarch_lk_init_private_ftype *lk_init_private);
>
>  /* Definition for an unknown syscall, used basically in error-cases.  */
>  #define UNKNOWN_SYSCALL (-1)
> diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
> index 39b1f94..cad45d1 100755
> --- a/gdb/gdbarch.sh
> +++ b/gdb/gdbarch.sh
> @@ -1167,6 +1167,10 @@ m:int:addressable_memory_unit_size:void:::default_addressable_memory_unit_size::
>  v:char **:disassembler_options:::0:0::0:pstring_ptr (gdbarch->disassembler_options)
>  v:const disasm_options_t *:valid_disassembler_options:::0:0::0:host_address_to_string (gdbarch->valid_disassembler_options)
>
> +# Initialize architecture dependent private data for the linux-kernel
> +# target.
> +M:void:lk_init_private:void:
> +
>  EOF
>  }
>
> diff --git a/gdb/lk-lists.c b/gdb/lk-lists.c
> new file mode 100644
> index 0000000..55d11bd
> --- /dev/null
> +++ b/gdb/lk-lists.c
> @@ -0,0 +1,47 @@
> +/* Iterators for internal data structures of the Linux kernel.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +
> +#include "inferior.h"
> +#include "lk-lists.h"
> +#include "lk-low.h"
> +
> +/* Returns next entry from struct list_head CURR while iterating field
> +   SNAME->FNAME.  */
> +
> +CORE_ADDR
> +lk_list_head_next (CORE_ADDR curr, const char *sname, const char *fname)
> +{
> +  CORE_ADDR next, next_prev;
> +
> +  /* We must always assume that the data we handle is corrupted.  Thus use
> +     curr->next->prev == curr as sanity check.  */
> +  next = lk_read_addr (curr + LK_OFFSET (list_head, next));
> +  next_prev = lk_read_addr (next + LK_OFFSET (list_head, prev));
> +
> +  if (!curr || curr != next_prev)
> +    {
> +      error (_("Memory corruption detected while iterating list_head at "\
> +              "0x%s belonging to list %s->%s."),
> +            phex (curr, lk_builtin_type_size (unsigned_long)), sname, fname);
> +    }
> +
> +  return next;
> +}
> diff --git a/gdb/lk-lists.h b/gdb/lk-lists.h
> new file mode 100644
> index 0000000..f9c2a85
> --- /dev/null
> +++ b/gdb/lk-lists.h
> @@ -0,0 +1,56 @@
> +/* Iterators for internal data structures of the Linux kernel.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef __LK_LISTS_H__
> +#define __LK_LISTS_H__
> +
> +extern CORE_ADDR lk_list_head_next (CORE_ADDR curr, const char *sname,
> +                                   const char *fname);
> +
> +/* Iterator over field SNAME->FNAME of type struct list_head starting at
> +   address START of type struct list_head.  This iterator is intended to be
> +   used for lists initialized with the macro LIST_HEAD (include/linux/list.h)
> +   in the kernel, i.e. lists where START is a global variable of type struct
> +   list_head and _not_ of type struct SNAME like the rest of the list.  Thus
> +   START will not be iterated over but only be used to start/terminate the
> +   iteration.  */
> +
> +#define lk_list_for_each(next, start, sname, fname)            \
> +  for ((next) = lk_list_head_next ((start), #sname, #fname);   \
> +       (next) != (start);                                      \
> +       (next) = lk_list_head_next ((next), #sname, #fname))
> +
> +/* Iterator over struct SNAME linked together via field SNAME->FNAME of type
> +   struct list_head starting at address START of type struct SNAME.  In
> +   contrast to the iterator above, START is a "full" member of the list and
> +   thus will be iterated over.  */
> +
> +#define lk_list_for_each_container(cont, start, sname, fname)  \
> +  CORE_ADDR _next;                                             \
> +  bool _first_loop = true;                                     \
> +  for ((cont) = (start),                                       \
> +       _next = (start) + LK_OFFSET (sname, fname);             \
> +                                                               \
> +       (cont) != (start) || _first_loop;                       \
> +                                                               \
> +       _next = lk_list_head_next (_next, #sname, #fname),      \
> +       (cont) = LK_CONTAINER_OF (_next, sname, fname),         \
> +       _first_loop = false)
> +
> +#endif /* __LK_LISTS_H__ */
> diff --git a/gdb/lk-low.c b/gdb/lk-low.c
> new file mode 100644
> index 0000000..768f228
> --- /dev/null
> +++ b/gdb/lk-low.c
> @@ -0,0 +1,833 @@
> +/* Basic Linux kernel support, architecture independent.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +
> +#include "block.h"
> +#include "exceptions.h"
> +#include "frame.h"
> +#include "gdbarch.h"
> +#include "gdbcore.h"
> +#include "gdbthread.h"
> +#include "gdbtypes.h"
> +#include "inferior.h"
> +#include "lk-lists.h"
> +#include "lk-low.h"
> +#include "objfiles.h"
> +#include "observer.h"
> +#include "solib.h"
> +#include "target.h"
> +#include "value.h"
> +
> +#include <algorithm>
> +
> +struct target_ops *linux_kernel_ops = NULL;
> +
> +/* Initialize a private data entry for an address, where NAME is the name
> +   of the symbol, i.e. variable name in Linux, ALIAS the name used to
> +   retrieve the entry from hashtab, and SILENT a flag to determine if
> +   errors should be ignored.
> +
> +   Returns a pointer to the new entry.  In case of an error, either returns
> +   NULL (SILENT = TRUE) or throws an error (SILENT = FALSE).  If SILENT = TRUE
> +   the caller is responsible for checking for errors.
> +
> +   Do not use directly, use LK_DECLARE_* macros defined in lk-low.h instead.  */
> +
> +struct lk_private_data *
> +lk_init_addr (const char *name, const char *alias, int silent)
> +{
> +  struct lk_private_data *data;
> +  struct bound_minimal_symbol bmsym;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  bmsym = lookup_minimal_symbol (name, NULL, NULL);
> +
> +  if (bmsym.minsym == NULL)
> +    {
> +      if (!silent)
> +       error (_("Could not find address %s.  Aborting."), alias);
> +      return NULL;
> +    }
> +
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = alias;
> +  data->data.addr = BMSYMBOL_VALUE_ADDRESS (bmsym);
> +
> +  new_slot = lk_find_slot (alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Same as lk_init_addr but for structs.  */
> +
> +struct lk_private_data *
> +lk_init_struct (const char *name, const char *alias, int silent)
> +{
> +  struct lk_private_data *data;
> +  const struct block *global;
> +  const struct symbol *sym;
> +  struct type *type;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  global = block_global_block(get_selected_block (0));
> +  sym = lookup_symbol (name, global, STRUCT_DOMAIN, NULL).symbol;
> +
> +  if (sym != NULL)
> +    {
> +      type = SYMBOL_TYPE (sym);
> +      goto out;
> +    }
> +
> +  /* Check for "typedef struct { ... } name;"-like definitions.  */
> +  sym = lookup_symbol (name, global, VAR_DOMAIN, NULL).symbol;
> +  if (sym == NULL)
> +    goto error;
> +
> +  type = check_typedef (SYMBOL_TYPE (sym));
> +
> +  if (TYPE_CODE (type) == TYPE_CODE_STRUCT)
> +    goto out;
> +
> +error:
> +  if (!silent)
> +    error (_("Could not find %s.  Aborting."), alias);
> +
> +  return NULL;
> +
> +out:
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = alias;
> +  data->data.type = type;
> +
> +  new_slot = lk_find_slot (alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Nearly the same as lk_init_addr, with the difference that two names are
> +   needed, i.e. the struct name S_NAME containing the field with name
> +   F_NAME.  */
> +
> +struct lk_private_data *
> +lk_init_field (const char *s_name, const char *f_name,
> +              const char *s_alias, const char *f_alias,
> +              int silent)
> +{
> +  struct lk_private_data *data;
> +  struct lk_private_data *parent;
> +  struct field *first, *last, *field;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (f_alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  parent = lk_find (s_alias);
> +  if (parent == NULL)
> +    {
> +      parent = lk_init_struct (s_name, s_alias, silent);
> +
> +      /* Only SILENT == true needed, as otherwise lk_init_struct would throw
> +        an error.  */
> +      if (parent == NULL)
> +       return NULL;
> +    }
> +
> +  first = TYPE_FIELDS (parent->data.type);
> +  last = first + TYPE_NFIELDS (parent->data.type);
> +  for (field = first; field < last; field ++)
> +    {
> +      if (streq (field->name, f_name))
> +       break;
> +    }
> +
> +  if (field == last)
> +    {
> +      if (!silent)
> +       error (_("Could not find field %s->%s.  Aborting."), s_alias, f_name);
> +      return NULL;
> +    }
> +
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = f_alias;
> +  data->data.field = field;
> +
> +  new_slot = lk_find_slot (f_alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Map cpu number CPU to the original PTID from target beneath.  */
> +
> +static ptid_t
> +lk_cpu_to_old_ptid (const int cpu)
> +{
> +  struct lk_ptid_map *ptid_map;
> +
> +  for (ptid_map = LK_PRIVATE->old_ptid; ptid_map;
> +       ptid_map = ptid_map->next)
> +    {
> +      if (ptid_map->cpu == cpu)
> +       return ptid_map->old_ptid;
> +    }
> +
> +  error (_("Could not map CPU %d to original PTID.  Aborting."), cpu);
> +}
> +
> +/* Helper functions to read and return basic types at a given ADDRess.  */
> +
> +/* Read and return the integer value at address ADDR.  */
> +
> +int
> +lk_read_int (CORE_ADDR addr)
> +{
> +  size_t int_size = lk_builtin_type_size (int);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, int_size, endian);
> +}
> +
> +/* Read and return the unsigned integer value at address ADDR.  */
> +
> +unsigned int
> +lk_read_uint (CORE_ADDR addr)
> +{
> +  size_t uint_size = lk_builtin_type_size (unsigned_int);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, uint_size, endian);
> +}
> +
> +/* Read and return the long integer value at address ADDR.  */
> +
> +LONGEST
> +lk_read_long (CORE_ADDR addr)
> +{
> +  size_t long_size = lk_builtin_type_size (long);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, long_size, endian);
> +}
> +
> +/* Read and return the unsigned long integer value at address ADDR.  */
> +
> +ULONGEST
> +lk_read_ulong (CORE_ADDR addr)
> +{
> +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_unsigned_integer (addr, ulong_size, endian);
> +}
> +
> +/* Read and return the address value at address ADDR.  */
> +
> +CORE_ADDR
> +lk_read_addr (CORE_ADDR addr)
> +{
> +  return (CORE_ADDR) lk_read_ulong (addr);
> +}
> +
> +/* Reads a bitmap at a given ADDRess of size SIZE (in bits). Allocates and
> +   returns an array of ulongs.  The caller is responsible to free the array
> +   after it is no longer needed.  */
> +
> +ULONGEST *
> +lk_read_bitmap (CORE_ADDR addr, size_t size)
> +{
> +  ULONGEST *bitmap;
> +  size_t ulong_size, len;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  len = LK_DIV_ROUND_UP (size, ulong_size * LK_BITS_PER_BYTE);
> +  bitmap = XNEWVEC (ULONGEST, len);
> +
> +  for (size_t i = 0; i < len; i++)
> +    bitmap[i] = lk_read_ulong (addr + i * ulong_size);
> +
> +  return bitmap;
> +}
> +
> +/* Return the next set bit in bitmap BITMAP of size SIZE (in bits)
> +   starting from bit (index) BIT.  Return SIZE when the end of the bitmap
> +   was reached.  To iterate over all set bits use macro
> +   LK_BITMAP_FOR_EACH_SET_BIT defined in lk-low.h.  */
> +
> +size_t
> +lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size, size_t bit)
> +{
> +  size_t ulong_size, bits_per_ulong, elt;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> +  elt = bit / bits_per_ulong;
> +
> +  while (bit < size)
> +    {

Will this be portable across endianness?

> +      /* FIXME: Explain why using lsb0 bit order.  */
> +      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
> +       return bit;
> +
> +      bit++;
> +      if (bit % bits_per_ulong == 0)
> +       elt++;
> +    }
> +
> +  return size;
> +}
> +

lk_bitmap_hweight seems unused.
I wonder if there is a generic implementation available for this
function somewhere in the binutils-gdb sources.
Can we use something like the __builtin_popcount GCC intrinsic?

> +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP
> +   with size SIZE (in bits).  */
> +
> +size_t
> +lk_bitmap_hweight (ULONGEST *bitmap, size_t size)
> +{
> +  size_t ulong_size, bit, bits_per_ulong, elt, retval;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> +  elt = bit = 0;
> +  retval = 0;
> +
> +  while (bit < size)
> +    {
> +      if (bitmap[elt] & (1 << bit % bits_per_ulong))
> +       retval++;
> +
> +      bit++;
> +      if (bit % bits_per_ulong == 0)
> +       elt++;
> +    }
> +
> +  return retval;
> +}
> +
> +/* Provide the per_cpu_offset of cpu CPU.  See comment in lk-low.h for
> +   details.  */
> +
> +CORE_ADDR
> +lk_get_percpu_offset (unsigned int cpu)
> +{
> +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> +  CORE_ADDR percpu_elt;
> +
> +  /* Give the architecture a chance to overwrite default behaviour.  */
> +  if (LK_HOOK->get_percpu_offset)
> +      return LK_HOOK->get_percpu_offset (cpu);
> +
> +  percpu_elt = LK_ADDR (__per_cpu_offset) + (ulong_size * cpu);
> +  return lk_read_addr (percpu_elt);
> +}
> +
> +
> +/* Test if a given task TASK is running.  See comment in lk-low.h for
> +   details.  */
> +
> +unsigned int
> +lk_task_running (CORE_ADDR task)
> +{
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +    {
> +      CORE_ADDR rq;
> +      CORE_ADDR curr;
> +
> +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +
> +      if (curr == task)
> +       break;
> +    }
> +
> +  if (cpu == size)
> +    cpu = LK_CPU_INVAL;
> +
> +  do_cleanups (old_chain);
> +  return cpu;
> +}
> +
> +/* Update running tasks with information from struct rq->curr. */
> +
> +static void
> +lk_update_running_tasks ()
> +{
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +    {
> +      struct thread_info *tp;
> +      CORE_ADDR rq, curr;
> +      LONGEST pid, inf_pid;
> +      ptid_t new_ptid, old_ptid;
> +
> +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +      pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> +      inf_pid = current_inferior ()->pid;
> +
> +      new_ptid = ptid_build (inf_pid, pid, curr);
> +      old_ptid = lk_cpu_to_old_ptid (cpu); /* FIXME not suitable for
> +                                             running targets? */
> +
> +      tp = find_thread_ptid (old_ptid);
> +      if (tp && tp->state != THREAD_EXITED)
> +       thread_change_ptid (old_ptid, new_ptid);
> +    }
> +  do_cleanups (old_chain);
> +}
> +
> +/* Update sleeping tasks by walking the task_structs starting from
> +   init_task.  */
> +
> +static void
> +lk_update_sleeping_tasks ()
> +{
> +  CORE_ADDR init_task, task, thread;
> +  int inf_pid;
> +
> +  inf_pid = current_inferior ()->pid;
> +  init_task = LK_ADDR (init_task);
> +
> +  lk_list_for_each_container (task, init_task, task_struct, tasks)
> +    {
> +      lk_list_for_each_container (thread, task, task_struct, thread_group)
> +       {
> +         int pid;
> +         ptid_t ptid;
> +         struct thread_info *tp;
> +
> +         pid = lk_read_int (thread + LK_OFFSET (task_struct, pid));
> +         ptid = ptid_build (inf_pid, pid, thread);
> +
> +         tp = find_thread_ptid (ptid);
> +         if (tp == NULL || tp->state == THREAD_EXITED)
> +           add_thread (ptid);
> +       }
> +    }
> +}
> +
> +/* Function for targets to_update_thread_list hook.  */
> +
> +static void
> +lk_update_thread_list (struct target_ops *target)
> +{
> +  prune_threads ();
> +  lk_update_running_tasks ();
> +  lk_update_sleeping_tasks ();
> +}
> +
> +/* Function for targets to_fetch_registers hook.  */
> +
> +static void
> +lk_fetch_registers (struct target_ops *target,
> +                   struct regcache *regcache, int regnum)
> +{
> +  CORE_ADDR task;
> +  unsigned int cpu;
> +
> +  task = (CORE_ADDR) ptid_get_tid (regcache_get_ptid (regcache));
> +  cpu = lk_task_running (task);
> +
> +  /* Let the target beneath fetch registers of running tasks.  */
> +  if (cpu != LK_CPU_INVAL)
> +    {
> +      struct cleanup *old_inferior_ptid;
> +
> +      old_inferior_ptid = save_inferior_ptid ();
> +      inferior_ptid = lk_cpu_to_old_ptid (cpu);
> +      linux_kernel_ops->beneath->to_fetch_registers (target, regcache, regnum);
> +      do_cleanups (old_inferior_ptid);
> +    }
> +  else
> +    {
> +      struct gdbarch *gdbarch;
> +      unsigned int i;
> +
> +      LK_HOOK->get_registers (task, target, regcache, regnum);
> +
> +      /* Mark all registers not found as unavailable.  */
> +      gdbarch = get_regcache_arch (regcache);
> +      for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
> +       {
> +         if (regcache_register_status (regcache, i) == REG_UNKNOWN)
> +           regcache_raw_supply (regcache, i, NULL);
> +       }
> +    }
> +}
> +

The function below causes an error when compiling for the arm-linux
target on an x86_64 host:

lk-low.c: In function ‘void init_linux_kernel_ops()’:
lk-low.c:812:20: error: invalid conversion from ‘char*
(*)(target_ops*, ptid_t)’ to ‘const char* (*)(target_ops*, ptid_t)’
[-fpermissive]
   t->to_pid_to_str = lk_pid_to_str;


> +/* Function for targets to_pid_to_str hook.  Marks running tasks with an
> +   asterisk "*".  */
> +
> +static char *
> +lk_pid_to_str (struct target_ops *target, ptid_t ptid)
> +{
> +  static char buf[64];
> +  long pid;
> +  CORE_ADDR task;
> +
> +  pid = ptid_get_lwp (ptid);
> +  task = (CORE_ADDR) ptid_get_tid (ptid);
> +
> +  xsnprintf (buf, sizeof (buf), "PID: %5li%s, 0x%s",
> +            pid, ((lk_task_running (task) != LK_CPU_INVAL) ? "*" : ""),
> +            phex (task, lk_builtin_type_size (unsigned_long)));
> +
> +  return buf;
> +}
> +
> +/* Function for targets to_thread_name hook.  */
> +
> +static const char *
> +lk_thread_name (struct target_ops *target, struct thread_info *ti)
> +{
> +  static char buf[LK_TASK_COMM_LEN + 1];
> +  char tmp[LK_TASK_COMM_LEN + 1];
> +  CORE_ADDR task, comm;
> +  size_t size;
> +
> +  size = std::min ((unsigned int) LK_TASK_COMM_LEN,
> +                  LK_ARRAY_LEN(LK_FIELD (task_struct, comm)));
> +
> +  task = (CORE_ADDR) ptid_get_tid (ti->ptid);
> +  comm = task + LK_OFFSET (task_struct, comm);
> +  read_memory (comm, (gdb_byte *) tmp, size);
> +
> +  xsnprintf (buf, sizeof (buf), "%-16s", tmp);
> +
> +  return buf;
> +}
> +
> +/* Functions to initialize and free target_ops and its private data.  As well
> +   as functions for targets to_open/close/detach hooks.  */
> +
> +/* Check if OBJFILE is a Linux kernel.  */
> +
> +static int
> +lk_is_linux_kernel (struct objfile *objfile)
> +{
> +  int ok = 0;
> +
> +  if (objfile == NULL || !(objfile->flags & OBJF_MAINLINE))
> +    return 0;
> +
> +  ok += lookup_minimal_symbol ("linux_banner", NULL, objfile).minsym != NULL;
> +  ok += lookup_minimal_symbol ("_stext", NULL, objfile).minsym != NULL;
> +  ok += lookup_minimal_symbol ("_etext", NULL, objfile).minsym != NULL;
> +
> +  return (ok > 2);
> +}
> +
> +/* Initialize struct lk_private.  */
> +
> +static void
> +lk_init_private ()
> +{
> +  linux_kernel_ops->to_data = XCNEW (struct lk_private);
> +  LK_PRIVATE->hooks = XCNEW (struct lk_private_hooks);
> +  LK_PRIVATE->data = htab_create_alloc (31, (htab_hash) lk_hash_private_data,
> +                                       (htab_eq) lk_private_data_eq, NULL,
> +                                       xcalloc, xfree);
> +}
> +
> +/* Initialize architecture independent private data.  Must be called
> +   _after_ symbol tables were initialized.  */
> +
> +static void
> +lk_init_private_data ()
> +{
> +  if (LK_PRIVATE->data != NULL)
> +    htab_empty (LK_PRIVATE->data);
> +

It would be nice to have comments for all the structs/fields below,
perhaps with a kernel source tree reference?

> +  LK_DECLARE_FIELD (task_struct, tasks);
> +  LK_DECLARE_FIELD (task_struct, pid);
> +  LK_DECLARE_FIELD (task_struct, tgid);
> +  LK_DECLARE_FIELD (task_struct, thread_group);
> +  LK_DECLARE_FIELD (task_struct, comm);
> +  LK_DECLARE_FIELD (task_struct, thread);
> +
> +  LK_DECLARE_FIELD (list_head, next);
> +  LK_DECLARE_FIELD (list_head, prev);
> +
> +  LK_DECLARE_FIELD (rq, curr);
> +
> +  LK_DECLARE_FIELD (cpumask, bits);
> +
> +  LK_DECLARE_ADDR (init_task);
> +  LK_DECLARE_ADDR (runqueues);
> +  LK_DECLARE_ADDR (__per_cpu_offset);
> +  LK_DECLARE_ADDR (init_mm);
> +
> +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  /* linux 4.5+ */
> +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);    /* linux -4.4 */
> +  if (LK_ADDR (cpu_online_mask) == -1)
> +    error (_("Could not find address cpu_online_mask.  Aborting."));
> +}
> +
> +/* Frees the cpu to old ptid map.  */
> +
> +static void
> +lk_free_ptid_map ()
> +{
> +  while (LK_PRIVATE->old_ptid)
> +    {
> +      struct lk_ptid_map *tmp;
> +
> +      tmp = LK_PRIVATE->old_ptid;
> +      LK_PRIVATE->old_ptid = tmp->next;
> +      XDELETE (tmp);
> +    }
> +}
> +
> +/* Initialize the cpu to old ptid map.  Prefer the arch dependent
> +   map_running_task_to_cpu hook if provided, else assume that the PID used
> +   by the target beneath is the same as the PID in the task_struct.  See
> +   comment on lk_ptid_map in lk-low.h for details.  */
> +
> +static void
> +lk_init_ptid_map ()
> +{
> +  struct thread_info *ti;
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  if (LK_PRIVATE->old_ptid != NULL)
> +    lk_free_ptid_map ();
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  ALL_THREADS (ti)
> +    {
> +      struct lk_ptid_map *ptid_map = XCNEW (struct lk_ptid_map);
> +      CORE_ADDR rq, curr;
> +      int pid;
> +
> +      /* Give the architecture a chance to overwrite default behaviour.  */
> +      if (LK_HOOK->map_running_task_to_cpu)
> +       {
> +         ptid_map->cpu = LK_HOOK->map_running_task_to_cpu (ti);
> +       }
> +      else
> +       {
> +         LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +           {
> +             rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +             curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +             pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> +
> +             if (pid == ptid_get_lwp (ti->ptid))
> +               {
> +                 ptid_map->cpu = cpu;
> +                 break;
> +               }
> +           }
> +         if (cpu == size)
> +           error (_("Could not map thread with pid %d, lwp %lu to a cpu."),
> +                  ti->ptid.pid, ti->ptid.lwp);

Accessing the pid and lwp fields directly is not recommended.  Maybe use
something like:
         error (_("Could not map thread with pid %d, lwp %ld to a cpu."),
               ptid_get_pid (ti->ptid), ptid_get_lwp (ti->ptid));


> +       }
> +      ptid_map->old_ptid = ti->ptid;
> +      ptid_map->next = LK_PRIVATE->old_ptid;
> +      LK_PRIVATE->old_ptid = ptid_map;
> +    }
> +
> +  do_cleanups (old_chain);
> +}
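
For reference, the head insertion into LK_PRIVATE->old_ptid above,
together with lk_free_ptid_map, boils down to the standard singly linked
list pattern.  A reduced standalone sketch (struct members trimmed,
names are mine):

```c
#include <stdlib.h>

/* Trimmed-down stand-in for struct lk_ptid_map (cpu field only).  */
struct ptid_map
{
  struct ptid_map *next;
  unsigned int cpu;
};

/* Push a new entry at the head of *LIST, as lk_init_ptid_map does with
   LK_PRIVATE->old_ptid.  */
void
ptid_map_push (struct ptid_map **list, unsigned int cpu)
{
  struct ptid_map *m = (struct ptid_map *) calloc (1, sizeof (*m));
  m->cpu = cpu;
  m->next = *list;
  *list = m;
}

/* Pop and free every entry, as lk_free_ptid_map does.  */
void
ptid_map_free (struct ptid_map **list)
{
  while (*list != NULL)
    {
      struct ptid_map *tmp = *list;
      *list = tmp->next;
      free (tmp);
    }
}
```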
> +
> +/* Initializes all private data and pushes the linux kernel target, if not
> +   already done.  */
> +
> +static void
> +lk_try_push_target ()
> +{
> +  struct gdbarch *gdbarch;
> +
> +  gdbarch = current_inferior ()->gdbarch;
> +  if (!(gdbarch && gdbarch_lk_init_private_p (gdbarch)))
> +    error (_("Linux kernel debugging not supported on %s."),
> +          gdbarch_bfd_arch_info (gdbarch)->printable_name);
> +
> +  lk_init_private ();
> +  lk_init_private_data ();
> +  gdbarch_lk_init_private (gdbarch);
> +  /* Check for required arch hooks.  */
> +  gdb_assert (LK_HOOK->get_registers);
> +
> +  lk_init_ptid_map ();
> +  lk_update_thread_list (linux_kernel_ops);
> +
> +  if (!target_is_pushed (linux_kernel_ops))
> +    push_target (linux_kernel_ops);
> +}
> +
> +/* Function for targets to_open hook.  */
> +
> +static void
> +lk_open (const char *args, int from_tty)
> +{
> +  struct objfile *objfile;
> +
> +  if (target_is_pushed (linux_kernel_ops))
> +    {
> +      printf_unfiltered (_("Linux kernel target already pushed.  Aborting\n"));
> +      return;
> +    }
> +
> +  for (objfile = current_program_space->objfiles; objfile;
> +       objfile = objfile->next)
> +    {
> +      if (lk_is_linux_kernel (objfile)
> +         && ptid_get_pid (inferior_ptid) != 0)
> +       {
> +         lk_try_push_target ();
> +         return;
> +       }
> +    }
> +  printf_unfiltered (_("Could not find a valid Linux kernel object file.  "
> +                      "Aborting.\n"));
> +}
> +
> +/* Function for targets to_close hook.  Deletes all private data.  */
> +
> +static void
> +lk_close (struct target_ops *ops)
> +{
> +  htab_delete (LK_PRIVATE->data);
> +  lk_free_ptid_map ();
> +  XDELETE (LK_PRIVATE->hooks);
> +
> +  XDELETE (LK_PRIVATE);
> +  linux_kernel_ops->to_data = NULL;
> +}
> +
> +/* Function for targets to_detach hook.  */
> +
> +static void
> +lk_detach (struct target_ops *t, const char *args, int from_tty)
> +{
> +  struct target_ops *beneath = linux_kernel_ops->beneath;
> +
> +  unpush_target (linux_kernel_ops);
> +  reinit_frame_cache ();
> +  if (from_tty)
> +    printf_filtered (_("Linux kernel target detached.\n"));
> +
> +  beneath->to_detach (beneath, args, from_tty);
> +}
> +
> +/* Function for new objfile observer.  */
> +
> +static void
> +lk_observer_new_objfile (struct objfile *objfile)
> +{
> +  if (lk_is_linux_kernel (objfile)
> +      && ptid_get_pid (inferior_ptid) != 0)
> +    lk_try_push_target ();
> +}
> +
> +/* Function for inferior created observer.  */
> +
> +static void
> +lk_observer_inferior_created (struct target_ops *ops, int from_tty)
> +{
> +  struct objfile *objfile;
> +
> +  if (ptid_get_pid (inferior_ptid) == 0)
> +    return;
> +
> +  for (objfile = current_inferior ()->pspace->objfiles; objfile;
> +       objfile = objfile->next)
> +    {
> +      if (lk_is_linux_kernel (objfile))
> +       {
> +         lk_try_push_target ();
> +         return;
> +       }
> +    }
> +}
> +
> +/* Initialize linux kernel target.  */
> +
> +static void
> +init_linux_kernel_ops (void)
> +{
> +  struct target_ops *t;
> +
> +  if (linux_kernel_ops != NULL)
> +    return;
> +
> +  t = XCNEW (struct target_ops);
> +  t->to_shortname = "linux-kernel";
> +  t->to_longname = "linux kernel support";
> +  t->to_doc = "Adds support to debug the Linux kernel";
> +
> +  /* set t->to_data = struct lk_private in lk_init_private.  */
> +
> +  t->to_open = lk_open;
> +  t->to_close = lk_close;
> +  t->to_detach = lk_detach;
> +  t->to_fetch_registers = lk_fetch_registers;
> +  t->to_update_thread_list = lk_update_thread_list;
> +  t->to_pid_to_str = lk_pid_to_str;
> +  t->to_thread_name = lk_thread_name;
> +
> +  t->to_stratum = thread_stratum;
> +  t->to_magic = OPS_MAGIC;
> +
> +  linux_kernel_ops = t;
> +
> +  add_target (t);
> +}
> +
> +/* Provide a prototype to silence -Wmissing-prototypes.  */
> +extern initialize_file_ftype _initialize_linux_kernel;
> +
> +void
> +_initialize_linux_kernel (void)
> +{
> +  init_linux_kernel_ops ();
> +
> +  observer_attach_new_objfile (lk_observer_new_objfile);
> +  observer_attach_inferior_created (lk_observer_inferior_created);
> +}
> diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> new file mode 100644
> index 0000000..292ef97
> --- /dev/null
> +++ b/gdb/lk-low.h
> @@ -0,0 +1,310 @@
> +/* Basic Linux kernel support, architecture independent.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef __LK_LOW_H__
> +#define __LK_LOW_H__
> +
> +#include "target.h"
> +
> +extern struct target_ops *linux_kernel_ops;
> +
> +/* Copy constants defined in Linux kernel.  */
> +#define LK_TASK_COMM_LEN 16
> +#define LK_BITS_PER_BYTE 8
> +
> +/* Definitions used in linux kernel target.  */
> +#define LK_CPU_INVAL -1U
> +
> +/* Private data structs for this target.  */
> +/* Forward declarations.  */
> +struct lk_private_hooks;
> +struct lk_ptid_map;
> +
> +/* Short hand access to private data.  */
> +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> +#define LK_HOOK (LK_PRIVATE->hooks)
> +
> +struct lk_private
> +{
> +  /* Hashtab for needed addresses, structs and fields.  */
> +  htab_t data;
> +
> +  /* Linked list to map between cpu number and original ptid from target
> +     beneath.  */
> +  struct lk_ptid_map *old_ptid;
> +
> +  /* Hooks for architecture dependent functions.  */
> +  struct lk_private_hooks *hooks;
> +};
> +
> +/* We use the following convention for PTIDs:
> +
> +   ptid->pid = inferiors PID
> +   ptid->lwp = PID from task_stuct
> +   ptid->tid = address of task_struct
> +
> +   There are two reasons to use the task_struct's address as TID.  First,
> +   we need it quite often and there is no other reasonable way to pass it
> +   down.  Second, it
> +   helps us to distinguish swapper tasks as they all have PID = 0.
> +
> +   Furthermore we cannot rely on the target beneath to use the same PID as the
> +   task_struct. Thus we need a mapping between our PTID and the PTID of the
> +   target beneath. Otherwise it is impossible to pass jobs, e.g. fetching
> +   registers of running tasks, to the target beneath.  */
> +
> +/* Private data struct to map between our and the target beneath PTID.  */
> +
> +struct lk_ptid_map
> +{
> +  struct lk_ptid_map *next;
> +  unsigned int cpu;
> +  ptid_t old_ptid;
> +};
> +
> +/* Private data struct to be stored in hashtab.  */
> +
> +struct lk_private_data
> +{
> +  const char *alias;
> +
> +  union
> +  {
> +    CORE_ADDR addr;
> +    struct type *type;
> +    struct field *field;
> +  } data;
> +};
> +
> +/* Wrapper for htab_hash_string to work with our private data.  */
> +
> +static inline hashval_t
> +lk_hash_private_data (const struct lk_private_data *entry)
> +{
> +  return htab_hash_string (entry->alias);
> +}
> +
> +/* Function for htab_eq to work with our private data.  */
> +
> +static inline int
> +lk_private_data_eq (const struct lk_private_data *entry,
> +                   const struct lk_private_data *element)
> +{
> +  return streq (entry->alias, element->alias);
> +}
> +
> +/* Wrapper for htab_find_slot to work with our private data.  Do not use
> +   directly, use the macros below instead.  */
> +
> +static inline void **
> +lk_find_slot (const char *alias)
> +{
> +  const struct lk_private_data dummy = { alias };
> +  return htab_find_slot (LK_PRIVATE->data, &dummy, INSERT);
> +}
> +
> +/* Wrapper for htab_find to work with our private data.  Do not use
> +   directly, use the macros below instead.  */
> +
> +static inline struct lk_private_data *
> +lk_find (const char *alias)
> +{
> +  const struct lk_private_data dummy = { alias };
> +  return (struct lk_private_data *) htab_find (LK_PRIVATE->data, &dummy);
> +}
> +
> +/* Functions to initialize private data.  Do not use directly, use the
> +   macros below instead.  */
> +
> +extern struct lk_private_data *lk_init_addr (const char *name,
> +                                            const char *alias, int silent);
> +extern struct lk_private_data *lk_init_struct (const char *name,
> +                                              const char *alias, int silent);
> +extern struct lk_private_data *lk_init_field (const char *s_name,
> +                                             const char *f_name,
> +                                             const char *s_alias,
> +                                             const char *f_alias, int silent);
> +
> +/* The names we use to store our private data in the hashtab.  */
> +
> +#define LK_STRUCT_ALIAS(s_name) ("struct " #s_name)
> +#define LK_FIELD_ALIAS(s_name, f_name) (#s_name " " #f_name)
> +
> +/* Macros to initiate addresses and fields, where (S_/F_)NAME is the variables
> +   name as used in Linux.  LK_DECLARE_FIELD also initializes the corresponding
> +   struct entry.  Throws an error, if no symbol with the given name is found.
> + */
> +
> +#define LK_DECLARE_ADDR(name) \
> +  lk_init_addr (#name, #name, 0)
> +#define LK_DECLARE_FIELD(s_name, f_name) \
> +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> +                LK_FIELD_ALIAS (s_name, f_name), 0)
> +
> +/* Same as LK_DECLARE_*, but returns NULL instead of throwing an error if no
> +   symbol was found.  The caller is responsible to check for possible errors.
> + */
> +
> +#define LK_DECLARE_ADDR_SILENT(name) \
> +  lk_init_addr (#name, #name, 1)
> +#define LK_DECLARE_FIELD_SILENT(s_name, f_name) \
> +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> +                LK_FIELD_ALIAS (s_name, f_name), 1)
> +
> +/* Same as LK_DECLARE_*_SILENT, but allows you to give an ALIAS name.  If used
> +   for a struct, the struct has to be declared explicitly _before_ any of its
> +   fields.  They are meant to be used when a variable in the kernel was simply
> +   renamed (at least from our point of view).  The caller is responsible to
> +   check for possible errors.  */
> +
> +#define LK_DECLARE_ADDR_ALIAS(name, alias) \
> +  lk_init_addr (#name, #alias, 1)
> +#define LK_DECLARE_STRUCT_ALIAS(s_name, alias) \
> +  lk_init_struct (#s_name, LK_STRUCT_ALIAS (alias), 1)
> +#define LK_DECLARE_FIELD_ALIAS(s_alias, f_name, f_alias) \
> +  lk_init_field (NULL, #f_name, LK_STRUCT_ALIAS (s_alias), \
> +                LK_FIELD_ALIAS (s_alias, f_alias), 1)
> +
> +/* Macros to retrieve private data from hashtab. Returns NULL (-1) if no entry
> +   with the given ALIAS exists. The caller only needs to check for possible
> +   errors if not done so at initialization.  */
> +
> +#define LK_ADDR(alias) \
> +  (lk_find (#alias) ? (lk_find (#alias))->data.addr : -1)
> +#define LK_STRUCT(alias) \
> +  (lk_find (LK_STRUCT_ALIAS (alias)) \
> +   ? (lk_find (LK_STRUCT_ALIAS (alias)))->data.type \
> +   : NULL)
> +#define LK_FIELD(s_alias, f_alias) \
> +  (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)) \
> +   ? (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)))->data.field \
> +   : NULL)
> +
> +
> +/* Definitions for architecture dependent hooks.  */
> +/* Hook to read registers from the target and supply their content
> +   to the regcache.  */
> +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> +                                      struct target_ops *target,
> +                                      struct regcache *regcache,
> +                                      int regnum);
> +
> +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
> +   do not use the __per_cpu_offset array to determin the offset have to
> +   supply this hook.  */

^ Typo in the comment ("determin" -> "determine").
Also, if it's not too much trouble, could you kindly put Linux kernel
source tree references in the comments, e.g.
__per_cpu_offset: include/asm-generic/percpu.h (Linux 4.10)?

> +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> +
> +/* Hook to map a running task to a logical CPU.  Required if the target
> +   beneath uses a different PID as struct rq.  */
> +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct thread_info *ti);
> +
> +struct lk_private_hooks
> +{
> +  /* required */
> +  lk_hook_get_registers get_registers;
> +
> +  /* optional, required if __per_cpu_offset array is not used to determine
> +     offset.  */
> +  lk_hook_get_percpu_offset get_percpu_offset;
> +
> +  /* optional, required if the target beneath uses a different PID as struct
> +     rq.  */
> +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> +};
> +
> +/* Helper functions to read and return a value at a given ADDRess.  */
> +extern int lk_read_int (CORE_ADDR addr);
> +extern unsigned int lk_read_uint (CORE_ADDR addr);
> +extern LONGEST lk_read_long (CORE_ADDR addr);
> +extern ULONGEST lk_read_ulong (CORE_ADDR addr);
> +extern CORE_ADDR lk_read_addr (CORE_ADDR addr);
> +
> +/* Reads a bitmap at a given ADDRess of size SIZE (in bits). Allocates and
> +   returns an array of ulongs.  The caller is responsible to free the array
> +   after it is no longer needed.  */
> +extern ULONGEST *lk_read_bitmap (CORE_ADDR addr, size_t size);
> +
> +/* Walks the bitmap BITMAP of size SIZE from bit (index) BIT.
> +   Returns the index of the next set bit or SIZE, when the end of the bitmap
> +   was reached.  To iterate over all set bits use macro
> +   LK_BITMAP_FOR_EACH_SET_BIT defined below.  */
> +extern size_t lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t bit,
> +                                      size_t size);
> +#define LK_BITMAP_FOR_EACH_SET_BIT(bitmap, size, bit)                  \
> +  for ((bit) = lk_bitmap_find_next_bit ((bitmap), (size), 0);          \
> +       (bit) < (size);                                                 \
> +       (bit) = lk_bitmap_find_next_bit ((bitmap), (size), (bit) + 1))
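
Note that the prototype above declares the parameters as (bitmap, bit,
size) while the macro calls lk_bitmap_find_next_bit (bitmap, size, bit);
one of the two orders presumably needs fixing.  Assuming the macro's
order is the intended one, the contract reduces to this standalone
sketch (assuming 64-bit words; the real code sizes words by the target's
unsigned long):

```c
#include <stddef.h>

#define BITS_PER_WORD 64

/* Return the index of the next set bit at or after BIT, or SIZE if
   there is none -- the contract LK_BITMAP_FOR_EACH_SET_BIT relies on.  */
size_t
find_next_bit (const unsigned long long *bitmap, size_t size, size_t bit)
{
  for (; bit < size; bit++)
    if (bitmap[bit / BITS_PER_WORD] & (1ULL << (bit % BITS_PER_WORD)))
      return bit;

  return size;
}
```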
> +
> +/* Returns the size of BITMAP in bits.  */
> +#define LK_BITMAP_SIZE(bitmap) \
> +  (FIELD_SIZE (LK_FIELD (bitmap, bits)) * LK_BITS_PER_BYTE)
> +
> +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP with
> +   size SIZE (in bits).  */
> +extern size_t lk_bitmap_hweight (ULONGEST *bitmap, size_t size);
> +
> +
> +/* Short hand access to current gdbarchs builtin types and their
> +   size (in byte).  For TYPE replace spaces " " by underscore "_", e.g.
> +   "unsigned int" => "unsigned_int".  */
> +#define lk_builtin_type(type)                                  \
> +  (builtin_type (current_inferior ()->gdbarch)->builtin_##type)
> +#define lk_builtin_type_size(type)             \
> +  (lk_builtin_type (type)->length)
> +
> +/* If field FIELD is an array returns its length (in #elements).  */
> +#define LK_ARRAY_LEN(field)                    \
> +  (FIELD_SIZE (field) / FIELD_TARGET_SIZE (field))
> +
> +/* Short hand access to the offset of field F_NAME in struct S_NAME.  */
> +#define LK_OFFSET(s_name, f_name)              \
> +  (FIELD_OFFSET (LK_FIELD (s_name, f_name)))
> +
> +/* Returns the container of field FNAME of struct SNAME located at address
> +   ADDR.  */
> +#define LK_CONTAINER_OF(addr, sname, fname)            \
> +  ((addr) - LK_OFFSET (sname, fname))
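
LK_CONTAINER_OF mirrors the kernel's container_of(): subtract the
member's offset from the member's address to recover the enclosing
struct.  In plain C, using offsetof() instead of the debug-info based
FIELD_OFFSET, the same idea looks like this (the struct is mine, for
illustration only):

```c
#include <stddef.h>
#include <stdint.h>

/* Plain-C analogue of LK_CONTAINER_OF / the kernel's container_of().  */
#define CONTAINER_OF(addr, type, member) \
  ((type *) ((uintptr_t) (addr) - offsetof (type, member)))

/* Hypothetical struct; in the patch the offsets come from debug info
   via LK_OFFSET, not from offsetof.  */
struct task
{
  int pid;
  struct task *next;   /* stand-in for an embedded list_head-style link  */
};
```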
> +
> +/* Divides numinator N by demoniator D and rounds up the result.  */

^ Spell check above ("numinator" -> "numerator", "demoniator" -> "denominator").

> +#define LK_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
> +
> +
> +/* Additional access macros to fields in the style of gdbtypes.h */
> +/* Returns the size of field FIELD (in bytes). If FIELD is an array returns
> +   the size of the whole array.  */
> +#define FIELD_SIZE(field)                      \
> +  TYPE_LENGTH (check_typedef (FIELD_TYPE (*field)))
> +
> +/* Returns the size of the target type of field FIELD (in bytes).  If FIELD is
> +   an array returns the size of its elements.  */
> +#define FIELD_TARGET_SIZE(field)               \
> +  TYPE_LENGTH (check_typedef (TYPE_TARGET_TYPE (FIELD_TYPE (*field))))
> +
> +/* Returns the offset of field FIELD (in bytes).  */
> +#define FIELD_OFFSET(field)                    \
> +  (FIELD_BITPOS (*field) / TARGET_CHAR_BIT)
> +
> +/* Provides the per_cpu_offset of cpu CPU.  If the architecture
> +   provides a get_percpu_offset hook, the call is passed to it.  Otherwise
> +   returns the __per_cpu_offset[CPU] element.  */
> +extern CORE_ADDR lk_get_percpu_offset (unsigned int cpu);
> +
> +/* Tests if a given task TASK is running. Returns either the cpu-id
> +   if running or LK_CPU_INVAL if not.  */
> +extern unsigned int lk_task_running (CORE_ADDR task);
> +#endif /* __LK_LOW_H__ */
> --
> 2.8.4
>
  
Omair Javaid April 20, 2017, 11:08 a.m. UTC | #2
Hi Philipp and Andreas,

I have some further comments on this patch, specifically about copying
task_struct->pid into ptid->lwp and using the task_struct address as the tid.

I see that we are overriding lwp and tid, which any target beneath might
be using differently.

So my suggestion for storing task_struct->pid and the task_struct address
is to use private_thread_info (see binutils-gdb/gdb/gdbthread.h) for this
information.

I also have reservations about the use of the name old_ptid in both
struct lk_private and struct lk_ptid_map.

The old_ptid naming is a little confusing; kindly choose distinguishable
names for the old_ptid variables in lk_private and lk_ptid_map.

Further, here's the implementation of the bitmap_weight function from the
Linux kernel.  Kindly see if your implementation can be improved and moved
to a generic area in GDB:

int __bitmap_weight(const unsigned long *bitmap, int bits)
{
        int k, w = 0, lim = bits/BITS_PER_LONG;

        for (k = 0; k < lim; k++)
                w += hweight_long(bitmap[k]);

        if (bits % BITS_PER_LONG)
                w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));

        return w;
}
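
Agreed that a generic bitmap-weight helper would be nice.  On hosts with
GCC/Clang, the same computation can lean on __builtin_popcountll, e.g.
(standalone sketch assuming 64-bit words; not what the patch currently
does):

```c
#include <stddef.h>

#define BITS_PER_WORD 64

/* Number of set bits among the first SIZE bits of BITMAP.  Bits past
   SIZE in the last word are masked off, as in the kernel version.  */
size_t
bitmap_hweight (const unsigned long long *bitmap, size_t size)
{
  size_t k, w = 0;

  for (k = 0; k < size / BITS_PER_WORD; k++)
    w += __builtin_popcountll (bitmap[k]);

  if (size % BITS_PER_WORD != 0)
    w += __builtin_popcountll (bitmap[k]
                               & ((1ULL << (size % BITS_PER_WORD)) - 1));

  return w;
}
```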

Thanks!

--
Omair.

On 16 March 2017 at 21:57, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> This patch implements a basic target_ops for Linux kernel support. In
> particular it models Linux tasks as GDB threads such that you are able to
> change to a given thread, get backtraces, disassemble the current frame
> etc..
>
> Currently the target_ops is designed only to work with static targets, i.e.
> dumps. Thus it lacks implementation for hooks like to_wait, to_resume or
> to_store_registers. Furthermore the mapping between a CPU and the
> task_struct of the running task is only be done once at initialization. See
> cover letter for a detailed discussion.
>
> Nevertheless i made some design decisions different to Peter [1] which are
> worth discussing. Especially storing the private data in a htab (or
> std::unordered_map if i had the time...) instead of global variables makes
> the code much nicer and less memory consuming.
>
> [1] https://sourceware.org/ml/gdb-patches/2016-12/msg00382.html
>
> gdb/ChangeLog:
>
>     * gdbarch.sh (lk_init_private): New hook.
>     * gdbarch.h: Regenerated.
>     * gdbarch.c: Regenerated.
>     * lk-low.h: New file.
>     * lk-low.c: New file.
>     * lk-lists.h: New file.
>     * lk-lists.c: New file.
>     * Makefile.in (SFILES, ALLDEPFILES): Add lk-low.c and lk-lists.c.
>     (HFILES_NO_SRCDIR): Add lk-low.h and lk-lists.h.
>     (ALL_TARGET_OBS): Add lk-low.o and lk-lists.o.
>     * configure.tgt (lk_target_obs): New variable with object files for Linux
>       kernel support.
>       (s390*-*-linux*): Add lk_target_obs.
> ---
>  gdb/Makefile.in   |   8 +
>  gdb/configure.tgt |   6 +-
>  gdb/gdbarch.c     |  31 ++
>  gdb/gdbarch.h     |   7 +
>  gdb/gdbarch.sh    |   4 +
>  gdb/lk-lists.c    |  47 +++
>  gdb/lk-lists.h    |  56 ++++
>  gdb/lk-low.c      | 833 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  gdb/lk-low.h      | 310 ++++++++++++++++++++
>  9 files changed, 1301 insertions(+), 1 deletion(-)
>  create mode 100644 gdb/lk-lists.c
>  create mode 100644 gdb/lk-lists.h
>  create mode 100644 gdb/lk-low.c
>  create mode 100644 gdb/lk-low.h
>
> diff --git a/gdb/Makefile.in b/gdb/Makefile.in
> index 0818742..9387c66 100644
> --- a/gdb/Makefile.in
> +++ b/gdb/Makefile.in
> @@ -817,6 +817,8 @@ ALL_TARGET_OBS = \
>         iq2000-tdep.o \
>         linux-record.o \
>         linux-tdep.o \
> +       lk-lists.o \
> +       lk-low.o \
>         lm32-tdep.o \
>         m32c-tdep.o \
>         m32r-linux-tdep.o \
> @@ -1103,6 +1105,8 @@ SFILES = \
>         jit.c \
>         language.c \
>         linespec.c \
> +       lk-lists.c \
> +       lk-low.c \
>         location.c \
>         m2-exp.y \
>         m2-lang.c \
> @@ -1350,6 +1354,8 @@ HFILES_NO_SRCDIR = \
>         linux-nat.h \
>         linux-record.h \
>         linux-tdep.h \
> +       lk-lists.h \
> +       lk-low.h \
>         location.h \
>         m2-lang.h \
>         m32r-tdep.h \
> @@ -2547,6 +2553,8 @@ ALLDEPFILES = \
>         linux-fork.c \
>         linux-record.c \
>         linux-tdep.c \
> +       lk-lists.c \
> +       lk-low.c \
>         lm32-tdep.c \
>         m32r-linux-nat.c \
>         m32r-linux-tdep.c \
> diff --git a/gdb/configure.tgt b/gdb/configure.tgt
> index cb909e7..8d87fea 100644
> --- a/gdb/configure.tgt
> +++ b/gdb/configure.tgt
> @@ -34,6 +34,10 @@ case $targ in
>      ;;
>  esac
>
> +# List of object files for Linux kernel support.  To be included into *-linux*
> +# targets which support Linux kernel debugging.
> +lk_target_obs="lk-lists.o lk-low.o"
> +
>  # map target info into gdb names.
>
>  case "${targ}" in
> @@ -479,7 +483,7 @@ powerpc*-*-*)
>  s390*-*-linux*)
>         # Target: S390 running Linux
>         gdb_target_obs="s390-linux-tdep.o solib-svr4.o linux-tdep.o \
> -                       linux-record.o"
> +                       linux-record.o ${lk_target_obs}"
>         build_gdbserver=yes
>         ;;
>
> diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
> index 87eafb2..5509a6c 100644
> --- a/gdb/gdbarch.c
> +++ b/gdb/gdbarch.c
> @@ -349,6 +349,7 @@ struct gdbarch
>    gdbarch_addressable_memory_unit_size_ftype *addressable_memory_unit_size;
>    char ** disassembler_options;
>    const disasm_options_t * valid_disassembler_options;
> +  gdbarch_lk_init_private_ftype *lk_init_private;
>  };
>
>  /* Create a new ``struct gdbarch'' based on information provided by
> @@ -1139,6 +1140,12 @@ gdbarch_dump (struct gdbarch *gdbarch, struct ui_file *file)
>                        "gdbarch_dump: iterate_over_regset_sections = <%s>\n",
>                        host_address_to_string (gdbarch->iterate_over_regset_sections));
>    fprintf_unfiltered (file,
> +                      "gdbarch_dump: gdbarch_lk_init_private_p() = %d\n",
> +                      gdbarch_lk_init_private_p (gdbarch));
> +  fprintf_unfiltered (file,
> +                      "gdbarch_dump: lk_init_private = <%s>\n",
> +                      host_address_to_string (gdbarch->lk_init_private));
> +  fprintf_unfiltered (file,
>                        "gdbarch_dump: long_bit = %s\n",
>                        plongest (gdbarch->long_bit));
>    fprintf_unfiltered (file,
> @@ -5008,6 +5015,30 @@ set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch,
>    gdbarch->valid_disassembler_options = valid_disassembler_options;
>  }
>
> +int
> +gdbarch_lk_init_private_p (struct gdbarch *gdbarch)
> +{
> +  gdb_assert (gdbarch != NULL);
> +  return gdbarch->lk_init_private != NULL;
> +}
> +
> +void
> +gdbarch_lk_init_private (struct gdbarch *gdbarch)
> +{
> +  gdb_assert (gdbarch != NULL);
> +  gdb_assert (gdbarch->lk_init_private != NULL);
> +  if (gdbarch_debug >= 2)
> +    fprintf_unfiltered (gdb_stdlog, "gdbarch_lk_init_private called\n");
> +  gdbarch->lk_init_private (gdbarch);
> +}
> +
> +void
> +set_gdbarch_lk_init_private (struct gdbarch *gdbarch,
> +                             gdbarch_lk_init_private_ftype lk_init_private)
> +{
> +  gdbarch->lk_init_private = lk_init_private;
> +}
> +
>
>  /* Keep a registry of per-architecture data-pointers required by GDB
>     modules.  */
> diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
> index 34f82a7..c03bf00 100644
> --- a/gdb/gdbarch.h
> +++ b/gdb/gdbarch.h
> @@ -1553,6 +1553,13 @@ extern void set_gdbarch_disassembler_options (struct gdbarch *gdbarch, char ** d
>
>  extern const disasm_options_t * gdbarch_valid_disassembler_options (struct gdbarch *gdbarch);
>  extern void set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch, const disasm_options_t * valid_disassembler_options);
> +/* Initiate architecture dependent private data for the linux-kernel target. */
> +
> +extern int gdbarch_lk_init_private_p (struct gdbarch *gdbarch);
> +
> +typedef void (gdbarch_lk_init_private_ftype) (struct gdbarch *gdbarch);
> +extern void gdbarch_lk_init_private (struct gdbarch *gdbarch);
> +extern void set_gdbarch_lk_init_private (struct gdbarch *gdbarch, gdbarch_lk_init_private_ftype *lk_init_private);
>
>  /* Definition for an unknown syscall, used basically in error-cases.  */
>  #define UNKNOWN_SYSCALL (-1)
> diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
> index 39b1f94..cad45d1 100755
> --- a/gdb/gdbarch.sh
> +++ b/gdb/gdbarch.sh
> @@ -1167,6 +1167,10 @@ m:int:addressable_memory_unit_size:void:::default_addressable_memory_unit_size::
>  v:char **:disassembler_options:::0:0::0:pstring_ptr (gdbarch->disassembler_options)
>  v:const disasm_options_t *:valid_disassembler_options:::0:0::0:host_address_to_string (gdbarch->valid_disassembler_options)
>
> +# Initialize architecture dependent private data for the linux-kernel
> +# target.
> +M:void:lk_init_private:void:
> +
>  EOF
>  }
>
> diff --git a/gdb/lk-lists.c b/gdb/lk-lists.c
> new file mode 100644
> index 0000000..55d11bd
> --- /dev/null
> +++ b/gdb/lk-lists.c
> @@ -0,0 +1,47 @@
> +/* Iterators for internal data structures of the Linux kernel.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +
> +#include "inferior.h"
> +#include "lk-lists.h"
> +#include "lk-low.h"
> +
> +/* Returns next entry from struct list_head CURR while iterating field
> +   SNAME->FNAME.  */
> +
> +CORE_ADDR
> +lk_list_head_next (CORE_ADDR curr, const char *sname, const char *fname)
> +{
> +  CORE_ADDR next, next_prev;
> +
> +  /* We must always assume that the data we handle is corrupted.  Thus use
> +     curr->next->prev == curr as sanity check.  */
> +  next = lk_read_addr (curr + LK_OFFSET (list_head, next));
> +  next_prev = lk_read_addr (next + LK_OFFSET (list_head, prev));
> +
> +  if (!curr || curr != next_prev)
> +    {
> +      error (_("Memory corruption detected while iterating list_head at "\
> +              "0x%s belonging to list %s->%s."),
> +            phex (curr, lk_builtin_type_size (unsigned_long)), sname, fname);
> +    }
> +
> +  return next;
> +}
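The `curr->next->prev == curr` sanity check above can be sketched host-side on an ordinary doubly linked list. This is only an illustration of the check, with the target-memory reads replaced by pointer dereferences; the names are not part of the patch:

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Return CURR's successor, or NULL when the links are inconsistent
   (curr->next->prev != curr), mirroring lk_list_head_next's sanity
   check against corrupted list data.  */
static struct list_head *
checked_next (struct list_head *curr)
{
  if (curr == NULL || curr->next == NULL || curr->next->prev != curr)
    return NULL;  /* corrupted list */
  return curr->next;
}
```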
> diff --git a/gdb/lk-lists.h b/gdb/lk-lists.h
> new file mode 100644
> index 0000000..f9c2a85
> --- /dev/null
> +++ b/gdb/lk-lists.h
> @@ -0,0 +1,56 @@
> +/* Iterators for internal data structures of the Linux kernel.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef __LK_LISTS_H__
> +#define __LK_LISTS_H__
> +
> +extern CORE_ADDR lk_list_head_next (CORE_ADDR curr, const char *sname,
> +                                   const char *fname);
> +
> +/* Iterator over field SNAME->FNAME of type struct list_head starting at
> +   address START of type struct list_head.  This iterator is intended to be
> +   used for lists initialized with the macro LIST_HEAD (include/linux/list.h)
> +   in the kernel, i.e. lists where START is a global variable of type struct
> +   list_head and _not_ of type struct SNAME like the rest of the list.  Thus
> +   START itself will not be iterated over but only used to start/terminate
> +   the iteration.  */
> +
> +#define lk_list_for_each(next, start, sname, fname)            \
> +  for ((next) = lk_list_head_next ((start), #sname, #fname);   \
> +       (next) != (start);                                      \
> +       (next) = lk_list_head_next ((next), #sname, #fname))
> +
> +/* Iterator over struct SNAME linked together via field SNAME->FNAME of type
> +   struct list_head starting at address START of type struct SNAME.  In
> +   contrast to the iterator above, START is a "full" member of the list and
> +   thus will be iterated over.  */
> +
> +#define lk_list_for_each_container(cont, start, sname, fname)  \
> +  CORE_ADDR _next;                                             \
> +  bool _first_loop = true;                                     \
> +  for ((cont) = (start),                                       \
> +       _next = (start) + LK_OFFSET (sname, fname);             \
> +                                                               \
> +       (cont) != (start) || _first_loop;                       \
> +                                                               \
> +       _next = lk_list_head_next (_next, #sname, #fname),      \
> +       (cont) = LK_CONTAINER_OF (_next, sname, fname),         \
> +       _first_loop = false)
> +
> +#endif /* __LK_LISTS_H__ */
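The container iterator relies on the usual container_of arithmetic, i.e. subtracting the field offset from the address of the embedded list_head. A host-side sketch of that step, with `offsetof` standing in for LK_OFFSET (which in the patch reads the offset from debug info) and an illustrative task-like struct:

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* A task-like struct embedding a list_head, as task_struct embeds
   'tasks'.  Illustrative only; the real offsets come from debug info.  */
struct task { int pid; struct list_head tasks; };

/* Recover the enclosing struct task from the address of its embedded
   'tasks' field, the container_of arithmetic behind LK_CONTAINER_OF.  */
static struct task *
task_container_of (struct list_head *lh)
{
  return (struct task *) ((char *) lh - offsetof (struct task, tasks));
}
```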
> diff --git a/gdb/lk-low.c b/gdb/lk-low.c
> new file mode 100644
> index 0000000..768f228
> --- /dev/null
> +++ b/gdb/lk-low.c
> @@ -0,0 +1,833 @@
> +/* Basic Linux kernel support, architecture independent.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#include "defs.h"
> +
> +#include "block.h"
> +#include "exceptions.h"
> +#include "frame.h"
> +#include "gdbarch.h"
> +#include "gdbcore.h"
> +#include "gdbthread.h"
> +#include "gdbtypes.h"
> +#include "inferior.h"
> +#include "lk-lists.h"
> +#include "lk-low.h"
> +#include "objfiles.h"
> +#include "observer.h"
> +#include "solib.h"
> +#include "target.h"
> +#include "value.h"
> +
> +#include <algorithm>
> +
> +struct target_ops *linux_kernel_ops = NULL;
> +
> +/* Initialize a private data entry for an address, where NAME is the name
> +   of the symbol, i.e. variable name in Linux, ALIAS the name used to
> +   retrieve the entry from hashtab, and SILENT a flag to determine if
> +   errors should be ignored.
> +
> +   Returns a pointer to the new entry.  In case of an error, either returns
> +   NULL (SILENT = TRUE) or throws an error (SILENT = FALSE).  If SILENT = TRUE,
> +   the caller is responsible for checking for errors.
> +
> +   Do not use directly, use LK_DECLARE_* macros defined in lk-low.h instead.  */
> +
> +struct lk_private_data *
> +lk_init_addr (const char *name, const char *alias, int silent)
> +{
> +  struct lk_private_data *data;
> +  struct bound_minimal_symbol bmsym;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  bmsym = lookup_minimal_symbol (name, NULL, NULL);
> +
> +  if (bmsym.minsym == NULL)
> +    {
> +      if (!silent)
> +       error (_("Could not find address %s.  Aborting."), alias);
> +      return NULL;
> +    }
> +
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = alias;
> +  data->data.addr = BMSYMBOL_VALUE_ADDRESS (bmsym);
> +
> +  new_slot = lk_find_slot (alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Same as lk_init_addr but for structs.  */
> +
> +struct lk_private_data *
> +lk_init_struct (const char *name, const char *alias, int silent)
> +{
> +  struct lk_private_data *data;
> +  const struct block *global;
> +  const struct symbol *sym;
> +  struct type *type;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  global = block_global_block (get_selected_block (0));
> +  sym = lookup_symbol (name, global, STRUCT_DOMAIN, NULL).symbol;
> +
> +  if (sym != NULL)
> +    {
> +      type = SYMBOL_TYPE (sym);
> +      goto out;
> +    }
> +
> +  /* Check for "typedef struct { ... } name;"-like definitions.  */
> +  sym = lookup_symbol (name, global, VAR_DOMAIN, NULL).symbol;
> +  if (sym == NULL)
> +    goto error;
> +
> +  type = check_typedef (SYMBOL_TYPE (sym));
> +
> +  if (TYPE_CODE (type) == TYPE_CODE_STRUCT)
> +    goto out;
> +
> +error:
> +  if (!silent)
> +    error (_("Could not find %s.  Aborting."), alias);
> +
> +  return NULL;
> +
> +out:
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = alias;
> +  data->data.type = type;
> +
> +  new_slot = lk_find_slot (alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Nearly the same as lk_init_addr, with the difference that two names are
> +   needed, i.e. the struct name S_NAME containing the field with name
> +   F_NAME.  */
> +
> +struct lk_private_data *
> +lk_init_field (const char *s_name, const char *f_name,
> +              const char *s_alias, const char *f_alias,
> +              int silent)
> +{
> +  struct lk_private_data *data;
> +  struct lk_private_data *parent;
> +  struct field *first, *last, *field;
> +  void **new_slot;
> +  void *old_slot;
> +
> +  if ((old_slot = lk_find (f_alias)) != NULL)
> +    return (struct lk_private_data *) old_slot;
> +
> +  parent = lk_find (s_alias);
> +  if (parent == NULL)
> +    {
> +      parent = lk_init_struct (s_name, s_alias, silent);
> +
> +      /* A NULL check is only needed for SILENT == true; otherwise
> +        lk_init_struct would already have thrown an error.  */
> +      if (parent == NULL)
> +       return NULL;
> +    }
> +
> +  first = TYPE_FIELDS (parent->data.type);
> +  last = first + TYPE_NFIELDS (parent->data.type);
> +  for (field = first; field < last; field++)
> +    {
> +      if (streq (field->name, f_name))
> +       break;
> +    }
> +
> +  if (field == last)
> +    {
> +      if (!silent)
> +       error (_("Could not find field %s->%s.  Aborting."), s_alias, f_name);
> +      return NULL;
> +    }
> +
> +  data = XCNEW (struct lk_private_data);
> +  data->alias = f_alias;
> +  data->data.field = field;
> +
> +  new_slot = lk_find_slot (f_alias);
> +  *new_slot = data;
> +
> +  return data;
> +}
> +
> +/* Map cpu number CPU to the original PTID from target beneath.  */
> +
> +static ptid_t
> +lk_cpu_to_old_ptid (const int cpu)
> +{
> +  struct lk_ptid_map *ptid_map;
> +
> +  for (ptid_map = LK_PRIVATE->old_ptid; ptid_map;
> +       ptid_map = ptid_map->next)
> +    {
> +      if (ptid_map->cpu == cpu)
> +       return ptid_map->old_ptid;
> +    }
> +
> +  error (_("Could not map CPU %d to original PTID.  Aborting."), cpu);
> +}
> +
> +/* Helper functions to read and return basic types at a given ADDRess.  */
> +
> +/* Read and return the integer value at address ADDR.  */
> +
> +int
> +lk_read_int (CORE_ADDR addr)
> +{
> +  size_t int_size = lk_builtin_type_size (int);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, int_size, endian);
> +}
> +
> +/* Read and return the unsigned integer value at address ADDR.  */
> +
> +unsigned int
> +lk_read_uint (CORE_ADDR addr)
> +{
> +  size_t uint_size = lk_builtin_type_size (unsigned_int);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, uint_size, endian);
> +}
> +
> +/* Read and return the long integer value at address ADDR.  */
> +
> +LONGEST
> +lk_read_long (CORE_ADDR addr)
> +{
> +  size_t long_size = lk_builtin_type_size (long);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_integer (addr, long_size, endian);
> +}
> +
> +/* Read and return the unsigned long integer value at address ADDR.  */
> +
> +ULONGEST
> +lk_read_ulong (CORE_ADDR addr)
> +{
> +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> +  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
> +  return read_memory_unsigned_integer (addr, ulong_size, endian);
> +}
> +
> +/* Read and return the address value at address ADDR.  */
> +
> +CORE_ADDR
> +lk_read_addr (CORE_ADDR addr)
> +{
> +  return (CORE_ADDR) lk_read_ulong (addr);
> +}
> +
> +/* Read a bitmap of SIZE bits starting at address ADDR.  Allocates and
> +   returns an array of ulongs.  The caller is responsible for freeing the
> +   array when it is no longer needed.  */
> +
> +ULONGEST *
> +lk_read_bitmap (CORE_ADDR addr, size_t size)
> +{
> +  ULONGEST *bitmap;
> +  size_t ulong_size, len;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  len = LK_DIV_ROUND_UP (size, ulong_size * LK_BITS_PER_BYTE);
> +  bitmap = XNEWVEC (ULONGEST, len);
> +
> +  for (size_t i = 0; i < len; i++)
> +    bitmap[i] = lk_read_ulong (addr + i * ulong_size);
> +
> +  return bitmap;
> +}
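The length computation above rounds the bit count up to whole target ulongs. A minimal host-side sketch of that arithmetic (function name is illustrative):

```c
#include <stddef.h>

/* Number of target 'unsigned long' words needed to hold SIZE_IN_BITS
   bits, mirroring LK_DIV_ROUND_UP (size, ulong_size * LK_BITS_PER_BYTE)
   in lk_read_bitmap.  */
static size_t
bitmap_word_count (size_t size_in_bits, size_t ulong_size)
{
  size_t bits_per_ulong = ulong_size * 8;
  return (size_in_bits + bits_per_ulong - 1) / bits_per_ulong;
}
```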
> +
> +/* Return the next set bit in bitmap BITMAP of size SIZE (in bits)
> +   starting from bit (index) BIT.  Return SIZE when the end of the bitmap
> +   was reached.  To iterate over all set bits use macro
> +   LK_BITMAP_FOR_EACH_SET_BIT defined in lk-low.h.  */
> +
> +size_t
> +lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size, size_t bit)
> +{
> +  size_t ulong_size, bits_per_ulong, elt;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> +  elt = bit / bits_per_ulong;
> +
> +  while (bit < size)
> +    {
> +      /* FIXME: Explain why using lsb0 bit order.  */
> +      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
> +       return bit;
> +
> +      bit++;
> +      if (bit % bits_per_ulong == 0)
> +       elt++;
> +    }
> +
> +  return size;
> +}
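The lsb0 indexing used here can be exercised host-side with a fixed 64-bit word size. This sketch reproduces the word/bit split of lk_bitmap_find_next_bit without the target reads:

```c
#include <stddef.h>
#include <stdint.h>

/* Find the next set bit at index >= BIT in an lsb0 bitmap of SIZE bits
   stored in 64-bit words; return SIZE when no further bit is set.  */
static size_t
find_next_bit64 (const uint64_t *bitmap, size_t size, size_t bit)
{
  while (bit < size)
    {
      if (bitmap[bit / 64] & ((uint64_t) 1 << (bit % 64)))
        return bit;
      bit++;
    }
  return size;
}
```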
> +
> +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP
> +   with size SIZE (in bits).  */
> +
> +size_t
> +lk_bitmap_hweight (ULONGEST *bitmap, size_t size)
> +{
> +  size_t ulong_size, bit, bits_per_ulong, elt, retval;
> +
> +  ulong_size = lk_builtin_type_size (unsigned_long);
> +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> +  elt = bit = 0;
> +  retval = 0;
> +
> +  while (bit < size)
> +    {
> +      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
> +       retval++;
> +
> +      bit++;
> +      if (bit % bits_per_ulong == 0)
> +       elt++;
> +    }
> +
> +  return retval;
> +}
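Note the shift must be done in unsigned long width (1UL), since `bit % bits_per_ulong` can exceed 31 on LP64 targets. The operation itself is a plain population count; a host-side sketch over 64-bit words:

```c
#include <stddef.h>
#include <stdint.h>

/* Population count over an array of 64-bit words holding SIZE bits,
   the operation lk_bitmap_hweight performs on target bitmaps.  */
static size_t
hweight64_array (const uint64_t *bitmap, size_t size)
{
  size_t count = 0;
  for (size_t bit = 0; bit < size; bit++)
    if (bitmap[bit / 64] & ((uint64_t) 1 << (bit % 64)))
      count++;
  return count;
}
```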
> +
> +/* Provide the per_cpu_offset of cpu CPU.  See comment in lk-low.h for
> +   details.  */
> +
> +CORE_ADDR
> +lk_get_percpu_offset (unsigned int cpu)
> +{
> +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> +  CORE_ADDR percpu_elt;
> +
> +  /* Give the architecture a chance to override the default behaviour.  */
> +  if (LK_HOOK->get_percpu_offset)
> +    return LK_HOOK->get_percpu_offset (cpu);
> +
> +  percpu_elt = LK_ADDR (__per_cpu_offset) + (ulong_size * cpu);
> +  return lk_read_addr (percpu_elt);
> +}
> +
> +
> +/* Test if a given task TASK is running.  See comment in lk-low.h for
> +   details.  */
> +
> +unsigned int
> +lk_task_running (CORE_ADDR task)
> +{
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +    {
> +      CORE_ADDR rq;
> +      CORE_ADDR curr;
> +
> +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +
> +      if (curr == task)
> +       break;
> +    }
> +
> +  if (cpu == size)
> +    cpu = LK_CPU_INVAL;
> +
> +  do_cleanups (old_chain);
> +  return cpu;
> +}
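The loop above compares the task against each online CPU's rq->curr. The comparison logic can be sketched host-side with the per-CPU reads replaced by an in-memory array (names and the CPU_INVAL stand-in are illustrative, not the patch's API):

```c
#include <stddef.h>

#define CPU_INVAL (~0U)

/* Given each online CPU's rq->curr task address, return the CPU that
   TASK is running on, or CPU_INVAL if it is not running; the comparison
   loop of lk_task_running without target-memory reads.  */
static unsigned int
task_running_cpu (const unsigned long *rq_curr, size_t ncpus,
                  unsigned long task)
{
  for (size_t cpu = 0; cpu < ncpus; cpu++)
    if (rq_curr[cpu] == task)
      return (unsigned int) cpu;
  return CPU_INVAL;
}
```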
> +
> +/* Update running tasks with information from struct rq->curr.  */
> +
> +static void
> +lk_update_running_tasks ()
> +{
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +    {
> +      struct thread_info *tp;
> +      CORE_ADDR rq, curr;
> +      LONGEST pid, inf_pid;
> +      ptid_t new_ptid, old_ptid;
> +
> +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +      pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> +      inf_pid = current_inferior ()->pid;
> +
> +      new_ptid = ptid_build (inf_pid, pid, curr);
> +      old_ptid = lk_cpu_to_old_ptid (cpu); /* FIXME not suitable for
> +                                             running targets? */
> +
> +      tp = find_thread_ptid (old_ptid);
> +      if (tp && tp->state != THREAD_EXITED)
> +       thread_change_ptid (old_ptid, new_ptid);
> +    }
> +  do_cleanups (old_chain);
> +}
> +
> +/* Update sleeping tasks by walking the task_structs starting from
> +   init_task.  */
> +
> +static void
> +lk_update_sleeping_tasks ()
> +{
> +  CORE_ADDR init_task, task, thread;
> +  int inf_pid;
> +
> +  inf_pid = current_inferior ()->pid;
> +  init_task = LK_ADDR (init_task);
> +
> +  lk_list_for_each_container (task, init_task, task_struct, tasks)
> +    {
> +      lk_list_for_each_container (thread, task, task_struct, thread_group)
> +       {
> +         int pid;
> +         ptid_t ptid;
> +         struct thread_info *tp;
> +
> +         pid = lk_read_int (thread + LK_OFFSET (task_struct, pid));
> +         ptid = ptid_build (inf_pid, pid, thread);
> +
> +         tp = find_thread_ptid (ptid);
> +         if (tp == NULL || tp->state == THREAD_EXITED)
> +           add_thread (ptid);
> +       }
> +    }
> +}
> +
> +/* Function for targets to_update_thread_list hook.  */
> +
> +static void
> +lk_update_thread_list (struct target_ops *target)
> +{
> +  prune_threads ();
> +  lk_update_running_tasks ();
> +  lk_update_sleeping_tasks ();
> +}
> +
> +/* Function for targets to_fetch_registers hook.  */
> +
> +static void
> +lk_fetch_registers (struct target_ops *target,
> +                   struct regcache *regcache, int regnum)
> +{
> +  CORE_ADDR task;
> +  unsigned int cpu;
> +
> +  task = (CORE_ADDR) ptid_get_tid (regcache_get_ptid (regcache));
> +  cpu = lk_task_running (task);
> +
> +  /* Let the target beneath fetch registers of running tasks.  */
> +  if (cpu != LK_CPU_INVAL)
> +    {
> +      struct cleanup *old_inferior_ptid;
> +
> +      old_inferior_ptid = save_inferior_ptid ();
> +      inferior_ptid = lk_cpu_to_old_ptid (cpu);
> +      linux_kernel_ops->beneath->to_fetch_registers (target, regcache, regnum);
> +      do_cleanups (old_inferior_ptid);
> +    }
> +  else
> +    {
> +      struct gdbarch *gdbarch;
> +      unsigned int i;
> +
> +      LK_HOOK->get_registers (task, target, regcache, regnum);
> +
> +      /* Mark all registers not found as unavailable.  */
> +      gdbarch = get_regcache_arch (regcache);
> +      for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
> +       {
> +         if (regcache_register_status (regcache, i) == REG_UNKNOWN)
> +           regcache_raw_supply (regcache, i, NULL);
> +       }
> +    }
> +}
> +
> +/* Function for targets to_pid_to_str hook.  Marks running tasks with an
> +   asterisk "*".  */
> +
> +static char *
> +lk_pid_to_str (struct target_ops *target, ptid_t ptid)
> +{
> +  static char buf[64];
> +  long pid;
> +  CORE_ADDR task;
> +
> +  pid = ptid_get_lwp (ptid);
> +  task = (CORE_ADDR) ptid_get_tid (ptid);
> +
> +  xsnprintf (buf, sizeof (buf), "PID: %5li%s, 0x%s",
> +            pid, ((lk_task_running (task) != LK_CPU_INVAL) ? "*" : ""),
> +            phex (task, lk_builtin_type_size (unsigned_long)));
> +
> +  return buf;
> +}
> +
> +/* Function for targets to_thread_name hook.  */
> +
> +static const char *
> +lk_thread_name (struct target_ops *target, struct thread_info *ti)
> +{
> +  static char buf[LK_TASK_COMM_LEN + 1];
> +  char tmp[LK_TASK_COMM_LEN + 1];
> +  CORE_ADDR task, comm;
> +  size_t size;
> +
> +  size = std::min ((unsigned int) LK_TASK_COMM_LEN,
> +                  LK_ARRAY_LEN (LK_FIELD (task_struct, comm)));
> +
> +  task = (CORE_ADDR) ptid_get_tid (ti->ptid);
> +  comm = task + LK_OFFSET (task_struct, comm);
> +  read_memory (comm, (gdb_byte *) tmp, size);
> +
> +  xsnprintf (buf, sizeof (buf), "%-16s", tmp);
> +
> +  return buf;
> +}
> +
> +/* Functions to initialize and free target_ops and its private data.  As well
> +   as functions for targets to_open/close/detach hooks.  */
> +
> +/* Check if OBJFILE is a Linux kernel.  */
> +
> +static int
> +lk_is_linux_kernel (struct objfile *objfile)
> +{
> +  int ok = 0;
> +
> +  if (objfile == NULL || !(objfile->flags & OBJF_MAINLINE))
> +    return 0;
> +
> +  ok += lookup_minimal_symbol ("linux_banner", NULL, objfile).minsym != NULL;
> +  ok += lookup_minimal_symbol ("_stext", NULL, objfile).minsym != NULL;
> +  ok += lookup_minimal_symbol ("_etext", NULL, objfile).minsym != NULL;
> +
> +  return (ok > 2);
> +}
> +
> +/* Initialize struct lk_private.  */
> +
> +static void
> +lk_init_private ()
> +{
> +  linux_kernel_ops->to_data = XCNEW (struct lk_private);
> +  LK_PRIVATE->hooks = XCNEW (struct lk_private_hooks);
> +  LK_PRIVATE->data = htab_create_alloc (31, (htab_hash) lk_hash_private_data,
> +                                       (htab_eq) lk_private_data_eq, NULL,
> +                                       xcalloc, xfree);
> +}
> +
> +/* Initialize architecture independent private data.  Must be called
> +   _after_ the symbol tables have been initialized.  */
> +
> +static void
> +lk_init_private_data ()
> +{
> +  if (LK_PRIVATE->data != NULL)
> +    htab_empty (LK_PRIVATE->data);
> +
> +  LK_DECLARE_FIELD (task_struct, tasks);
> +  LK_DECLARE_FIELD (task_struct, pid);
> +  LK_DECLARE_FIELD (task_struct, tgid);
> +  LK_DECLARE_FIELD (task_struct, thread_group);
> +  LK_DECLARE_FIELD (task_struct, comm);
> +  LK_DECLARE_FIELD (task_struct, thread);
> +
> +  LK_DECLARE_FIELD (list_head, next);
> +  LK_DECLARE_FIELD (list_head, prev);
> +
> +  LK_DECLARE_FIELD (rq, curr);
> +
> +  LK_DECLARE_FIELD (cpumask, bits);
> +
> +  LK_DECLARE_ADDR (init_task);
> +  LK_DECLARE_ADDR (runqueues);
> +  LK_DECLARE_ADDR (__per_cpu_offset);
> +  LK_DECLARE_ADDR (init_mm);
> +
> +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  /* linux 4.5+ */
> +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);    /* linux -4.4 */
> +  if (LK_ADDR (cpu_online_mask) == -1)
> +    error (_("Could not find address cpu_online_mask.  Aborting."));
> +}
> +
> +/* Frees the cpu to old ptid map.  */
> +
> +static void
> +lk_free_ptid_map ()
> +{
> +  while (LK_PRIVATE->old_ptid)
> +    {
> +      struct lk_ptid_map *tmp;
> +
> +      tmp = LK_PRIVATE->old_ptid;
> +      LK_PRIVATE->old_ptid = tmp->next;
> +      XDELETE (tmp);
> +    }
> +}
> +
> +/* Initialize the cpu to old ptid map.  Prefer the arch dependent
> +   map_running_task_to_cpu hook if provided, else assume that the PID used
> +   by the target beneath is the same as the PID in the task_struct.  See
> +   comment on lk_ptid_map in lk-low.h for details.  */
> +
> +static void
> +lk_init_ptid_map ()
> +{
> +  struct thread_info *ti;
> +  ULONGEST *cpu_online_mask;
> +  size_t size;
> +  unsigned int cpu;
> +  struct cleanup *old_chain;
> +
> +  if (LK_PRIVATE->old_ptid != NULL)
> +    lk_free_ptid_map ();
> +
> +  size = LK_BITMAP_SIZE (cpumask);
> +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> +  old_chain = make_cleanup (xfree, cpu_online_mask);
> +
> +  ALL_THREADS (ti)
> +    {
> +      struct lk_ptid_map *ptid_map = XCNEW (struct lk_ptid_map);
> +      CORE_ADDR rq, curr;
> +      int pid;
> +
> +      /* Give the architecture a chance to override the default behaviour.  */
> +      if (LK_HOOK->map_running_task_to_cpu)
> +       {
> +         ptid_map->cpu = LK_HOOK->map_running_task_to_cpu (ti);
> +       }
> +      else
> +       {
> +         LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> +           {
> +             rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> +             curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> +             pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> +
> +             if (pid == ptid_get_lwp (ti->ptid))
> +               {
> +                 ptid_map->cpu = cpu;
> +                 break;
> +               }
> +           }
> +         if (cpu == size)
> +           error (_("Could not map thread with pid %d, lwp %lu to a cpu."),
> +                  ti->ptid.pid, ti->ptid.lwp);
> +       }
> +      ptid_map->old_ptid = ti->ptid;
> +      ptid_map->next = LK_PRIVATE->old_ptid;
> +      LK_PRIVATE->old_ptid = ptid_map;
> +    }
> +
> +  do_cleanups (old_chain);
> +}
> +
> +/* Initialize all private data and push the Linux kernel target, if not
> +   already done.  */
> +
> +static void
> +lk_try_push_target ()
> +{
> +  struct gdbarch *gdbarch;
> +
> +  gdbarch = current_inferior ()->gdbarch;
> +  if (!(gdbarch && gdbarch_lk_init_private_p (gdbarch)))
> +    error (_("Linux kernel debugging not supported on %s."),
> +          gdbarch_bfd_arch_info (gdbarch)->printable_name);
> +
> +  lk_init_private ();
> +  lk_init_private_data ();
> +  gdbarch_lk_init_private (gdbarch);
> +  /* Check for required arch hooks.  */
> +  gdb_assert (LK_HOOK->get_registers);
> +
> +  lk_init_ptid_map ();
> +  lk_update_thread_list (linux_kernel_ops);
> +
> +  if (!target_is_pushed (linux_kernel_ops))
> +    push_target (linux_kernel_ops);
> +}
> +
> +/* Function for targets to_open hook.  */
> +
> +static void
> +lk_open (const char *args, int from_tty)
> +{
> +  struct objfile *objfile;
> +
> +  if (target_is_pushed (linux_kernel_ops))
> +    {
> +      printf_unfiltered (_("Linux kernel target already pushed.  Aborting.\n"));
> +      return;
> +    }
> +
> +  for (objfile = current_program_space->objfiles; objfile;
> +       objfile = objfile->next)
> +    {
> +      if (lk_is_linux_kernel (objfile)
> +         && ptid_get_pid (inferior_ptid) != 0)
> +       {
> +         lk_try_push_target ();
> +         return;
> +       }
> +    }
> +  printf_unfiltered (_("Could not find a valid Linux kernel object file.  "
> +                      "Aborting.\n"));
> +}
> +
> +/* Function for targets to_close hook.  Deletes all private data.  */
> +
> +static void
> +lk_close (struct target_ops *ops)
> +{
> +  htab_delete (LK_PRIVATE->data);
> +  lk_free_ptid_map ();
> +  XDELETE (LK_PRIVATE->hooks);
> +
> +  XDELETE (LK_PRIVATE);
> +  linux_kernel_ops->to_data = NULL;
> +}
> +
> +/* Function for targets to_detach hook.  */
> +
> +static void
> +lk_detach (struct target_ops *t, const char *args, int from_tty)
> +{
> +  struct target_ops *beneath = linux_kernel_ops->beneath;
> +
> +  unpush_target (linux_kernel_ops);
> +  reinit_frame_cache ();
> +  if (from_tty)
> +    printf_filtered (_("Linux kernel target detached.\n"));
> +
> +  beneath->to_detach (beneath, args, from_tty);
> +}
> +
> +/* Function for new objfile observer.  */
> +
> +static void
> +lk_observer_new_objfile (struct objfile *objfile)
> +{
> +  if (lk_is_linux_kernel (objfile)
> +      && ptid_get_pid (inferior_ptid) != 0)
> +    lk_try_push_target ();
> +}
> +
> +/* Function for inferior created observer.  */
> +
> +static void
> +lk_observer_inferior_created (struct target_ops *ops, int from_tty)
> +{
> +  struct objfile *objfile;
> +
> +  if (ptid_get_pid (inferior_ptid) == 0)
> +    return;
> +
> +  for (objfile = current_inferior ()->pspace->objfiles; objfile;
> +       objfile = objfile->next)
> +    {
> +      if (lk_is_linux_kernel (objfile))
> +       {
> +         lk_try_push_target ();
> +         return;
> +       }
> +    }
> +}
> +
> +/* Initialize the Linux kernel target.  */
> +
> +static void
> +init_linux_kernel_ops (void)
> +{
> +  struct target_ops *t;
> +
> +  if (linux_kernel_ops != NULL)
> +    return;
> +
> +  t = XCNEW (struct target_ops);
> +  t->to_shortname = "linux-kernel";
> +  t->to_longname = "linux kernel support";
> +  t->to_doc = "Adds support to debug the Linux kernel";
> +
> +  /* set t->to_data = struct lk_private in lk_init_private.  */
> +
> +  t->to_open = lk_open;
> +  t->to_close = lk_close;
> +  t->to_detach = lk_detach;
> +  t->to_fetch_registers = lk_fetch_registers;
> +  t->to_update_thread_list = lk_update_thread_list;
> +  t->to_pid_to_str = lk_pid_to_str;
> +  t->to_thread_name = lk_thread_name;
> +
> +  t->to_stratum = thread_stratum;
> +  t->to_magic = OPS_MAGIC;
> +
> +  linux_kernel_ops = t;
> +
> +  add_target (t);
> +}
> +
> +/* Provide a prototype to silence -Wmissing-prototypes.  */
> +extern initialize_file_ftype _initialize_linux_kernel;
> +
> +void
> +_initialize_linux_kernel (void)
> +{
> +  init_linux_kernel_ops ();
> +
> +  observer_attach_new_objfile (lk_observer_new_objfile);
> +  observer_attach_inferior_created (lk_observer_inferior_created);
> +}
> diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> new file mode 100644
> index 0000000..292ef97
> --- /dev/null
> +++ b/gdb/lk-low.h
> @@ -0,0 +1,310 @@
> +/* Basic Linux kernel support, architecture independent.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef __LK_LOW_H__
> +#define __LK_LOW_H__
> +
> +#include "target.h"
> +
> +extern struct target_ops *linux_kernel_ops;
> +
> +/* Copy constants defined in Linux kernel.  */
> +#define LK_TASK_COMM_LEN 16
> +#define LK_BITS_PER_BYTE 8
> +
> +/* Definitions used in linux kernel target.  */
> +#define LK_CPU_INVAL -1U
> +
> +/* Private data structs for this target.  */
> +/* Forward declarations.  */
> +struct lk_private_hooks;
> +struct lk_ptid_map;
> +
> +/* Short hand access to private data.  */
> +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> +#define LK_HOOK (LK_PRIVATE->hooks)
> +
> +struct lk_private
> +{
> +  /* Hashtab for needed addresses, structs and fields.  */
> +  htab_t data;
> +
> +  /* Linked list to map between cpu number and original ptid from target
> +     beneath.  */
> +  struct lk_ptid_map *old_ptid;
> +
> +  /* Hooks for architecture dependent functions.  */
> +  struct lk_private_hooks *hooks;
> +};
> +
> +/* We use the following convention for PTIDs:
> +
> +   ptid->pid = inferior's PID
> +   ptid->lwp = PID from task_struct
> +   ptid->tid = address of task_struct
> +
> +   Using the task_struct's address as TID has two reasons.  First, we need
> +   it quite often and there is no other reasonable way to pass it down.
> +   Second, it helps us distinguish swapper tasks, as they all have PID = 0.
> +
> +   Furthermore we cannot rely on the target beneath to use the same PID as
> +   the task_struct.  Thus we need a mapping between our PTID and the PTID
> +   of the target beneath.  Otherwise it is impossible to pass jobs, e.g.
> +   fetching registers of running tasks, to the target beneath.  */
> +
> +/* Private data struct to map between our and the target beneath PTID.  */
> +
> +struct lk_ptid_map
> +{
> +  struct lk_ptid_map *next;
> +  unsigned int cpu;
> +  ptid_t old_ptid;
> +};
> +
> +/* Private data struct to be stored in hashtab.  */
> +
> +struct lk_private_data
> +{
> +  const char *alias;
> +
> +  union
> +  {
> +    CORE_ADDR addr;
> +    struct type *type;
> +    struct field *field;
> +  } data;
> +};
> +
> +/* Wrapper for htab_hash_string to work with our private data.  */
> +
> +static inline hashval_t
> +lk_hash_private_data (const struct lk_private_data *entry)
> +{
> +  return htab_hash_string (entry->alias);
> +}
> +
> +/* Function for htab_eq to work with our private data.  */
> +
> +static inline int
> +lk_private_data_eq (const struct lk_private_data *entry,
> +                   const struct lk_private_data *element)
> +{
> +  return streq (entry->alias, element->alias);
> +}
> +
> +/* Wrapper for htab_find_slot to work with our private data.  Do not use
> +   directly, use the macros below instead.  */
> +
> +static inline void **
> +lk_find_slot (const char *alias)
> +{
> +  const struct lk_private_data dummy = { alias };
> +  return htab_find_slot (LK_PRIVATE->data, &dummy, INSERT);
> +}
> +
> +/* Wrapper for htab_find to work with our private data.  Do not use
> +   directly, use the macros below instead.  */
> +
> +static inline struct lk_private_data *
> +lk_find (const char *alias)
> +{
> +  const struct lk_private_data dummy = { alias };
> +  return (struct lk_private_data *) htab_find (LK_PRIVATE->data, &dummy);
> +}
> +
> +/* Functions to initialize private data.  Do not use directly, use the
> +   macros below instead.  */
> +
> +extern struct lk_private_data *lk_init_addr (const char *name,
> +                                            const char *alias, int silent);
> +extern struct lk_private_data *lk_init_struct (const char *name,
> +                                              const char *alias, int silent);
> +extern struct lk_private_data *lk_init_field (const char *s_name,
> +                                             const char *f_name,
> +                                             const char *s_alias,
> +                                             const char *f_alias, int silent);
> +
> +/* The names we use to store our private data in the hashtab.  */
> +
> +#define LK_STRUCT_ALIAS(s_name) ("struct " #s_name)
> +#define LK_FIELD_ALIAS(s_name, f_name) (#s_name " " #f_name)
> +
> +/* Macros to initialize addresses and fields, where (S_/F_)NAME is the
> +   variable's name as used in Linux.  LK_DECLARE_FIELD also initializes the
> +   corresponding struct entry.  Throws an error if no symbol with the given
> +   name is found.  */
> +
> +#define LK_DECLARE_ADDR(name) \
> +  lk_init_addr (#name, #name, 0)
> +#define LK_DECLARE_FIELD(s_name, f_name) \
> +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> +                LK_FIELD_ALIAS (s_name, f_name), 0)
> +
> +/* Same as LK_DECLARE_*, but returns NULL instead of throwing an error if no
> +   symbol was found.  The caller is responsible for checking for possible
> +   errors.  */
> +
> +#define LK_DECLARE_ADDR_SILENT(name) \
> +  lk_init_addr (#name, #name, 1)
> +#define LK_DECLARE_FIELD_SILENT(s_name, f_name) \
> +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> +                LK_FIELD_ALIAS (s_name, f_name), 1)
> +
> +/* Same as LK_DECLARE_*_SILENT, but allows you to give an ALIAS name.  If used
> +   for a struct, the struct has to be declared explicitly _before_ any of its
> +   fields.  They are meant to be used when a variable in the kernel was simply
> +   renamed (at least from our point of view).  The caller is responsible for
> +   checking for possible errors.  */
> +
> +#define LK_DECLARE_ADDR_ALIAS(name, alias) \
> +  lk_init_addr (#name, #alias, 1)
> +#define LK_DECLARE_STRUCT_ALIAS(s_name, alias) \
> +  lk_init_struct (#s_name, LK_STRUCT_ALIAS (alias), 1)
> +#define LK_DECLARE_FIELD_ALIAS(s_alias, f_name, f_alias) \
> +  lk_init_field (NULL, #f_name, LK_STRUCT_ALIAS (s_alias), \
> +                LK_FIELD_ALIAS (s_alias, f_alias), 1)
> +
> +/* Macros to retrieve private data from the hashtab.  Return NULL (-1 for
> +   LK_ADDR) if no entry with the given ALIAS exists.  The caller only needs
> +   to check for possible errors if not already done at initialization.  */
> +
> +#define LK_ADDR(alias) \
> +  (lk_find (#alias) ? (lk_find (#alias))->data.addr : -1)
> +#define LK_STRUCT(alias) \
> +  (lk_find (LK_STRUCT_ALIAS (alias)) \
> +   ? (lk_find (LK_STRUCT_ALIAS (alias)))->data.type \
> +   : NULL)
> +#define LK_FIELD(s_alias, f_alias) \
> +  (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)) \
> +   ? (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)))->data.field \
> +   : NULL)
> +
> +
> +/* Definitions for architecture dependent hooks.  */
> +/* Hook to read registers from the target and supply their content
> +   to the regcache.  */
> +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> +                                      struct target_ops *target,
> +                                      struct regcache *regcache,
> +                                      int regnum);
> +
> +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
> +   do not use the __per_cpu_offset array to determine the offset have to
> +   supply this hook.  */
> +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> +
> +/* Hook to map a running task to a logical CPU.  Required if the target
> +   beneath uses a different PID than struct rq.  */
> +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct thread_info *ti);
> +
> +struct lk_private_hooks
> +{
> +  /* required */
> +  lk_hook_get_registers get_registers;
> +
> +  /* optional, required if __per_cpu_offset array is not used to determine
> +     offset.  */
> +  lk_hook_get_percpu_offset get_percpu_offset;
> +
> +  /* optional, required if the target beneath uses a different PID than
> +     struct rq.  */
> +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> +};
> +
> +/* Helper functions to read and return a value at a given ADDRess.  */
> +extern int lk_read_int (CORE_ADDR addr);
> +extern unsigned int lk_read_uint (CORE_ADDR addr);
> +extern LONGEST lk_read_long (CORE_ADDR addr);
> +extern ULONGEST lk_read_ulong (CORE_ADDR addr);
> +extern CORE_ADDR lk_read_addr (CORE_ADDR addr);
> +
> +/* Reads a bitmap at a given ADDRess of size SIZE (in bits).  Allocates and
> +   returns an array of ulongs.  The caller is responsible for freeing the
> +   array when it is no longer needed.  */
> +extern ULONGEST *lk_read_bitmap (CORE_ADDR addr, size_t size);
> +
> +/* Walks the bitmap BITMAP of size SIZE starting from bit (index) BIT.
> +   Returns the index of the next set bit, or SIZE when the end of the bitmap
> +   was reached.  To iterate over all set bits use the macro
> +   LK_BITMAP_FOR_EACH_SET_BIT defined below.  */
> +extern size_t lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size,
> +                                      size_t bit);
> +#define LK_BITMAP_FOR_EACH_SET_BIT(bitmap, size, bit)                  \
> +  for ((bit) = lk_bitmap_find_next_bit ((bitmap), (size), 0);          \
> +       (bit) < (size);                                                 \
> +       (bit) = lk_bitmap_find_next_bit ((bitmap), (size), (bit) + 1))
> +
> +/* Returns the size of BITMAP in bits.  */
> +#define LK_BITMAP_SIZE(bitmap) \
> +  (FIELD_SIZE (LK_FIELD (bitmap, bits)) * LK_BITS_PER_BYTE)
> +
> +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP with
> +   size SIZE (in bits).  */
> +extern size_t lk_bitmap_hweight (ULONGEST *bitmap, size_t size);
> +
> +
> +/* Shorthand access to the current gdbarch's builtin types and their
> +   size (in bytes).  For TYPE, replace spaces " " by underscores "_", e.g.
> +   "unsigned int" => "unsigned_int".  */
> +#define lk_builtin_type(type)                                  \
> +  (builtin_type (current_inferior ()->gdbarch)->builtin_##type)
> +#define lk_builtin_type_size(type)             \
> +  (lk_builtin_type (type)->length)
> +
> +/* If field FIELD is an array, returns its length (in number of elements).  */
> +#define LK_ARRAY_LEN(field)                    \
> +  (FIELD_SIZE (field) / FIELD_TARGET_SIZE (field))
> +
> +/* Shorthand access to the offset of field F_NAME in struct S_NAME.  */
> +#define LK_OFFSET(s_name, f_name)              \
> +  (FIELD_OFFSET (LK_FIELD (s_name, f_name)))
> +
> +/* Returns the address of the struct SNAME containing the field FNAME
> +   located at address ADDR.  */
> +#define LK_CONTAINER_OF(addr, sname, fname)            \
> +  ((addr) - LK_OFFSET (sname, fname))
> +
> +/* Divides numerator N by denominator D and rounds up the result.  */
> +#define LK_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
> +
> +
> +/* Additional access macros for fields in the style of gdbtypes.h.  */
> +/* Returns the size of field FIELD (in bytes).  If FIELD is an array,
> +   returns the size of the whole array.  */
> +#define FIELD_SIZE(field)                      \
> +  TYPE_LENGTH (check_typedef (FIELD_TYPE (*field)))
> +
> +/* Returns the size of the target type of field FIELD (in bytes).  If FIELD
> +   is an array, returns the size of its elements.  */
> +#define FIELD_TARGET_SIZE(field)               \
> +  TYPE_LENGTH (check_typedef (TYPE_TARGET_TYPE (FIELD_TYPE (*field))))
> +
> +/* Returns the offset of field FIELD (in bytes).  */
> +#define FIELD_OFFSET(field)                    \
> +  (FIELD_BITPOS (*field) / TARGET_CHAR_BIT)
> +
> +/* Provides the per_cpu_offset of cpu CPU.  If the architecture
> +   provides a get_percpu_offset hook, the call is passed to it.  Otherwise
> +   returns the __per_cpu_offset[CPU] element.  */
> +extern CORE_ADDR lk_get_percpu_offset (unsigned int cpu);
> +
> +/* Tests if a given task TASK is running. Returns either the cpu-id
> +   if running or LK_CPU_INVAL if not.  */
> +extern unsigned int lk_task_running (CORE_ADDR task);
> +#endif /* __LK_LOW_H__ */
> --
> 2.8.4
>
  
Andreas Arnez April 24, 2017, 3:24 p.m. UTC | #3
On Thu, Apr 20 2017, Omair Javaid wrote:

> Hi Philipp and Andreas,
>
> I have some further comments on this patch specifically about copying
> task_struct->pid into ptid->lwp and using task_struct address as tid.
>
> I see that we are overriding lwp, tid which any target beneath might
> be using differently.
>
> So suggestion about storing task_struct->pid or task_struct address is
> to use private_thread_info in binutils-gdb/gdb/gdbthread.h for this
> information.

The current version of the patch series is mainly focused on dump
targets.  Remote targets require some additional changes.  We've
discussed the use of private_thread_info before, and the last I've seen
is that it is not suitable either, because remote.c uses it already:

  https://sourceware.org/ml/gdb-patches/2017-02/msg00543.html

In my view, the private_thread_info field really is a hack, and we are
now facing its limits.  It provides some space for a single thread layer
to store information into, but not for multiple such layers.  In the
case of the Linux kernel we at least have two different thread layers:
the CPU layer (each "thread" is a CPU), and the kernel task layer on top
of that.

I think we need to allow a target to maintain its *own* thread list.
The CPU "thread list" would be maintained by the target beneath
(remote/dump), and the kernel task list would be maintained by the LK
target.  The ptid namespaces could be completely separate.

> I also have reservation about use of old_ptid naming in struct
> lk_private and struct lk_ptid_map.
>
> old_ptid naming is a little confusing; kindly choose a distinguishable
> name for the old_ptid variables in both lk_private and lk_ptid_map.
>
> Further Here's an implementation of bitmap_weight function from linux
> kernel. Kindly see if your implementation can be improved and moved to
> a generic area in gdb.
>
>  10 int __bitmap_weight(const unsigned long *bitmap, int bits)
>  11 {
>  12         int k, w = 0, lim = bits/BITS_PER_LONG;
>  13
>  14         for (k = 0; k < lim; k++)
>  15                 w += hweight_long(bitmap[k]);
>  16
>  17         if (bits % BITS_PER_LONG)
>  18                 w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
>  19
>  20         return w;
>  21 }

The __bitmap_weight function is specific to Linux, so I'm not sure we
want to move it to a generic area.  For big-endian targets the function
depends on the width of Linux' "unsigned long" type, because
BITMAP_LAST_WORD_MASK builds a mask for the *least significant* bits
instead of the *lowest-addressed* ones.

It's probably true that the performance of lk_bitmap_hweight could be
improved.  For instance, with some care a function like popcount_hwi()
in GCC's hwint.h could be exploited, even if the target's word width and
byte order may not match the GDB client's.  This would not make the
function simpler, though.

--
Andreas
  
Yao Qi May 2, 2017, 11:14 a.m. UTC | #4
Philipp Rudo <prudo@linux.vnet.ibm.com> writes:

Hi Philipp,

> +/* Initialize architecture independent private data.  Must be called
> +   _after_ symbol tables were initialized.  */
> +
> +static void
> +lk_init_private_data ()
> +{
> +  if (LK_PRIVATE->data != NULL)
> +    htab_empty (LK_PRIVATE->data);
> +
> +  LK_DECLARE_FIELD (task_struct, tasks);
> +  LK_DECLARE_FIELD (task_struct, pid);
> +  LK_DECLARE_FIELD (task_struct, tgid);
> +  LK_DECLARE_FIELD (task_struct, thread_group);
> +  LK_DECLARE_FIELD (task_struct, comm);
> +  LK_DECLARE_FIELD (task_struct, thread);
> +
> +  LK_DECLARE_FIELD (list_head, next);
> +  LK_DECLARE_FIELD (list_head, prev);
> +
> +  LK_DECLARE_FIELD (rq, curr);
> +
> +  LK_DECLARE_FIELD (cpumask, bits);
> +
> +  LK_DECLARE_ADDR (init_task);
> +  LK_DECLARE_ADDR (runqueues);
> +  LK_DECLARE_ADDR (__per_cpu_offset);
> +  LK_DECLARE_ADDR (init_mm);
> +
> +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);	/* linux 4.5+ */
> +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);	/* linux -4.4 */
> +  if (LK_ADDR (cpu_online_mask) == -1)
> +    error (_("Could not find address cpu_online_mask.  Aborting."));
> +}
> +

> +
> +/* Initialize linux kernel target.  */
> +
> +static void
> +init_linux_kernel_ops (void)
> +{
> +  struct target_ops *t;
> +
> +  if (linux_kernel_ops != NULL)
> +    return;
> +
> +  t = XCNEW (struct target_ops);
> +  t->to_shortname = "linux-kernel";
> +  t->to_longname = "linux kernel support";
> +  t->to_doc = "Adds support to debug the Linux kernel";
> +
> +  /* set t->to_data = struct lk_private in lk_init_private.  */
> +
> +  t->to_open = lk_open;
> +  t->to_close = lk_close;
> +  t->to_detach = lk_detach;
> +  t->to_fetch_registers = lk_fetch_registers;
> +  t->to_update_thread_list = lk_update_thread_list;
> +  t->to_pid_to_str = lk_pid_to_str;
> +  t->to_thread_name = lk_thread_name;
> +
> +  t->to_stratum = thread_stratum;
> +  t->to_magic = OPS_MAGIC;
> +
> +  linux_kernel_ops = t;
> +
> +  add_target (t);
> +}
> +
> +/* Provide a prototype to silence -Wmissing-prototypes.  */
> +extern initialize_file_ftype _initialize_linux_kernel;
> +
> +void
> +_initialize_linux_kernel (void)
> +{
> +  init_linux_kernel_ops ();
> +
> +  observer_attach_new_objfile (lk_observer_new_objfile);
> +  observer_attach_inferior_created (lk_observer_inferior_created);
> +}
> diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> new file mode 100644
> index 0000000..292ef97
> --- /dev/null
> +++ b/gdb/lk-low.h
> @@ -0,0 +1,310 @@
> +/* Basic Linux kernel support, architecture independent.
> +
> +   Copyright (C) 2016 Free Software Foundation, Inc.
> +
> +   This file is part of GDB.
> +
> +   This program is free software; you can redistribute it and/or modify
> +   it under the terms of the GNU General Public License as published by
> +   the Free Software Foundation; either version 3 of the License, or
> +   (at your option) any later version.
> +
> +   This program is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +   GNU General Public License for more details.
> +
> +   You should have received a copy of the GNU General Public License
> +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> +
> +#ifndef __LK_LOW_H__
> +#define __LK_LOW_H__
> +
> +#include "target.h"
> +
> +extern struct target_ops *linux_kernel_ops;
> +
> +/* Copy constants defined in Linux kernel.  */
> +#define LK_TASK_COMM_LEN 16
> +#define LK_BITS_PER_BYTE 8
> +
> +/* Definitions used in linux kernel target.  */
> +#define LK_CPU_INVAL -1U
> +
> +/* Private data structs for this target.  */
> +/* Forward declarations.  */
> +struct lk_private_hooks;
> +struct lk_ptid_map;
> +
> +/* Short hand access to private data.  */
> +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> +#define LK_HOOK (LK_PRIVATE->hooks)
> +
> +struct lk_private

"private" here is a little confusing.  How about renaming it to "linux_kernel"?

> +{
> +  /* Hashtab for needed addresses, structs and fields.  */
> +  htab_t data;
> +
> +  /* Linked list to map between cpu number and original ptid from target
> +     beneath.  */
> +  struct lk_ptid_map *old_ptid;
> +
> +  /* Hooks for architecture dependent functions.  */
> +  struct lk_private_hooks *hooks;
> +};
> +

Secondly, can we change it to a class, so that the function pointers in
lk_private_hooks become virtual functions?  gdbarch_lk_init_private would
then return a pointer to an instance of a sub-class of "linux_kernel".

lk_init_private_data can be put in the constructor of the base class, to add
entries to "data", and each sub-class (per gdbarch) can add its own
specific stuff.

> +
> +/* Functions to initialize private data.  Do not use directly, use the
> +   macros below instead.  */
> +
> +extern struct lk_private_data *lk_init_addr (const char *name,
> +					     const char *alias, int silent);
> +extern struct lk_private_data *lk_init_struct (const char *name,
> +					       const char *alias, int silent);

> +
> +/* Definitions for architecture dependent hooks.  */
> +/* Hook to read registers from the target and supply their content
> +   to the regcache.  */
> +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> +				       struct target_ops *target,
> +				       struct regcache *regcache,
> +				       int regnum);
> +
> +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
> +   do not use the __per_cpu_offset array to determin the offset have to
> +   supply this hook.  */
> +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> +
> +/* Hook to map a running task to a logical CPU.  Required if the target
> +   beneath uses a different PID as struct rq.  */
> +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct thread_info *ti);
> +
> +struct lk_private_hooks
> +{
> +  /* required */
> +  lk_hook_get_registers get_registers;
> +
> +  /* optional, required if __per_cpu_offset array is not used to determine
> +     offset.  */
> +  lk_hook_get_percpu_offset get_percpu_offset;
> +
> +  /* optional, required if the target beneath uses a different PID as struct
> +     rq.  */
> +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> +};
  
Omair Javaid May 3, 2017, 2:12 p.m. UTC | #5
On 24 April 2017 at 20:24, Andreas Arnez <arnez@linux.vnet.ibm.com> wrote:
> On Thu, Apr 20 2017, Omair Javaid wrote:
>
>> Hi Philipp and Andreas,
>>
>> I have some further comments on this patch specifically about copying
>> task_struct->pid into ptid->lwp and using task_struct address as tid.
>>
>> I see that we are overriding lwp, tid which any target beneath might
>> be using differently.
>>
>> So suggestion about storing task_struct->pid or task_struct address is
>> to use private_thread_info in binutils-gdb/gdb/gdbthread.h for this
>> information.
>
> The current version of the patch series is mainly focused on dump
> targets.  Remote targets require some additional changes.  We've
> discussed the use of private_thread_info before, and the last I've seen
> is that it is not suitable either, because remote.c uses it already:
>
>   https://sourceware.org/ml/gdb-patches/2017-02/msg00543.html
>
> In my view, the private_thread_info field really is a hack, and we are
> now facing its limits.  It provides some space for a single thread layer
> to store information into, but not for multiple such layers.  In the
> case of the Linux kernel we at least have two different thread layers:
> the CPU layer (each "thread" is a CPU), and the kernel task layer on top
> of that.
>
> I think we need to allow a target to maintain its *own* thread list.
> The CPU "thread list" would be maintained by the target beneath
> (remote/dump), and the kernel task list would be maintained by the LK
> target.  The ptid namespaces could be completely separate.

Hi Philip and Andreas,

Furthermore, on this topic: the remote stub assigns a common pid to all
CPU threads and uses lwp as the CPU number, while tid is left zero.  I
think we'll have to rework the old-to-new ptid mapping mechanism a little
bit in order to accommodate all types of targets beneath.

In your implementation of lk_init_ptid_map() you are testing the pid from
task_struct against the lwp set by the target beneath.  In the remote case
this will never be equal to the pid, as the lwp holds the cpu number.

Also, in your implementation of lk_update_running_tasks, lwp is being
updated with the pid read from task_struct, and tid with the task_struct
address.  We are doing this not only for Linux thread layer tasks but also
for CPU tasks.  This causes some sync issues on live targets, as every
time we halt, a new thread is reported by the remote.

I think the Linux thread layer should only update tasks it has created
itself, i.e. tasks created by parsing task_struct.  We can devise a
mechanism to map CPU tasks to the curr task_struct and leave CPU tasks as
they were created by the target beneath, with the same ptids.

What's your take on this?


>
>> I also have reservation about use of old_ptid naming in struct
>> lk_private and struct lk_ptid_map.
>>
>> old_ptid naming is a little confusing; kindly choose a distinguishable
>> name for the old_ptid variables in both lk_private and lk_ptid_map.
>>
>> Further Here's an implementation of bitmap_weight function from linux
>> kernel. Kindly see if your implementation can be improved and moved to
>> a generic area in gdb.
>>
>>  10 int __bitmap_weight(const unsigned long *bitmap, int bits)
>>  11 {
>>  12         int k, w = 0, lim = bits/BITS_PER_LONG;
>>  13
>>  14         for (k = 0; k < lim; k++)
>>  15                 w += hweight_long(bitmap[k]);
>>  16
>>  17         if (bits % BITS_PER_LONG)
>>  18                 w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
>>  19
>>  20         return w;
>>  21 }
>
> The __bitmap_weight function is specific to Linux, so I'm not sure we
> want to move it to a generic area.  For big-endian targets the function
> depends on the width of Linux' "unsigned long" type, because
> BITMAP_LAST_WORD_MASK builds a mask for the *least significant* bits
> instead of the *lowest-addressed* ones.
>
> It's probably true that the performance of lk_bitmap_hweight could be
> improved.  For instance, with some care a function like popcount_hwi()
> in GCC's hwint.h could be exploited, even if the target's word width and
> byte order may not match the GDB client's.  This would not make the
> function simpler, though.
>
> --
> Andreas
>
  
Philipp Rudo May 3, 2017, 2:38 p.m. UTC | #6
Hi Omair,

sorry for the late reply but I was sick the last two weeks...

On Mon, 17 Apr 2017 03:58:35 +0500
Omair Javaid <omair.javaid@linaro.org> wrote:

> Hi Philip,
> 
> I like your handling of linux kernel data structures though I havent
> been able to get your code working on arm.

I'm glad to hear it.

> There are some challenges with regards to live debugging support which
> I am trying to figure out. There is no reliable way to tell between a
> kernel direct mapped address, vmalloc address and module address when
> we also have user address available.

I'm afraid you will find more challenges like these.  Although I tried to be
as general as possible, I most likely missed some things you will need for
live debugging ...

I don't think there is a reliable way to tell what kind of address you have
when you only have its unsigned long value.  On s390 (at least in theory) it
should be possible to find out via lowcore ("percpu state").  There the
currently running task is stored.  So you could try something like
lowcore->current_task->(mm|active_mm).

I actually never needed to find out what kind of address I have, as for dumps
the user space is usually stripped off.  This makes life a lot easier, as both
direct mapped and vmalloc'ed addresses (including modules) can be accessed
via the kernel's page table (init_mm->pgd).  So you don't have to care about
what kind of address you have.  Furthermore, they never get swapped out.

In theory this method could also be used to access user space memory if you
know the corresponding page table and the memory isn't swapped out.  But here
GDB has the problem, that to_xfer_partial (in gdb/target.h) only gets the
address as unsigned long but has absolutely no information about the address
space it lives in ...

 
> Also there is no way to switch between stratums if we need to do so in
> case we try to support switching between userspace and kernel space.

No, GDB currently doesn't allow switching between strata.  This is one point
on my long ToDo-list...

> As far as this patch is concerned there are no major issues that can
> block me from further progressing towards live debugging support.
> 
> I have compiled this patch with arm support on top and overall this
> looks good. See some minor inline comments.
> 
> Yao: Kindly check if there are any coding convention or styling issues here.
> 
> PS: I have not looked at module support or s390 target specific code in
> detail.

No Problem just do one step after the other and don't waste too much time on
s390.  The architecture has some quirks ;)

> Thanks!
> 
> --
> Omair
> 
> 

[...]

> > +
> > +size_t
> > +lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size, size_t bit)
> > +{
> > +  size_t ulong_size, bits_per_ulong, elt;
> > +
> > +  ulong_size = lk_builtin_type_size (unsigned_long);
> > +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> > +  elt = bit / bits_per_ulong;
> > +
> > +  while (bit < size)
> > +    {  
> 
> Will this be portable across endianess?

The generic implementation of the bitmap functions in the kernel relies on
BITMAP_LAST_WORD_MASK.  As Andreas mentioned earlier, this macro creates a
mask for the least significant bits.  So this implementation should be
portable.  At least it works on s390, which is big-endian.
 
> > +      /* FIXME: Explain why using lsb0 bit order.  */
> > +      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
> > +       return bit;
> > +
> > +      bit++;
> > +      if (bit % bits_per_ulong == 0)
> > +       elt++;
> > +    }
> > +
> > +  return size;
> > +}
> > +  
> 
> lk_bitmap_hweight seems un-used.
> I wonder if there is generic implementation available for this
> function somewhere in binutils-gdb sources.
> Can we use something like __builtin_popcount from GCC intrinsic?

See reply to your next mail.
 
> > +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP
> > +   with size SIZE (in bits).  */
> > +
> > +size_t
> > +lk_bitmap_hweight (ULONGEST *bitmap, size_t size)
> > +{
> > +  size_t ulong_size, bit, bits_per_ulong, elt, retval;
> > +
> > +  ulong_size = lk_builtin_type_size (unsigned_long);
> > +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> > +  elt = bit = 0;
> > +  retval = 0;
> > +
> > +  while (bit < size)
> > +    {
> > +      if (bitmap[elt] & (1 << bit % bits_per_ulong))
> > +       retval++;
> > +
> > +      bit++;
> > +      if (bit % bits_per_ulong == 0)
> > +       elt++;
> > +    }
> > +
> > +  return retval;
> > +}
> > +

[...]

> This function throws an error while compiling for arm-linux target on
> x86_64 host.
> 
> lk-low.c: In function ‘void init_linux_kernel_ops()’:
> lk-low.c:812:20: error: invalid conversion from ‘char*
> (*)(target_ops*, ptid_t)’ to ‘const char* (*)(target_ops*, ptid_t)’
> [-fpermissive]
>    t->to_pid_to_str = lk_pid_to_str;

Could it be that you applied the patch on a master with Pedro's "Enable
-Wwrite-strings" series?

https://sourceware.org/ml/gdb-patches/2017-04/msg00028.html

In this series the hook got "const-ified" (7a1149643).  It isn't considered in
my patches yet (i.e. I need to rebase to the current master, but that will be
a lot of work with all the C++-ification ...).
 
> > +/* Function for targets to_pid_to_str hook.  Marks running tasks with an
> > +   asterisk "*".  */
> > +
> > +static char *
> > +lk_pid_to_str (struct target_ops *target, ptid_t ptid)
> > +{
> > +  static char buf[64];
> > +  long pid;
> > +  CORE_ADDR task;
> > +
> > +  pid = ptid_get_lwp (ptid);
> > +  task = (CORE_ADDR) ptid_get_tid (ptid);
> > +
> > +  xsnprintf (buf, sizeof (buf), "PID: %5li%s, 0x%s",
> > +            pid, ((lk_task_running (task) != LK_CPU_INVAL) ? "*" : ""),
> > +            phex (task, lk_builtin_type_size (unsigned_long)));
> > +
> > +  return buf;
> > +}

[...]

> Nice to have comments for all structs/fields below, a kernel tree
> reference maybe?

I'm not 100% sure what you mean.  Is it a comment like
"/* Defined in <linux>/include/linux/sched.h.  */"?

If so, I don't think it's necessary, as you could quickly grep for the
definition.  In particular I am afraid that after a while, when fields get
renamed and moved from one file to another, the comments will confuse more
than they help.  That's why I haven't added any comments here (besides noting
in what Linux version a field is valid).

> > +  LK_DECLARE_FIELD (task_struct, tasks);
> > +  LK_DECLARE_FIELD (task_struct, pid);
> > +  LK_DECLARE_FIELD (task_struct, tgid);
> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> > +  LK_DECLARE_FIELD (task_struct, comm);
> > +  LK_DECLARE_FIELD (task_struct, thread);
> > +
> > +  LK_DECLARE_FIELD (list_head, next);
> > +  LK_DECLARE_FIELD (list_head, prev);
> > +
> > +  LK_DECLARE_FIELD (rq, curr);
> > +
> > +  LK_DECLARE_FIELD (cpumask, bits);
> > +
> > +  LK_DECLARE_ADDR (init_task);
> > +  LK_DECLARE_ADDR (runqueues);
> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> > +  LK_DECLARE_ADDR (init_mm);
> > +
> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  /* linux
> > 4.5+ */
> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);    /* linux
> > -4.4 */
> > +  if (LK_ADDR (cpu_online_mask) == -1)
> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
> > +}

[...]

> > +/* Initialize the cpu to old ptid map.  Prefer the arch dependent
> > +   map_running_task_to_cpu hook if provided, else assume that the PID used
> > +   by the target beneath is the same as the PID in task_struct.  See
> > +   comment on lk_ptid_map in lk-low.h for details.  */
> > +
> > +static void
> > +lk_init_ptid_map ()
> > +{
> > +  struct thread_info *ti;
> > +  ULONGEST *cpu_online_mask;
> > +  size_t size;
> > +  unsigned int cpu;
> > +  struct cleanup *old_chain;
> > +
> > +  if (LK_PRIVATE->old_ptid != NULL)
> > +    lk_free_ptid_map ();
> > +
> > +  size = LK_BITMAP_SIZE (cpumask);
> > +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> > +  old_chain = make_cleanup (xfree, cpu_online_mask);
> > +
> > +  ALL_THREADS (ti)
> > +    {
> > +      struct lk_ptid_map *ptid_map = XCNEW (struct lk_ptid_map);
> > +      CORE_ADDR rq, curr;
> > +      int pid;
> > +
> > +      /* Give the architecture a chance to overwrite default behaviour.  */
> > +      if (LK_HOOK->map_running_task_to_cpu)
> > +       {
> > +         ptid_map->cpu = LK_HOOK->map_running_task_to_cpu (ti);
> > +       }
> > +      else
> > +       {
> > +         LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> > +           {
> > +             rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> > +             curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> > +             pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> > +
> > +             if (pid == ptid_get_lwp (ti->ptid))
> > +               {
> > +                 ptid_map->cpu = cpu;
> > +                 break;
> > +               }
> > +           }
> > +         if (cpu == size)
> > +           error (_("Could not map thread with pid %d, lwp %lu to a cpu."),
> > +                  ti->ptid.pid, ti->ptid.lwp);  
> 
> Accessing pid and lwp fields directly is not recommended. May be use
> something like
>          error (_("Could not map thread with pid %d, lwp %ld to a cpu."),
>                ptid_get_pid (ti->ptid), ptid_get_lwp (ti->ptid));

Yes, you are right.  I did the quick and dirty solution here because I already
knew that this ptid_map "solution" would only be temporary.  Besides ptid_t got
classified and it's API changed.  So this needs to be adjusted anyway when I
rebase to the current master...

[...]

> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
> > +   do not use the __per_cpu_offset array to determin the offset have to
> > +   supply this hook.  */  
> 
> ^Typo in comment.

Thanks, fixed.

> Also if its not too much trouble can you kindly put Linux kernel
> source tree references like
> __per_cpu_offset: Linux/include/asm-generic/percpu.h 4.10. in comments.

As above.  I'm not sure it is worth the effort, and whether outdated comments
confuse more than they help.
 
[...]

> > +/* Divides numinator N by demoniator D and rounds up the result.  */  
> 
> ^ Spell check above.

What the hell did I do here?  Thanks for the hint, fixed.
 
Hope I didn't miss anything.

Thanks
Philipp
  
Philipp Rudo May 3, 2017, 2:38 p.m. UTC | #7
Hi Omair,


On Thu, 20 Apr 2017 16:08:57 +0500
Omair Javaid <omair.javaid@linaro.org> wrote:

> Hi Philipp and Andreas,
> 
> I have some further comments on this patch specifically about copying
> task_struct->pid into ptid->lwp and using task_struct address as tid.
> 
> I see that we are overriding lwp, tid which any target beneath might
> be using differently.
> 
> So suggestion about storing task_struct->pid or task_struct address is
> to use private_thread_info in binutils-gdb/gdb/gdbthread.h for this
> information.

You are right that other targets might overwrite the ptid->lwp/tid fields.  But
they can do the same (and remote.c does) with private_thread_info.  As Andreas
already pointed out, this is a limitation in GDB we are facing.  The only proper
solution I see is to allow every target to manage its own thread_list.
Otherwise there is always the possibility that another target interferes with
yours.
 
> I also have reservation about use of old_ptid naming in struct
> lk_private and > +struct lk_ptid_map.
> 
> old_ptid naming is a little confusing kindly choose a distinguishable
> name for old_ptid varibles in both lk_private and lk_ptid_map.

The old_ptid/lk_ptid_map is a crude hack to show that kernel debugging is
possible at all.  It was never meant to be permanent.  So feel free to change
anything you like.
 
> Further Here's an implementation of bitmap_weight function from linux
> kernel. Kindly see if your implementation can be improved and moved to
> a generic area in gdb.
> 
>  10 int __bitmap_weight(const unsigned long *bitmap, int bits)
>  11 {
>  12         int k, w = 0, lim = bits/BITS_PER_LONG;
>  13
>  14         for (k = 0; k < lim; k++)
>  15                 w += hweight_long(bitmap[k]);
>  16
>  17         if (bits % BITS_PER_LONG)
>  18                 w += hweight_long(bitmap[k] &
> BITMAP_LAST_WORD_MASK(bits)); 19
>  20         return w;
>  21 }

You could possibly improve performance in my implementation, although I don't
think by very much.  When you look at the function above in more detail you'll
see that hweight_long is a macro checking each bit on its own (see
include/asm-generic/bitops/const_hweight.h).  The problem is that GDB doesn't
know the length of a long on the target system at compile time.  Thus it has
to be determined in a loop at run time.

Nevertheless we could improve performance by going through the array byte-wise
instead of bit-wise.  I think assuming a byte with 8 bits is pretty safe, even
in the future.
Another possible "solution" is to remove the function completely.  As you said
in your previous mail, lk_bitmap_hweight is currently not used.  I implemented
it because I thought it would be handy if we e.g. need to allocate memory
for percpu data.  But (at least currently) this is not the case.
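For what it's worth, the byte-wise variant I have in mind would look roughly
like this.  It is only a sketch in plain C, independent of GDB's target types
(the function name and the nibble table are made up here, not part of the
patch); it only assumes 8-bit bytes, not any particular target long size:

```c
#include <stddef.h>

/* Byte-wise Hamming weight over a raw bitmap buffer of SIZE_IN_BITS bits.
   Each byte is resolved with a 16-entry nibble lookup table instead of
   testing every bit individually.  */
static size_t
bitmap_hweight_bytewise (const unsigned char *bitmap, size_t size_in_bits)
{
  /* nibble_weight[n] == number of set bits in the 4-bit value n.  */
  static const unsigned char nibble_weight[16] =
    { 0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4 };
  size_t nbytes = size_in_bits / 8;
  size_t weight = 0;

  for (size_t i = 0; i < nbytes; i++)
    weight += nibble_weight[bitmap[i] & 0xf] + nibble_weight[bitmap[i] >> 4];

  /* Trailing partial byte: mask off the bits beyond SIZE_IN_BITS.  */
  if (size_in_bits % 8)
    {
      unsigned char last
        = bitmap[nbytes] & ((1u << (size_in_bits % 8)) - 1);
      weight += nibble_weight[last & 0xf] + nibble_weight[last >> 4];
    }

  return weight;
}
```

The endianness of the buffer doesn't matter for the total count, which is why
this can stay byte-oriented even though lk_read_bitmap reads target longs.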

Thanks

Philipp

 
> Thanks!
> 
> --
> Omair.
> 
> On 16 March 2017 at 21:57, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> > This patch implements a basic target_ops for Linux kernel support. In
> > particular it models Linux tasks as GDB threads such that you are able to
> > change to a given thread, get backtraces, disassemble the current frame
> > etc..
> >
> > Currently the target_ops is designed only to work with static targets, i.e.
> > dumps. Thus it lacks implementation for hooks like to_wait, to_resume or
> > to_store_registers. Furthermore the mapping between a CPU and the
> > task_struct of the running task is only be done once at initialization. See
> > cover letter for a detailed discussion.
> >
> > Nevertheless i made some design decisions different to Peter [1] which are
> > worth discussing. Especially storing the private data in a htab (or
> > std::unordered_map if i had the time...) instead of global variables makes
> > the code much nicer and less memory consuming.
> >
> > [1] https://sourceware.org/ml/gdb-patches/2016-12/msg00382.html
> >
> > gdb/ChangeLog:
> >
> >     * gdbarch.sh (lk_init_private): New hook.
> >     * gdbarch.h: Regenerated.
> >     * gdbarch.c: Regenerated.
> >     * lk-low.h: New file.
> >     * lk-low.c: New file.
> >     * lk-lists.h: New file.
> >     * lk-lists.c: New file.
> >     * Makefile.in (SFILES, ALLDEPFILES): Add lk-low.c and lk-lists.c.
> >     (HFILES_NO_SRCDIR): Add lk-low.h and lk-lists.h.
> >     (ALL_TARGET_OBS): Add lk-low.o and lk-lists.o.
> >     * configure.tgt (lk_target_obs): New variable with object files for
> > Linux kernel support.
> >       (s390*-*-linux*): Add lk_target_obs.
> > ---
> >  gdb/Makefile.in   |   8 +
> >  gdb/configure.tgt |   6 +-
> >  gdb/gdbarch.c     |  31 ++
> >  gdb/gdbarch.h     |   7 +
> >  gdb/gdbarch.sh    |   4 +
> >  gdb/lk-lists.c    |  47 +++
> >  gdb/lk-lists.h    |  56 ++++
> >  gdb/lk-low.c      | 833
> > ++++++++++++++++++++++++++++++++++++++++++++++++++++++ gdb/lk-low.h      |
> > 310 ++++++++++++++++++++ 9 files changed, 1301 insertions(+), 1 deletion(-)
> >  create mode 100644 gdb/lk-lists.c
> >  create mode 100644 gdb/lk-lists.h
> >  create mode 100644 gdb/lk-low.c
> >  create mode 100644 gdb/lk-low.h
> >
> > diff --git a/gdb/Makefile.in b/gdb/Makefile.in
> > index 0818742..9387c66 100644
> > --- a/gdb/Makefile.in
> > +++ b/gdb/Makefile.in
> > @@ -817,6 +817,8 @@ ALL_TARGET_OBS = \
> >         iq2000-tdep.o \
> >         linux-record.o \
> >         linux-tdep.o \
> > +       lk-lists.o \
> > +       lk-low.o \
> >         lm32-tdep.o \
> >         m32c-tdep.o \
> >         m32r-linux-tdep.o \
> > @@ -1103,6 +1105,8 @@ SFILES = \
> >         jit.c \
> >         language.c \
> >         linespec.c \
> > +       lk-lists.c \
> > +       lk-low.c \
> >         location.c \
> >         m2-exp.y \
> >         m2-lang.c \
> > @@ -1350,6 +1354,8 @@ HFILES_NO_SRCDIR = \
> >         linux-nat.h \
> >         linux-record.h \
> >         linux-tdep.h \
> > +       lk-lists.h \
> > +       lk-low.h \
> >         location.h \
> >         m2-lang.h \
> >         m32r-tdep.h \
> > @@ -2547,6 +2553,8 @@ ALLDEPFILES = \
> >         linux-fork.c \
> >         linux-record.c \
> >         linux-tdep.c \
> > +       lk-lists.c \
> > +       lk-low.c \
> >         lm32-tdep.c \
> >         m32r-linux-nat.c \
> >         m32r-linux-tdep.c \
> > diff --git a/gdb/configure.tgt b/gdb/configure.tgt
> > index cb909e7..8d87fea 100644
> > --- a/gdb/configure.tgt
> > +++ b/gdb/configure.tgt
> > @@ -34,6 +34,10 @@ case $targ in
> >      ;;
> >  esac
> >
> > +# List of object files for Linux kernel support.  To be included into
> > *-linux* +# targets which support Linux kernel debugging.
> > +lk_target_obs="lk-lists.o lk-low.o"
> > +
> >  # map target info into gdb names.
> >
> >  case "${targ}" in
> > @@ -479,7 +483,7 @@ powerpc*-*-*)
> >  s390*-*-linux*)
> >         # Target: S390 running Linux
> >         gdb_target_obs="s390-linux-tdep.o solib-svr4.o linux-tdep.o \
> > -                       linux-record.o"
> > +                       linux-record.o ${lk_target_obs}"
> >         build_gdbserver=yes
> >         ;;
> >
> > diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
> > index 87eafb2..5509a6c 100644
> > --- a/gdb/gdbarch.c
> > +++ b/gdb/gdbarch.c
> > @@ -349,6 +349,7 @@ struct gdbarch
> >    gdbarch_addressable_memory_unit_size_ftype *addressable_memory_unit_size;
> >    char ** disassembler_options;
> >    const disasm_options_t * valid_disassembler_options;
> > +  gdbarch_lk_init_private_ftype *lk_init_private;
> >  };
> >
> >  /* Create a new ``struct gdbarch'' based on information provided by
> > @@ -1139,6 +1140,12 @@ gdbarch_dump (struct gdbarch *gdbarch, struct
> > ui_file *file) "gdbarch_dump: iterate_over_regset_sections = <%s>\n",
> >                        host_address_to_string
> > (gdbarch->iterate_over_regset_sections)); fprintf_unfiltered (file,
> > +                      "gdbarch_dump: gdbarch_lk_init_private_p() = %d\n",
> > +                      gdbarch_lk_init_private_p (gdbarch));
> > +  fprintf_unfiltered (file,
> > +                      "gdbarch_dump: lk_init_private = <%s>\n",
> > +                      host_address_to_string (gdbarch->lk_init_private));
> > +  fprintf_unfiltered (file,
> >                        "gdbarch_dump: long_bit = %s\n",
> >                        plongest (gdbarch->long_bit));
> >    fprintf_unfiltered (file,
> > @@ -5008,6 +5015,30 @@ set_gdbarch_valid_disassembler_options (struct
> > gdbarch *gdbarch, gdbarch->valid_disassembler_options =
> > valid_disassembler_options; }
> >
> > +int
> > +gdbarch_lk_init_private_p (struct gdbarch *gdbarch)
> > +{
> > +  gdb_assert (gdbarch != NULL);
> > +  return gdbarch->lk_init_private != NULL;
> > +}
> > +
> > +void
> > +gdbarch_lk_init_private (struct gdbarch *gdbarch)
> > +{
> > +  gdb_assert (gdbarch != NULL);
> > +  gdb_assert (gdbarch->lk_init_private != NULL);
> > +  if (gdbarch_debug >= 2)
> > +    fprintf_unfiltered (gdb_stdlog, "gdbarch_lk_init_private called\n");
> > +  gdbarch->lk_init_private (gdbarch);
> > +}
> > +
> > +void
> > +set_gdbarch_lk_init_private (struct gdbarch *gdbarch,
> > +                             gdbarch_lk_init_private_ftype lk_init_private)
> > +{
> > +  gdbarch->lk_init_private = lk_init_private;
> > +}
> > +
> >
> >  /* Keep a registry of per-architecture data-pointers required by GDB
> >     modules.  */
> > diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
> > index 34f82a7..c03bf00 100644
> > --- a/gdb/gdbarch.h
> > +++ b/gdb/gdbarch.h
> > @@ -1553,6 +1553,13 @@ extern void set_gdbarch_disassembler_options (struct
> > gdbarch *gdbarch, char ** d
> >
> >  extern const disasm_options_t * gdbarch_valid_disassembler_options (struct
> > gdbarch *gdbarch); extern void set_gdbarch_valid_disassembler_options
> > (struct gdbarch *gdbarch, const disasm_options_t *
> > valid_disassembler_options); +/* Initiate architecture dependent private
> > data for the linux-kernel target. */ + +extern int
> > gdbarch_lk_init_private_p (struct gdbarch *gdbarch); +
> > +typedef void (gdbarch_lk_init_private_ftype) (struct gdbarch *gdbarch);
> > +extern void gdbarch_lk_init_private (struct gdbarch *gdbarch);
> > +extern void set_gdbarch_lk_init_private (struct gdbarch *gdbarch,
> > gdbarch_lk_init_private_ftype *lk_init_private);
> >
> >  /* Definition for an unknown syscall, used basically in error-cases.  */
> >  #define UNKNOWN_SYSCALL (-1)
> > diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
> > index 39b1f94..cad45d1 100755
> > --- a/gdb/gdbarch.sh
> > +++ b/gdb/gdbarch.sh
> > @@ -1167,6 +1167,10 @@
> > m:int:addressable_memory_unit_size:void:::default_addressable_memory_unit_size::
> > v:char **:disassembler_options:::0:0::0:pstring_ptr
> > (gdbarch->disassembler_options) v:const disasm_options_t
> > *:valid_disassembler_options:::0:0::0:host_address_to_string
> > (gdbarch->valid_disassembler_options)
> >
> > +# Initialize architecture dependent private data for the linux-kernel
> > +# target.
> > +M:void:lk_init_private:void:
> > +
> >  EOF
> >  }
> >
> > diff --git a/gdb/lk-lists.c b/gdb/lk-lists.c
> > new file mode 100644
> > index 0000000..55d11bd
> > --- /dev/null
> > +++ b/gdb/lk-lists.c
> > @@ -0,0 +1,47 @@
> > +/* Iterators for internal data structures of the Linux kernel.
> > +
> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > +
> > +   This file is part of GDB.
> > +
> > +   This program is free software; you can redistribute it and/or modify
> > +   it under the terms of the GNU General Public License as published by
> > +   the Free Software Foundation; either version 3 of the License, or
> > +   (at your option) any later version.
> > +
> > +   This program is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +   GNU General Public License for more details.
> > +
> > +   You should have received a copy of the GNU General Public License
> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > */ +
> > +#include "defs.h"
> > +
> > +#include "inferior.h"
> > +#include "lk-lists.h"
> > +#include "lk-low.h"
> > +
> > +/* Returns next entry from struct list_head CURR while iterating field
> > +   SNAME->FNAME.  */
> > +
> > +CORE_ADDR
> > +lk_list_head_next (CORE_ADDR curr, const char *sname, const char *fname)
> > +{
> > +  CORE_ADDR next, next_prev;
> > +
> > +  /* We must always assume that the data we handle is corrupted.  Thus use
> > +     curr->next->prev == curr as sanity check.  */
> > +  next = lk_read_addr (curr + LK_OFFSET (list_head, next));
> > +  next_prev = lk_read_addr (next + LK_OFFSET (list_head, prev));
> > +
> > +  if (!curr || curr != next_prev)
> > +    {
> > +      error (_("Memory corruption detected while iterating list_head at "\
> > +              "0x%s belonging to list %s->%s."),
> > +            phex (curr, lk_builtin_type_size (unsigned_long)) , sname,
> > fname);
> > +    }
> > +
> > +  return next;
> > +}
> > diff --git a/gdb/lk-lists.h b/gdb/lk-lists.h
> > new file mode 100644
> > index 0000000..f9c2a85
> > --- /dev/null
> > +++ b/gdb/lk-lists.h
> > @@ -0,0 +1,56 @@
> > +/* Iterators for internal data structures of the Linux kernel.
> > +
> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > +
> > +   This file is part of GDB.
> > +
> > +   This program is free software; you can redistribute it and/or modify
> > +   it under the terms of the GNU General Public License as published by
> > +   the Free Software Foundation; either version 3 of the License, or
> > +   (at your option) any later version.
> > +
> > +   This program is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +   GNU General Public License for more details.
> > +
> > +   You should have received a copy of the GNU General Public License
> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > */ +
> > +#ifndef __LK_LISTS_H__
> > +#define __LK_LISTS_H__
> > +
> > +extern CORE_ADDR lk_list_head_next (CORE_ADDR curr, const char *sname,
> > +                                   const char *fname);
> > +
> > +/* Iterator over field SNAME->FNAME of type struct list_head starting at
> > +   address START of type struct list_head.  This iterator is intended to be
> > +   used for lists initiated with macro LIST_HEAD (include/linux/list.h) in
> > +   the kernel, i.e. lists that START is a global variable of type struct
> > +   list_head and _not_ of type struct SNAME as the rest of the list.  Thus
> > +   START will not be iterated over but only be used to start/terminate the
> > +   iteration.  */
> > +
> > +#define lk_list_for_each(next, start, sname, fname)            \
> > +  for ((next) = lk_list_head_next ((start), #sname, #fname);   \
> > +       (next) != (start);                                      \
> > +       (next) = lk_list_head_next ((next), #sname, #fname))
> > +
> > +/* Iterator over struct SNAME linked together via field SNAME->FNAME of
> > type
> > +   struct list_head starting at address START of type struct SNAME.  In
> > +   contrast to the iterator above, START is a "full" member of the list and
> > +   thus will be iterated over.  */
> > +
> > +#define lk_list_for_each_container(cont, start, sname, fname)  \
> > +  CORE_ADDR _next;                                             \
> > +  bool _first_loop = true;                                     \
> > +  for ((cont) = (start),                                       \
> > +       _next = (start) + LK_OFFSET (sname, fname);             \
> > +                                                               \
> > +       (cont) != (start) || _first_loop;                       \
> > +                                                               \
> > +       _next = lk_list_head_next (_next, #sname, #fname),      \
> > +       (cont) = LK_CONTAINER_OF (_next, sname, fname),         \
> > +       _first_loop = false)
> > +
> > +#endif /* __LK_LISTS_H__ */
> > diff --git a/gdb/lk-low.c b/gdb/lk-low.c
> > new file mode 100644
> > index 0000000..768f228
> > --- /dev/null
> > +++ b/gdb/lk-low.c
> > @@ -0,0 +1,833 @@
> > +/* Basic Linux kernel support, architecture independent.
> > +
> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > +
> > +   This file is part of GDB.
> > +
> > +   This program is free software; you can redistribute it and/or modify
> > +   it under the terms of the GNU General Public License as published by
> > +   the Free Software Foundation; either version 3 of the License, or
> > +   (at your option) any later version.
> > +
> > +   This program is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +   GNU General Public License for more details.
> > +
> > +   You should have received a copy of the GNU General Public License
> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > */ +
> > +#include "defs.h"
> > +
> > +#include "block.h"
> > +#include "exceptions.h"
> > +#include "frame.h"
> > +#include "gdbarch.h"
> > +#include "gdbcore.h"
> > +#include "gdbthread.h"
> > +#include "gdbtypes.h"
> > +#include "inferior.h"
> > +#include "lk-lists.h"
> > +#include "lk-low.h"
> > +#include "objfiles.h"
> > +#include "observer.h"
> > +#include "solib.h"
> > +#include "target.h"
> > +#include "value.h"
> > +
> > +#include <algorithm>
> > +
> > +struct target_ops *linux_kernel_ops = NULL;
> > +
> > +/* Initialize a private data entry for an address, where NAME is the name
> > +   of the symbol, i.e. variable name in Linux, ALIAS the name used to
> > +   retrieve the entry from hashtab, and SILENT a flag to determine if
> > +   errors should be ignored.
> > +
> > +   Returns a pointer to the new entry.  In case of an error, either returns
> > +   NULL (SILENT = TRUE) or throws an error (SILENT = FALSE).  If SILENT =
> > TRUE
> > +   the caller is responsible to check for errors.
> > +
> > +   Do not use directly, use LK_DECLARE_* macros defined in lk-low.h
> > instead.  */ +
> > +struct lk_private_data *
> > +lk_init_addr (const char *name, const char *alias, int silent)
> > +{
> > +  struct lk_private_data *data;
> > +  struct bound_minimal_symbol bmsym;
> > +  void **new_slot;
> > +  void *old_slot;
> > +
> > +  if ((old_slot = lk_find (alias)) != NULL)
> > +    return (struct lk_private_data *) old_slot;
> > +
> > +  bmsym = lookup_minimal_symbol (name, NULL, NULL);
> > +
> > +  if (bmsym.minsym == NULL)
> > +    {
> > +      if (!silent)
> > +       error (_("Could not find address %s.  Aborting."), alias);
> > +      return NULL;
> > +    }
> > +
> > +  data = XCNEW (struct lk_private_data);
> > +  data->alias = alias;
> > +  data->data.addr = BMSYMBOL_VALUE_ADDRESS (bmsym);
> > +
> > +  new_slot = lk_find_slot (alias);
> > +  *new_slot = data;
> > +
> > +  return data;
> > +}
> > +
> > +/* Same as lk_init_addr but for structs.  */
> > +
> > +struct lk_private_data *
> > +lk_init_struct (const char *name, const char *alias, int silent)
> > +{
> > +  struct lk_private_data *data;
> > +  const struct block *global;
> > +  const struct symbol *sym;
> > +  struct type *type;
> > +  void **new_slot;
> > +  void *old_slot;
> > +
> > +  if ((old_slot = lk_find (alias)) != NULL)
> > +    return (struct lk_private_data *) old_slot;
> > +
> > +  global = block_global_block(get_selected_block (0));
> > +  sym = lookup_symbol (name, global, STRUCT_DOMAIN, NULL).symbol;
> > +
> > +  if (sym != NULL)
> > +    {
> > +      type = SYMBOL_TYPE (sym);
> > +      goto out;
> > +    }
> > +
> > +  /*  Check for "typedef struct { ... } name;"-like definitions.  */
> > +  sym = lookup_symbol (name, global, VAR_DOMAIN, NULL).symbol;
> > +  if (sym == NULL)
> > +    goto error;
> > +
> > +  type = check_typedef (SYMBOL_TYPE (sym));
> > +
> > +  if (TYPE_CODE (type) == TYPE_CODE_STRUCT)
> > +    goto out;
> > +
> > +error:
> > +  if (!silent)
> > +    error (_("Could not find %s.  Aborting."), alias);
> > +
> > +  return NULL;
> > +
> > +out:
> > +  data = XCNEW (struct lk_private_data);
> > +  data->alias = alias;
> > +  data->data.type = type;
> > +
> > +  new_slot = lk_find_slot (alias);
> > +  *new_slot = data;
> > +
> > +  return data;
> > +}
> > +
> > +/* Nearly the same as lk_init_addr, with the difference that two names are
> > +   needed, i.e. the struct name S_NAME containing the field with name
> > +   F_NAME.  */
> > +
> > +struct lk_private_data *
> > +lk_init_field (const char *s_name, const char *f_name,
> > +              const char *s_alias, const char *f_alias,
> > +              int silent)
> > +{
> > +  struct lk_private_data *data;
> > +  struct lk_private_data *parent;
> > +  struct field *first, *last, *field;
> > +  void **new_slot;
> > +  void *old_slot;
> > +
> > +  if ((old_slot = lk_find (f_alias)) != NULL)
> > +    return (struct lk_private_data *) old_slot;
> > +
> > +  parent = lk_find (s_alias);
> > +  if (parent == NULL)
> > +    {
> > +      parent = lk_init_struct (s_name, s_alias, silent);
> > +
> > +      /* Only SILENT == true needed, as otherwise lk_init_struct would
> > throw
> > +        an error.  */
> > +      if (parent == NULL)
> > +       return NULL;
> > +    }
> > +
> > +  first = TYPE_FIELDS (parent->data.type);
> > +  last = first + TYPE_NFIELDS (parent->data.type);
> > +  for (field = first; field < last; field ++)
> > +    {
> > +      if (streq (field->name, f_name))
> > +       break;
> > +    }
> > +
> > +  if (field == last)
> > +    {
> > +      if (!silent)
> > +       error (_("Could not find field %s->%s.  Aborting."), s_alias,
> > f_name);
> > +      return NULL;
> > +    }
> > +
> > +  data = XCNEW (struct lk_private_data);
> > +  data->alias = f_alias;
> > +  data->data.field = field;
> > +
> > +  new_slot = lk_find_slot (f_alias);
> > +  *new_slot = data;
> > +
> > +  return data;
> > +}
> > +
> > +/* Map cpu number CPU to the original PTID from target beneath.  */
> > +
> > +static ptid_t
> > +lk_cpu_to_old_ptid (const int cpu)
> > +{
> > +  struct lk_ptid_map *ptid_map;
> > +
> > +  for (ptid_map = LK_PRIVATE->old_ptid; ptid_map;
> > +       ptid_map = ptid_map->next)
> > +    {
> > +      if (ptid_map->cpu == cpu)
> > +       return ptid_map->old_ptid;
> > +    }
> > +
> > +  error (_("Could not map CPU %d to original PTID.  Aborting."), cpu);
> > +}
> > +
> > +/* Helper functions to read and return basic types at a given ADDRess.  */
> > +
> > +/* Read and return the integer value at address ADDR.  */
> > +
> > +int
> > +lk_read_int (CORE_ADDR addr)
> > +{
> > +  size_t int_size = lk_builtin_type_size (int);
> > +  enum bfd_endian endian = gdbarch_byte_order (current_inferior
> > ()->gdbarch);
> > +  return read_memory_integer (addr, int_size, endian);
> > +}
> > +
> > +/* Read and return the unsigned integer value at address ADDR.  */
> > +
> > +unsigned int
> > +lk_read_uint (CORE_ADDR addr)
> > +{
> > +  size_t uint_size = lk_builtin_type_size (unsigned_int);
> > +  enum bfd_endian endian = gdbarch_byte_order (current_inferior
> > ()->gdbarch);
> > +  return read_memory_integer (addr, uint_size, endian);
> > +}
> > +
> > +/* Read and return the long integer value at address ADDR.  */
> > +
> > +LONGEST
> > +lk_read_long (CORE_ADDR addr)
> > +{
> > +  size_t long_size = lk_builtin_type_size (long);
> > +  enum bfd_endian endian = gdbarch_byte_order (current_inferior
> > ()->gdbarch);
> > +  return read_memory_integer (addr, long_size, endian);
> > +}
> > +
> > +/* Read and return the unsigned long integer value at address ADDR.  */
> > +
> > +ULONGEST
> > +lk_read_ulong (CORE_ADDR addr)
> > +{
> > +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> > +  enum bfd_endian endian = gdbarch_byte_order (current_inferior
> > ()->gdbarch);
> > +  return read_memory_unsigned_integer (addr, ulong_size, endian);
> > +}
> > +
> > +/* Read and return the address value at address ADDR.  */
> > +
> > +CORE_ADDR
> > +lk_read_addr (CORE_ADDR addr)
> > +{
> > +  return (CORE_ADDR) lk_read_ulong (addr);
> > +}
> > +
> > +/* Reads a bitmap at a given ADDRess of size SIZE (in bits). Allocates and
> > +   returns an array of ulongs.  The caller is responsible to free the array
> > +   after it is no longer needed.  */
> > +
> > +ULONGEST *
> > +lk_read_bitmap (CORE_ADDR addr, size_t size)
> > +{
> > +  ULONGEST *bitmap;
> > +  size_t ulong_size, len;
> > +
> > +  ulong_size = lk_builtin_type_size (unsigned_long);
> > +  len = LK_DIV_ROUND_UP (size, ulong_size * LK_BITS_PER_BYTE);
> > +  bitmap = XNEWVEC (ULONGEST, len);
> > +
> > +  for (size_t i = 0; i < len; i++)
> > +    bitmap[i] = lk_read_ulong (addr + i * ulong_size);
> > +
> > +  return bitmap;
> > +}
> > +
> > +/* Return the next set bit in bitmap BITMAP of size SIZE (in bits)
> > +   starting from bit (index) BIT.  Return SIZE when the end of the bitmap
> > +   was reached.  To iterate over all set bits use macro
> > +   LK_BITMAP_FOR_EACH_SET_BIT defined in lk-low.h.  */
> > +
> > +size_t
> > +lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size, size_t bit)
> > +{
> > +  size_t ulong_size, bits_per_ulong, elt;
> > +
> > +  ulong_size = lk_builtin_type_size (unsigned_long);
> > +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> > +  elt = bit / bits_per_ulong;
> > +
> > +  while (bit < size)
> > +    {
> > +      /* FIXME: Explain why using lsb0 bit order.  */
> > +      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
> > +       return bit;
> > +
> > +      bit++;
> > +      if (bit % bits_per_ulong == 0)
> > +       elt++;
> > +    }
> > +
> > +  return size;
> > +}
> > +
> > +/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP
> > +   with size SIZE (in bits).  */
> > +
> > +size_t
> > +lk_bitmap_hweight (ULONGEST *bitmap, size_t size)
> > +{
> > +  size_t ulong_size, bit, bits_per_ulong, elt, retval;
> > +
> > +  ulong_size = lk_builtin_type_size (unsigned_long);
> > +  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
> > +  elt = bit = 0;
> > +  retval = 0;
> > +
> > +  while (bit < size)
> > +    {
> > +      if (bitmap[elt] & (1 << bit % bits_per_ulong))
> > +       retval++;
> > +
> > +      bit++;
> > +      if (bit % bits_per_ulong == 0)
> > +       elt++;
> > +    }
> > +
> > +  return retval;
> > +}
> > +
> > +/* Provide the per_cpu_offset of cpu CPU.  See comment in lk-low.h for
> > +   details.  */
> > +
> > +CORE_ADDR
> > +lk_get_percpu_offset (unsigned int cpu)
> > +{
> > +  size_t ulong_size = lk_builtin_type_size (unsigned_long);
> > +  CORE_ADDR percpu_elt;
> > +
> > +  /* Give the architecture a chance to overwrite default behaviour.  */
> > +  if (LK_HOOK->get_percpu_offset)
> > +      return LK_HOOK->get_percpu_offset (cpu);
> > +
> > +  percpu_elt = LK_ADDR (__per_cpu_offset) + (ulong_size * cpu);
> > +  return lk_read_addr (percpu_elt);
> > +}
> > +
> > +
> > +/* Test if a given task TASK is running.  See comment in lk-low.h for
> > +   details.  */
> > +
> > +unsigned int
> > +lk_task_running (CORE_ADDR task)
> > +{
> > +  ULONGEST *cpu_online_mask;
> > +  size_t size;
> > +  unsigned int cpu;
> > +  struct cleanup *old_chain;
> > +
> > +  size = LK_BITMAP_SIZE (cpumask);
> > +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> > +  old_chain = make_cleanup (xfree, cpu_online_mask);
> > +
> > +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> > +    {
> > +      CORE_ADDR rq;
> > +      CORE_ADDR curr;
> > +
> > +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> > +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> > +
> > +      if (curr == task)
> > +       break;
> > +    }
> > +
> > +  if (cpu == size)
> > +    cpu = LK_CPU_INVAL;
> > +
> > +  do_cleanups (old_chain);
> > +  return cpu;
> > +}
> > +
> > +/* Update running tasks with information from struct rq->curr. */
> > +
> > +static void
> > +lk_update_running_tasks ()
> > +{
> > +  ULONGEST *cpu_online_mask;
> > +  size_t size;
> > +  unsigned int cpu;
> > +  struct cleanup *old_chain;
> > +
> > +  size = LK_BITMAP_SIZE (cpumask);
> > +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> > +  old_chain = make_cleanup (xfree, cpu_online_mask);
> > +
> > +  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> > +    {
> > +      struct thread_info *tp;
> > +      CORE_ADDR rq, curr;
> > +      LONGEST pid, inf_pid;
> > +      ptid_t new_ptid, old_ptid;
> > +
> > +      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> > +      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> > +      pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> > +      inf_pid = current_inferior ()->pid;
> > +
> > +      new_ptid = ptid_build (inf_pid, pid, curr);
> > +      old_ptid = lk_cpu_to_old_ptid (cpu); /* FIXME not suitable for
> > +                                             running targets? */
> > +
> > +      tp = find_thread_ptid (old_ptid);
> > +      if (tp && tp->state != THREAD_EXITED)
> > +       thread_change_ptid (old_ptid, new_ptid);
> > +    }
> > +  do_cleanups (old_chain);
> > +}
> > +
> > +/* Update sleeping tasks by walking the task_structs starting from
> > +   init_task.  */
> > +
> > +static void
> > +lk_update_sleeping_tasks ()
> > +{
> > +  CORE_ADDR init_task, task, thread;
> > +  int inf_pid;
> > +
> > +  inf_pid = current_inferior ()->pid;
> > +  init_task = LK_ADDR (init_task);
> > +
> > +  lk_list_for_each_container (task, init_task, task_struct, tasks)
> > +    {
> > +      lk_list_for_each_container (thread, task, task_struct, thread_group)
> > +       {
> > +         int pid;
> > +         ptid_t ptid;
> > +         struct thread_info *tp;
> > +
> > +         pid = lk_read_int (thread + LK_OFFSET (task_struct, pid));
> > +         ptid = ptid_build (inf_pid, pid, thread);
> > +
> > +         tp = find_thread_ptid (ptid);
> > +         if (tp == NULL || tp->state == THREAD_EXITED)
> > +           add_thread (ptid);
> > +       }
> > +    }
> > +}
> > +
> > +/* Function for targets to_update_thread_list hook.  */
> > +
> > +static void
> > +lk_update_thread_list (struct target_ops *target)
> > +{
> > +  prune_threads ();
> > +  lk_update_running_tasks ();
> > +  lk_update_sleeping_tasks ();
> > +}
> > +
> > +/* Function for targets to_fetch_registers hook.  */
> > +
> > +static void
> > +lk_fetch_registers (struct target_ops *target,
> > +                   struct regcache *regcache, int regnum)
> > +{
> > +  CORE_ADDR task;
> > +  unsigned int cpu;
> > +
> > +  task = (CORE_ADDR) ptid_get_tid (regcache_get_ptid (regcache));
> > +  cpu = lk_task_running (task);
> > +
> > +  /* Let the target beneath fetch registers of running tasks.  */
> > +  if (cpu != LK_CPU_INVAL)
> > +    {
> > +      struct cleanup *old_inferior_ptid;
> > +
> > +      old_inferior_ptid = save_inferior_ptid ();
> > +      inferior_ptid = lk_cpu_to_old_ptid (cpu);
> > +      linux_kernel_ops->beneath->to_fetch_registers (target, regcache,
> > regnum);
> > +      do_cleanups (old_inferior_ptid);
> > +    }
> > +  else
> > +    {
> > +      struct gdbarch *gdbarch;
> > +      unsigned int i;
> > +
> > +      LK_HOOK->get_registers (task, target, regcache, regnum);
> > +
> > +      /* Mark all registers not found as unavailable.  */
> > +      gdbarch = get_regcache_arch (regcache);
> > +      for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
> > +       {
> > +         if (regcache_register_status (regcache, i) == REG_UNKNOWN)
> > +           regcache_raw_supply (regcache, i, NULL);
> > +       }
> > +    }
> > +}
> > +
> > +/* Function for the target's to_pid_to_str hook.  Marks running tasks
> > +   with an asterisk "*".  */
> > +
> > +static char *
> > +lk_pid_to_str (struct target_ops *target, ptid_t ptid)
> > +{
> > +  static char buf[64];
> > +  long pid;
> > +  CORE_ADDR task;
> > +
> > +  pid = ptid_get_lwp (ptid);
> > +  task = (CORE_ADDR) ptid_get_tid (ptid);
> > +
> > +  xsnprintf (buf, sizeof (buf), "PID: %5li%s, 0x%s",
> > +            pid, ((lk_task_running (task) != LK_CPU_INVAL) ? "*" : ""),
> > +            phex (task, lk_builtin_type_size (unsigned_long)));
> > +
> > +  return buf;
> > +}
> > +
> > +/* Function for the target's to_thread_name hook.  */
> > +
> > +static const char *
> > +lk_thread_name (struct target_ops *target, struct thread_info *ti)
> > +{
> > +  static char buf[LK_TASK_COMM_LEN + 1];
> > +  char tmp[LK_TASK_COMM_LEN + 1];
> > +  CORE_ADDR task, comm;
> > +  size_t size;
> > +
> > +  size = std::min ((unsigned int) LK_TASK_COMM_LEN,
> > +                  LK_ARRAY_LEN(LK_FIELD (task_struct, comm)));
> > +
> > +  task = (CORE_ADDR) ptid_get_tid (ti->ptid);
> > +  comm = task + LK_OFFSET (task_struct, comm);
> > +  read_memory (comm, (gdb_byte *) tmp, size);
> > +
> > +  xsnprintf (buf, sizeof (buf), "%-16s", tmp);
> > +
> > +  return buf;
> > +}
> > +
> > +/* Functions to initialize and free target_ops and its private data, as
> > +   well as functions for the target's to_open/close/detach hooks.  */
> > +
> > +/* Check if OBJFILE is a Linux kernel.  */
> > +
> > +static int
> > +lk_is_linux_kernel (struct objfile *objfile)
> > +{
> > +  int ok = 0;
> > +
> > +  if (objfile == NULL || !(objfile->flags & OBJF_MAINLINE))
> > +    return 0;
> > +
> > +  ok += lookup_minimal_symbol ("linux_banner", NULL, objfile).minsym != NULL;
> > +  ok += lookup_minimal_symbol ("_stext", NULL, objfile).minsym != NULL;
> > +  ok += lookup_minimal_symbol ("_etext", NULL, objfile).minsym != NULL;
> > +
> > +  return (ok > 2);
> > +}
> > +
> > +/* Initialize struct lk_private.  */
> > +
> > +static void
> > +lk_init_private ()
> > +{
> > +  linux_kernel_ops->to_data = XCNEW (struct lk_private);
> > +  LK_PRIVATE->hooks = XCNEW (struct lk_private_hooks);
> > +  LK_PRIVATE->data = htab_create_alloc (31, (htab_hash) lk_hash_private_data,
> > +                                       (htab_eq) lk_private_data_eq, NULL,
> > +                                       xcalloc, xfree);
> > +}
> > +
> > +/* Initialize architecture independent private data.  Must be called
> > +   _after_ symbol tables were initialized.  */
> > +
> > +static void
> > +lk_init_private_data ()
> > +{
> > +  if (LK_PRIVATE->data != NULL)
> > +    htab_empty (LK_PRIVATE->data);
> > +
> > +  LK_DECLARE_FIELD (task_struct, tasks);
> > +  LK_DECLARE_FIELD (task_struct, pid);
> > +  LK_DECLARE_FIELD (task_struct, tgid);
> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> > +  LK_DECLARE_FIELD (task_struct, comm);
> > +  LK_DECLARE_FIELD (task_struct, thread);
> > +
> > +  LK_DECLARE_FIELD (list_head, next);
> > +  LK_DECLARE_FIELD (list_head, prev);
> > +
> > +  LK_DECLARE_FIELD (rq, curr);
> > +
> > +  LK_DECLARE_FIELD (cpumask, bits);
> > +
> > +  LK_DECLARE_ADDR (init_task);
> > +  LK_DECLARE_ADDR (runqueues);
> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> > +  LK_DECLARE_ADDR (init_mm);
> > +
> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  /* linux 4.5+ */
> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);    /* linux -4.4 */
> > +  if (LK_ADDR (cpu_online_mask) == -1)
> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
> > +}
> > +
> > +/* Frees the cpu to old ptid map.  */
> > +
> > +static void
> > +lk_free_ptid_map ()
> > +{
> > +  while (LK_PRIVATE->old_ptid)
> > +    {
> > +      struct lk_ptid_map *tmp;
> > +
> > +      tmp = LK_PRIVATE->old_ptid;
> > +      LK_PRIVATE->old_ptid = tmp->next;
> > +      XDELETE (tmp);
> > +    }
> > +}
> > +
> > +/* Initialize the cpu to old ptid map.  Prefer the arch dependent
> > +   map_running_task_to_cpu hook if provided, else assume that the PID
> > +   used by the target beneath is the same as the PID in the task_struct.
> > +   See the comment on lk_ptid_map in lk-low.h for details.  */
> > +
> > +static void
> > +lk_init_ptid_map ()
> > +{
> > +  struct thread_info *ti;
> > +  ULONGEST *cpu_online_mask;
> > +  size_t size;
> > +  unsigned int cpu;
> > +  struct cleanup *old_chain;
> > +
> > +  if (LK_PRIVATE->old_ptid != NULL)
> > +    lk_free_ptid_map ();
> > +
> > +  size = LK_BITMAP_SIZE (cpumask);
> > +  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
> > +  old_chain = make_cleanup (xfree, cpu_online_mask);
> > +
> > +  ALL_THREADS (ti)
> > +    {
> > +      struct lk_ptid_map *ptid_map = XCNEW (struct lk_ptid_map);
> > +      CORE_ADDR rq, curr;
> > +      int pid;
> > +
> > +      /* Give the architecture a chance to override the default behaviour.  */
> > +      if (LK_HOOK->map_running_task_to_cpu)
> > +       {
> > +         ptid_map->cpu = LK_HOOK->map_running_task_to_cpu (ti);
> > +       }
> > +      else
> > +       {
> > +         LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
> > +           {
> > +             rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
> > +             curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
> > +             pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
> > +
> > +             if (pid == ptid_get_lwp (ti->ptid))
> > +               {
> > +                 ptid_map->cpu = cpu;
> > +                 break;
> > +               }
> > +           }
> > +         if (cpu == size)
> > +           error (_("Could not map thread with pid %d, lwp %lu to a cpu."),
> > +                  ti->ptid.pid, ti->ptid.lwp);
> > +       }
> > +      ptid_map->old_ptid = ti->ptid;
> > +      ptid_map->next = LK_PRIVATE->old_ptid;
> > +      LK_PRIVATE->old_ptid = ptid_map;
> > +    }
> > +
> > +  do_cleanups (old_chain);
> > +}
> > +
> > +/* Initializes all private data and pushes the linux kernel target, if not
> > +   already done.  */
> > +
> > +static void
> > +lk_try_push_target ()
> > +{
> > +  struct gdbarch *gdbarch;
> > +
> > +  gdbarch = current_inferior ()->gdbarch;
> > +  if (!(gdbarch && gdbarch_lk_init_private_p (gdbarch)))
> > +    error (_("Linux kernel debugging not supported on %s."),
> > +          gdbarch_bfd_arch_info (gdbarch)->printable_name);
> > +
> > +  lk_init_private ();
> > +  lk_init_private_data ();
> > +  gdbarch_lk_init_private (gdbarch);
> > +  /* Check for required arch hooks.  */
> > +  gdb_assert (LK_HOOK->get_registers);
> > +
> > +  lk_init_ptid_map ();
> > +  lk_update_thread_list (linux_kernel_ops);
> > +
> > +  if (!target_is_pushed (linux_kernel_ops))
> > +    push_target (linux_kernel_ops);
> > +}
> > +
> > +/* Function for the target's to_open hook.  */
> > +
> > +static void
> > +lk_open (const char *args, int from_tty)
> > +{
> > +  struct objfile *objfile;
> > +
> > +  if (target_is_pushed (linux_kernel_ops))
> > +    {
> > +      printf_unfiltered (_("Linux kernel target already pushed.  "
> > +                          "Aborting\n"));
> > +      return;
> > +    }
> > +
> > +  for (objfile = current_program_space->objfiles; objfile;
> > +       objfile = objfile->next)
> > +    {
> > +      if (lk_is_linux_kernel (objfile)
> > +         && ptid_get_pid (inferior_ptid) != 0)
> > +       {
> > +         lk_try_push_target ();
> > +         return;
> > +       }
> > +    }
> > +  printf_unfiltered (_("Could not find a valid Linux kernel object file.  "
> > +                      "Aborting.\n"));
> > +}
> > +
> > +/* Function for the target's to_close hook.  Deletes all private data.  */
> > +
> > +static void
> > +lk_close (struct target_ops *ops)
> > +{
> > +  htab_delete (LK_PRIVATE->data);
> > +  lk_free_ptid_map ();
> > +  XDELETE (LK_PRIVATE->hooks);
> > +
> > +  XDELETE (LK_PRIVATE);
> > +  linux_kernel_ops->to_data = NULL;
> > +}
> > +
> > +/* Function for the target's to_detach hook.  */
> > +
> > +static void
> > +lk_detach (struct target_ops *t, const char *args, int from_tty)
> > +{
> > +  struct target_ops *beneath = linux_kernel_ops->beneath;
> > +
> > +  unpush_target (linux_kernel_ops);
> > +  reinit_frame_cache ();
> > +  if (from_tty)
> > +    printf_filtered (_("Linux kernel target detached.\n"));
> > +
> > +  beneath->to_detach (beneath, args, from_tty);
> > +}
> > +
> > +/* Function for new objfile observer.  */
> > +
> > +static void
> > +lk_observer_new_objfile (struct objfile *objfile)
> > +{
> > +  if (lk_is_linux_kernel (objfile)
> > +      && ptid_get_pid (inferior_ptid) != 0)
> > +    lk_try_push_target ();
> > +}
> > +
> > +/* Function for inferior created observer.  */
> > +
> > +static void
> > +lk_observer_inferior_created (struct target_ops *ops, int from_tty)
> > +{
> > +  struct objfile *objfile;
> > +
> > +  if (ptid_get_pid (inferior_ptid) == 0)
> > +    return;
> > +
> > +  for (objfile = current_inferior ()->pspace->objfiles; objfile;
> > +       objfile = objfile->next)
> > +    {
> > +      if (lk_is_linux_kernel (objfile))
> > +       {
> > +         lk_try_push_target ();
> > +         return;
> > +       }
> > +    }
> > +}
> > +
> > +/* Initialize linux kernel target.  */
> > +
> > +static void
> > +init_linux_kernel_ops (void)
> > +{
> > +  struct target_ops *t;
> > +
> > +  if (linux_kernel_ops != NULL)
> > +    return;
> > +
> > +  t = XCNEW (struct target_ops);
> > +  t->to_shortname = "linux-kernel";
> > +  t->to_longname = "linux kernel support";
> > +  t->to_doc = "Adds support to debug the Linux kernel";
> > +
> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
> > +
> > +  t->to_open = lk_open;
> > +  t->to_close = lk_close;
> > +  t->to_detach = lk_detach;
> > +  t->to_fetch_registers = lk_fetch_registers;
> > +  t->to_update_thread_list = lk_update_thread_list;
> > +  t->to_pid_to_str = lk_pid_to_str;
> > +  t->to_thread_name = lk_thread_name;
> > +
> > +  t->to_stratum = thread_stratum;
> > +  t->to_magic = OPS_MAGIC;
> > +
> > +  linux_kernel_ops = t;
> > +
> > +  add_target (t);
> > +}
> > +
> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
> > +extern initialize_file_ftype _initialize_linux_kernel;
> > +
> > +void
> > +_initialize_linux_kernel (void)
> > +{
> > +  init_linux_kernel_ops ();
> > +
> > +  observer_attach_new_objfile (lk_observer_new_objfile);
> > +  observer_attach_inferior_created (lk_observer_inferior_created);
> > +}
> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> > new file mode 100644
> > index 0000000..292ef97
> > --- /dev/null
> > +++ b/gdb/lk-low.h
> > @@ -0,0 +1,310 @@
> > +/* Basic Linux kernel support, architecture independent.
> > +
> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > +
> > +   This file is part of GDB.
> > +
> > +   This program is free software; you can redistribute it and/or modify
> > +   it under the terms of the GNU General Public License as published by
> > +   the Free Software Foundation; either version 3 of the License, or
> > +   (at your option) any later version.
> > +
> > +   This program is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +   GNU General Public License for more details.
> > +
> > +   You should have received a copy of the GNU General Public License
> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
> > +
> > +#ifndef __LK_LOW_H__
> > +#define __LK_LOW_H__
> > +
> > +#include "target.h"
> > +
> > +extern struct target_ops *linux_kernel_ops;
> > +
> > +/* Constants copied from the Linux kernel.  */
> > +#define LK_TASK_COMM_LEN 16
> > +#define LK_BITS_PER_BYTE 8
> > +
> > +/* Definitions used in linux kernel target.  */
> > +#define LK_CPU_INVAL -1U
> > +
> > +/* Private data structs for this target.  */
> > +/* Forward declarations.  */
> > +struct lk_private_hooks;
> > +struct lk_ptid_map;
> > +
> > +/* Shorthand access to private data.  */
> > +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> > +#define LK_HOOK (LK_PRIVATE->hooks)
> > +
> > +struct lk_private
> > +{
> > +  /* Hashtab for needed addresses, structs and fields.  */
> > +  htab_t data;
> > +
> > +  /* Linked list to map between cpu number and original ptid from target
> > +     beneath.  */
> > +  struct lk_ptid_map *old_ptid;
> > +
> > +  /* Hooks for architecture dependent functions.  */
> > +  struct lk_private_hooks *hooks;
> > +};
> > +
> > +/* We use the following convention for PTIDs:
> > +
> > +   ptid->pid = the inferior's PID
> > +   ptid->lwp = PID from the task_struct
> > +   ptid->tid = address of the task_struct
> > +
> > +   Using the task_struct's address as TID has two reasons.  First, we
> > +   need it quite often and there is no other reasonable way to pass it
> > +   down.  Second, it helps us to distinguish swapper tasks, as they all
> > +   have PID = 0.
> > +
> > +   Furthermore we cannot rely on the target beneath to use the same PID
> > +   as the task_struct.  Thus we need a mapping between our PTID and the
> > +   PTID of the target beneath.  Otherwise it is impossible to pass jobs,
> > +   e.g. fetching registers of running tasks, to the target beneath.  */
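A rough sketch of this convention, using stand-in types rather than GDB's real ptid_t and ptid_build (lk_task_ptid is a made-up helper name for illustration only):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for GDB's ptid_t and ptid_build, for illustration only.  */
typedef struct { int pid; long lwp; long tid; } ptid_t;

static ptid_t
ptid_build (int pid, long lwp, long tid)
{
  ptid_t p = { pid, lwp, tid };
  return p;
}

/* Build a PTID following the convention above: the inferior's PID, the
   PID from the task_struct, and the task_struct's address as TID.  */
static ptid_t
lk_task_ptid (int inf_pid, int task_pid, uint64_t task_struct_addr)
{
  return ptid_build (inf_pid, task_pid, (long) task_struct_addr);
}
```

Note how two swapper tasks, both with PID 0, still get distinct ptids because their task_struct addresses differ.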
> > +
> > +/* Private data struct to map between our and the target beneath PTID.  */
> > +
> > +struct lk_ptid_map
> > +{
> > +  struct lk_ptid_map *next;
> > +  unsigned int cpu;
> > +  ptid_t old_ptid;
> > +};
> > +
> > +/* Private data struct to be stored in hashtab.  */
> > +
> > +struct lk_private_data
> > +{
> > +  const char *alias;
> > +
> > +  union
> > +  {
> > +    CORE_ADDR addr;
> > +    struct type *type;
> > +    struct field *field;
> > +  } data;
> > +};
> > +
> > +/* Wrapper for htab_hash_string to work with our private data.  */
> > +
> > +static inline hashval_t
> > +lk_hash_private_data (const struct lk_private_data *entry)
> > +{
> > +  return htab_hash_string (entry->alias);
> > +}
> > +
> > +/* Function for htab_eq to work with our private data.  */
> > +
> > +static inline int
> > +lk_private_data_eq (const struct lk_private_data *entry,
> > +                   const struct lk_private_data *element)
> > +{
> > +  return streq (entry->alias, element->alias);
> > +}
> > +
> > +/* Wrapper for htab_find_slot to work with our private data.  Do not use
> > +   directly, use the macros below instead.  */
> > +
> > +static inline void **
> > +lk_find_slot (const char *alias)
> > +{
> > +  const struct lk_private_data dummy = { alias };
> > +  return htab_find_slot (LK_PRIVATE->data, &dummy, INSERT);
> > +}
> > +
> > +/* Wrapper for htab_find to work with our private data.  Do not use
> > +   directly, use the macros below instead.  */
> > +
> > +static inline struct lk_private_data *
> > +lk_find (const char *alias)
> > +{
> > +  const struct lk_private_data dummy = { alias };
> > +  return (struct lk_private_data *) htab_find (LK_PRIVATE->data, &dummy);
> > +}
> > +
> > +/* Functions to initialize private data.  Do not use directly, use the
> > +   macros below instead.  */
> > +
> > +extern struct lk_private_data *lk_init_addr (const char *name,
> > +                                            const char *alias, int silent);
> > +extern struct lk_private_data *lk_init_struct (const char *name,
> > +                                              const char *alias,
> > +                                              int silent);
> > +extern struct lk_private_data *lk_init_field (const char *s_name,
> > +                                             const char *f_name,
> > +                                             const char *s_alias,
> > +                                             const char *f_alias,
> > +                                             int silent);
> > +
> > +/* The names we use to store our private data in the hashtab.  */
> > +
> > +#define LK_STRUCT_ALIAS(s_name) ("struct " #s_name)
> > +#define LK_FIELD_ALIAS(s_name, f_name) (#s_name " " #f_name)
> > +
> > +/* Macros to initialize addresses and fields, where (S_/F_)NAME is the
> > +   variable's name as used in Linux.  LK_DECLARE_FIELD also initializes
> > +   the corresponding struct entry.  Throws an error if no symbol with the
> > +   given name is found.  */
> > +
> > +#define LK_DECLARE_ADDR(name) \
> > +  lk_init_addr (#name, #name, 0)
> > +#define LK_DECLARE_FIELD(s_name, f_name) \
> > +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> > +                LK_FIELD_ALIAS (s_name, f_name), 0)
> > +
> > +/* Same as LK_DECLARE_*, but returns NULL instead of throwing an error
> > +   if no symbol was found.  The caller is responsible for checking for
> > +   possible errors.  */
> > +
> > +#define LK_DECLARE_ADDR_SILENT(name) \
> > +  lk_init_addr (#name, #name, 1)
> > +#define LK_DECLARE_FIELD_SILENT(s_name, f_name) \
> > +  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
> > +                LK_FIELD_ALIAS (s_name, f_name), 1)
> > +
> > +/* Same as LK_DECLARE_*_SILENT, but allows you to give an ALIAS name.  If
> > +   used for a struct, the struct has to be declared explicitly _before_
> > +   any of its fields.  These are meant to be used when a variable in the
> > +   kernel was simply renamed (at least from our point of view).  The
> > +   caller is responsible for checking for possible errors.  */
> > +
> > +#define LK_DECLARE_ADDR_ALIAS(name, alias) \
> > +  lk_init_addr (#name, #alias, 1)
> > +#define LK_DECLARE_STRUCT_ALIAS(s_name, alias) \
> > +  lk_init_struct (#s_name, LK_STRUCT_ALIAS (alias), 1)
> > +#define LK_DECLARE_FIELD_ALIAS(s_alias, f_name, f_alias) \
> > +  lk_init_field (NULL, #f_name, LK_STRUCT_ALIAS (s_alias), \
> > +                LK_FIELD_ALIAS (s_alias, f_alias), 1)
> > +
> > +/* Macros to retrieve private data from the hashtab.  Return NULL (or -1
> > +   for LK_ADDR) if no entry with the given ALIAS exists.  The caller only
> > +   needs to check for possible errors if not already done at
> > +   initialization.  */
> > +
> > +#define LK_ADDR(alias) \
> > +  (lk_find (#alias) ? (lk_find (#alias))->data.addr : -1)
> > +#define LK_STRUCT(alias) \
> > +  (lk_find (LK_STRUCT_ALIAS (alias)) \
> > +   ? (lk_find (LK_STRUCT_ALIAS (alias)))->data.type \
> > +   : NULL)
> > +#define LK_FIELD(s_alias, f_alias) \
> > +  (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)) \
> > +   ? (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)))->data.field \
> > +   : NULL)
> > +
> > +
> > +/* Definitions for architecture dependent hooks.  */
> > +/* Hook to read registers from the target and supply their content
> > +   to the regcache.  */
> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> > +                                      struct target_ops *target,
> > +                                      struct regcache *regcache,
> > +                                      int regnum);
> > +
> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures
> > +   that do not use the __per_cpu_offset array to determine the offset
> > +   have to supply this hook.  */
> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> > +
> > +/* Hook to map a running task to a logical CPU.  Required if the target
> > +   beneath uses a different PID than the one found in struct rq.  */
> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu)
> > +  (struct thread_info *ti);
> > +
> > +struct lk_private_hooks
> > +{
> > +  /* required */
> > +  lk_hook_get_registers get_registers;
> > +
> > +  /* optional, required if __per_cpu_offset array is not used to determine
> > +     offset.  */
> > +  lk_hook_get_percpu_offset get_percpu_offset;
> > +
> > +  /* optional, required if the target beneath uses a different PID than
> > +     the one found in struct rq.  */
> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> > +};
> > +
> > +/* Helper functions to read and return a value at a given ADDRess.  */
> > +extern int lk_read_int (CORE_ADDR addr);
> > +extern unsigned int lk_read_uint (CORE_ADDR addr);
> > +extern LONGEST lk_read_long (CORE_ADDR addr);
> > +extern ULONGEST lk_read_ulong (CORE_ADDR addr);
> > +extern CORE_ADDR lk_read_addr (CORE_ADDR addr);
> > +
> > +/* Reads a bitmap at a given ADDRess of size SIZE (in bits).  Allocates
> > +   and returns an array of ulongs.  The caller is responsible for freeing
> > +   the array when it is no longer needed.  */
> > +extern ULONGEST *lk_read_bitmap (CORE_ADDR addr, size_t size);
> > +
> > +/* Walks the bitmap BITMAP of size SIZE starting from bit (index) BIT.
> > +   Returns the index of the next set bit or SIZE, when the end of the
> > +   bitmap was reached.  To iterate over all set bits use the macro
> > +   LK_BITMAP_FOR_EACH_SET_BIT defined below.  */
> > +extern size_t lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size,
> > +                                      size_t bit);
> > +#define LK_BITMAP_FOR_EACH_SET_BIT(bitmap, size, bit)                  \
> > +  for ((bit) = lk_bitmap_find_next_bit ((bitmap), (size), 0);          \
> > +       (bit) < (size);                                                 \
> > +       (bit) = lk_bitmap_find_next_bit ((bitmap), (size), (bit) + 1))
> > +
> > +/* Returns the size of BITMAP in bits.  */
> > +#define LK_BITMAP_SIZE(bitmap) \
> > +  (FIELD_SIZE (LK_FIELD (bitmap, bits)) * LK_BITS_PER_BYTE)
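A self-contained sketch of the iteration contract these macros implement, with lk_bitmap_find_next_bit reimplemented over a plain uint64_t array (the real function reads the bitmap in the target's word size and byte order; the names here are stand-ins):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BITS_PER_WORD 64

/* Return the index of the next set bit at or after BIT, or SIZE if no
   further bit is set.  Mirrors lk_bitmap_find_next_bit's contract.  */
static size_t
find_next_bit (const uint64_t *bitmap, size_t size, size_t bit)
{
  for (; bit < size; bit++)
    if (bitmap[bit / BITS_PER_WORD] & ((uint64_t) 1 << (bit % BITS_PER_WORD)))
      return bit;
  return size;
}

/* Same shape as LK_BITMAP_FOR_EACH_SET_BIT above.  */
#define FOR_EACH_SET_BIT(bitmap, size, bit)            \
  for ((bit) = find_next_bit ((bitmap), (size), 0);    \
       (bit) < (size);                                 \
       (bit) = find_next_bit ((bitmap), (size), (bit) + 1))
```

This is how lk_init_ptid_map walks the online-CPU mask: each iteration yields the index of one set bit, i.e. one online CPU.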
> > +
> > +/* Returns the Hamming weight, i.e. the number of set bits, of bitmap
> > +   BITMAP with size SIZE (in bits).  */
> > +extern size_t lk_bitmap_hweight (ULONGEST *bitmap, size_t size);
> > +
> > +
> > +/* Shorthand access to the current gdbarch's builtin types and their
> > +   size (in bytes).  For TYPE replace spaces " " by underscores "_", e.g.
> > +   "unsigned int" => "unsigned_int".  */
> > +#define lk_builtin_type(type)                                  \
> > +  (builtin_type (current_inferior ()->gdbarch)->builtin_##type)
> > +#define lk_builtin_type_size(type)             \
> > +  (lk_builtin_type (type)->length)
> > +
> > +/* If field FIELD is an array returns its length (in #elements).  */
> > +#define LK_ARRAY_LEN(field)                    \
> > +  (FIELD_SIZE (field) / FIELD_TARGET_SIZE (field))
> > +
> > +/* Shorthand access to the offset of field F_NAME in struct S_NAME.  */
> > +#define LK_OFFSET(s_name, f_name)              \
> > +  (FIELD_OFFSET (LK_FIELD (s_name, f_name)))
> > +
> > +/* Returns the container of field FNAME of struct SNAME located at address
> > +   ADDR.  */
> > +#define LK_CONTAINER_OF(addr, sname, fname)            \
> > +  ((addr) - LK_OFFSET (sname, fname))
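LK_CONTAINER_OF mirrors the kernel's container_of idiom: subtract the field's offset from the field's address to recover the enclosing struct.  A host-side sketch of the arithmetic, with a made-up struct in place of a real kernel type:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* A made-up struct standing in for a kernel struct that lives in
   target memory.  */
struct task_like
{
  int pid;
  uint64_t tasks;   /* list_head-like linkage field.  */
};

/* Given the address of the TASKS field, recover the address of the
   enclosing struct, just as LK_CONTAINER_OF does with target
   addresses.  */
static uint64_t
container_of_addr (uint64_t field_addr)
{
  return field_addr - offsetof (struct task_like, tasks);
}
```

This is the step lk_list_for_each_container relies on when turning a list_head address back into a task_struct address.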
> > +
> > +/* Divides numerator N by denominator D and rounds up the result.  */
> > +#define LK_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
> > +
> > +
> > +/* Additional access macros for fields in the style of gdbtypes.h.  */
> > +/* Returns the size of field FIELD (in bytes).  If FIELD is an array,
> > +   returns the size of the whole array.  */
> > +#define FIELD_SIZE(field)                      \
> > +  TYPE_LENGTH (check_typedef (FIELD_TYPE (*field)))
> > +
> > +/* Returns the size of the target type of field FIELD (in bytes).  If
> > +   FIELD is an array, returns the size of its elements.  */
> > +#define FIELD_TARGET_SIZE(field)               \
> > +  TYPE_LENGTH (check_typedef (TYPE_TARGET_TYPE (FIELD_TYPE (*field))))
> > +
> > +/* Returns the offset of field FIELD (in bytes).  */
> > +#define FIELD_OFFSET(field)                    \
> > +  (FIELD_BITPOS (*field) / TARGET_CHAR_BIT)
> > +
> > +/* Provides the per_cpu_offset of cpu CPU.  If the architecture
> > +   provides a get_percpu_offset hook, the call is passed to it.  Otherwise
> > +   returns the __per_cpu_offset[CPU] element.  */
> > +extern CORE_ADDR lk_get_percpu_offset (unsigned int cpu);
> > +
> > +/* Tests if a given task TASK is running. Returns either the cpu-id
> > +   if running or LK_CPU_INVAL if not.  */
> > +extern unsigned int lk_task_running (CORE_ADDR task);
> > +#endif /* __LK_LOW_H__ */
> > --
> > 2.8.4
> >  
>
  
Philipp Rudo May 3, 2017, 3:19 p.m. UTC | #8
Hi Omair,

and now the third mail.

On Wed, 3 May 2017 19:12:36 +0500
Omair Javaid <omair.javaid@linaro.org> wrote:

> On 24 April 2017 at 20:24, Andreas Arnez <arnez@linux.vnet.ibm.com> wrote:
> > On Thu, Apr 20 2017, Omair Javaid wrote:
> >  
> >> Hi Philipp and Andreas,
> >>
> >> I have some further comments on this patch specifically about copying
> >> task_struct->pid into ptid->lwp and using task_struct address as tid.
> >>
> >> I see that we are overriding lwp, tid which any target beneath might
> >> be using differently.
> >>
> >> So my suggestion for storing task_struct->pid or the task_struct
> >> address is to use private_thread_info in binutils-gdb/gdb/gdbthread.h
> >> for this information.
> >
> > The current version of the patch series is mainly focused on dump
> > targets.  Remote targets require some additional changes.  We've
> > discussed the use of private_thread_info before, and the last I've seen
> > is that it is not suitable either, because remote.c uses it already:
> >
> >   https://sourceware.org/ml/gdb-patches/2017-02/msg00543.html
> >
> > In my view, the private_thread_info field really is a hack, and we are
> > now facing its limits.  It provides some space for a single thread layer
> > to store information into, but not for multiple such layers.  In the
> > case of the Linux kernel we at least have two different thread layers:
> > the CPU layer (each "thread" is a CPU), and the kernel task layer on top
> > of that.
> >
> > I think we need to allow a target to maintain its *own* thread list.
> > The CPU "thread list" would be maintained by the target beneath
> > (remote/dump), and the kernel task list would be maintained by the LK
> > target.  The ptid namespaces could be completely separate.  
> 
> Hi Philip and Andreas,
> 
> Furthermore on this topic, the remote stub assigns a common pid to all
> CPU threads and uses the LWP as the CPU number, while the tid is left
> zero.  I think we'll have to rework the old to new ptid mapping
> mechanism a little bit in order to adjust all types of targets beneath.

As mentioned in the mail before, the mapping between old and new ptid is a
hack.  Feel free to change it.  Although I think that the only proper solution
is to allow every target to manage its own thread_list.

> In your implementation of lk_init_ptid_map() you are testing the pid
> from task_struct against the lwp set by the target beneath.  In case of
> remote this will never be equal to the pid, as it is marked as the cpu
> number.

For dumps it depends on what was written in the dump.  Some architectures
write the pid there, some the cpu (e.g. s390).  That's why I made it
possible for the architecture to override the default behavior.

I think the cleanest solution is to turn lk_private into a class and make
the hooks virtual class methods (as Yao suggested).  What do you think?
 
> Also in your implementation of lk_update_running_tasks the lwp is being
> updated with the pid read from task_struct and the tid is the
> task_struct address.
> We are doing this not only for linux thread layer tasks but also for
> CPU tasks in lk_update_running_tasks.  This causes some sync issues on
> live targets, as every time we halt a new thread is reported by the
> remote.
> 
> I think the linux thread layer should only update tasks it has created
> itself, i.e. tasks created by parsing task_struct.  We can devise a
> mechanism to map CPU tasks to the curr task_struct and leave CPU tasks
> as they were created by the target beneath with the same ptids.
> 
> What's your take on this?

I'm against this.  When we only update the threads we created ourselves
there will be multiple problems.  For example you will see running tasks
twice with "info thread", once as the task_struct version and once as the
remote version.  Furthermore there will be no mapping between both
versions.  Thus the task_struct version will show the outdated state from
when the task was last "unscheduled", while the remote version shows the
task's actual state but has no information about kernel internals, e.g.
the linux pid.  This would be extremely confusing!

In addition there is a chance that you have multiple threads with the same
ptid, e.g. cpu1 colliding with the init process with pid = 1.  GDB is not able
to handle two threads with the same ptid.  This could (most likely) be
prevented by using the task_struct address as tid.  But there is nothing
preventing the target beneath from doing the same.

Long story short: GDB's current thread implementation is a hack whose
limitations we are now reaching.  The only clean solution is for every
target to have its own thread_list.

Philipp

> >> I also have reservation about use of old_ptid naming in struct
> >> lk_private and struct lk_ptid_map.
> >>
> >> old_ptid naming is a little confusing kindly choose a distinguishable
> >> name for old_ptid variables in both lk_private and lk_ptid_map.
> >>
> >> Further, here's an implementation of the bitmap_weight function from
> >> the linux kernel.  Kindly see if your implementation can be improved
> >> and moved to a generic area in gdb.
> >>
> >> int __bitmap_weight(const unsigned long *bitmap, int bits)
> >> {
> >>         int k, w = 0, lim = bits/BITS_PER_LONG;
> >>
> >>         for (k = 0; k < lim; k++)
> >>                 w += hweight_long(bitmap[k]);
> >>
> >>         if (bits % BITS_PER_LONG)
> >>                 w += hweight_long(bitmap[k] & BITMAP_LAST_WORD_MASK(bits));
> >>
> >>         return w;
> >> }
> >
> > The __bitmap_weight function is specific to Linux, so I'm not sure we
> > want to move it to a generic area.  For big-endian targets the function
> > depends on the width of Linux' "unsigned long" type, because
> > BITMAP_LAST_WORD_MASK builds a mask for the *least significant* bits
> > instead of the *lowest-addressed* ones.
> >
> > It's probably true that the performance of lk_bitmap_hweight could be
> > improved.  For instance, with some care a function like popcount_hwi()
> > in GCC's hwint.h could be exploited, even if the target's word width and
> > byte order may not match the GDB client's.  This would not make the
> > function simpler, though.
> >
> > --
> > Andreas
> >  
>
  
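The portable population-count approach Andreas describes could look roughly like this.  It is only an illustration and deliberately ignores the word-width and byte-order concerns he raises; bitmap_hweight and popcount_u64 are made-up names mirroring lk_bitmap_hweight's contract:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Count set bits in one 64-bit word (Kernighan's method; compilers
   also offer builtins such as __builtin_popcountll).  */
static unsigned int
popcount_u64 (uint64_t w)
{
  unsigned int n = 0;

  for (; w != 0; w &= w - 1)   /* Clear the lowest set bit each round.  */
    n++;
  return n;
}

/* Hamming weight of a bitmap of SIZE bits stored in 64-bit words,
   masking off the unused tail bits of the last word as the kernel's
   __bitmap_weight does.  */
static size_t
bitmap_hweight (const uint64_t *bitmap, size_t size)
{
  size_t i, w = 0;

  for (i = 0; i < size / 64; i++)
    w += popcount_u64 (bitmap[i]);
  if (size % 64 != 0)
    w += popcount_u64 (bitmap[i] & (((uint64_t) 1 << (size % 64)) - 1));
  return w;
}
```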
Philipp Rudo May 3, 2017, 3:36 p.m. UTC | #9
Hi Yao,


On Tue, 02 May 2017 12:14:40 +0100
Yao Qi <qiyaoltc@gmail.com> wrote:

> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
> 
> Hi Philipp,
> 
> > +/* Initialize architecture independent private data.  Must be called
> > +   _after_ symbol tables were initialized.  */
> > +
> > +static void
> > +lk_init_private_data ()
> > +{
> > +  if (LK_PRIVATE->data != NULL)
> > +    htab_empty (LK_PRIVATE->data);
> > +
> > +  LK_DECLARE_FIELD (task_struct, tasks);
> > +  LK_DECLARE_FIELD (task_struct, pid);
> > +  LK_DECLARE_FIELD (task_struct, tgid);
> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> > +  LK_DECLARE_FIELD (task_struct, comm);
> > +  LK_DECLARE_FIELD (task_struct, thread);
> > +
> > +  LK_DECLARE_FIELD (list_head, next);
> > +  LK_DECLARE_FIELD (list_head, prev);
> > +
> > +  LK_DECLARE_FIELD (rq, curr);
> > +
> > +  LK_DECLARE_FIELD (cpumask, bits);
> > +
> > +  LK_DECLARE_ADDR (init_task);
> > +  LK_DECLARE_ADDR (runqueues);
> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> > +  LK_DECLARE_ADDR (init_mm);
> > +
> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  /* linux 4.5+ */
> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);    /* linux -4.4 */
> > +  if (LK_ADDR (cpu_online_mask) == -1)
> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
> > +}
> > +  
> 
> > +
> > +/* Initialize linux kernel target.  */
> > +
> > +static void
> > +init_linux_kernel_ops (void)
> > +{
> > +  struct target_ops *t;
> > +
> > +  if (linux_kernel_ops != NULL)
> > +    return;
> > +
> > +  t = XCNEW (struct target_ops);
> > +  t->to_shortname = "linux-kernel";
> > +  t->to_longname = "linux kernel support";
> > +  t->to_doc = "Adds support to debug the Linux kernel";
> > +
> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
> > +
> > +  t->to_open = lk_open;
> > +  t->to_close = lk_close;
> > +  t->to_detach = lk_detach;
> > +  t->to_fetch_registers = lk_fetch_registers;
> > +  t->to_update_thread_list = lk_update_thread_list;
> > +  t->to_pid_to_str = lk_pid_to_str;
> > +  t->to_thread_name = lk_thread_name;
> > +
> > +  t->to_stratum = thread_stratum;
> > +  t->to_magic = OPS_MAGIC;
> > +
> > +  linux_kernel_ops = t;
> > +
> > +  add_target (t);
> > +}
> > +
> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
> > +extern initialize_file_ftype _initialize_linux_kernel;
> > +
> > +void
> > +_initialize_linux_kernel (void)
> > +{
> > +  init_linux_kernel_ops ();
> > +
> > +  observer_attach_new_objfile (lk_observer_new_objfile);
> > +  observer_attach_inferior_created (lk_observer_inferior_created);
> > +}
> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> > new file mode 100644
> > index 0000000..292ef97
> > --- /dev/null
> > +++ b/gdb/lk-low.h
> > @@ -0,0 +1,310 @@
> > +/* Basic Linux kernel support, architecture independent.
> > +
> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > +
> > +   This file is part of GDB.
> > +
> > +   This program is free software; you can redistribute it and/or modify
> > +   it under the terms of the GNU General Public License as published by
> > +   the Free Software Foundation; either version 3 of the License, or
> > +   (at your option) any later version.
> > +
> > +   This program is distributed in the hope that it will be useful,
> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > +   GNU General Public License for more details.
> > +
> > +   You should have received a copy of the GNU General Public License
> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> > */ +
> > +#ifndef __LK_LOW_H__
> > +#define __LK_LOW_H__
> > +
> > +#include "target.h"
> > +
> > +extern struct target_ops *linux_kernel_ops;
> > +
> > +/* Copy constants defined in Linux kernel.  */
> > +#define LK_TASK_COMM_LEN 16
> > +#define LK_BITS_PER_BYTE 8
> > +
> > +/* Definitions used in linux kernel target.  */
> > +#define LK_CPU_INVAL -1U
> > +
> > +/* Private data structs for this target.  */
> > +/* Forward declarations.  */
> > +struct lk_private_hooks;
> > +struct lk_ptid_map;
> > +
> > +/* Short hand access to private data.  */
> > +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> > +#define LK_HOOK (LK_PRIVATE->hooks)
> > +
> > +struct lk_private  
> 
> "private" here is a little confusing.  How about rename it to "linux_kernel"?

I called it "private" as it is the targets private data stored in its to_data
hook.  But I don't mind renaming it.  Especially ...
 
> > +{
> > +  /* Hashtab for needed addresses, structs and fields.  */
> > +  htab_t data;
> > +
> > +  /* Linked list to map between cpu number and original ptid from target
> > +     beneath.  */
> > +  struct lk_ptid_map *old_ptid;
> > +
> > +  /* Hooks for architecture dependent functions.  */
> > +  struct lk_private_hooks *hooks;
> > +};
> > +  
> 
> Secondly, can we change it to a class and function pointers in
> lk_private_hooks become virtual functions.  gdbarch_lk_init_private
> returns a pointer to an instance of sub-class of "linux_kernel".
> 
> lk_init_private_data can be put the constructor of base class, to add
> entries to "data", and sub-class (in each gdbarch) can add their own
> specific stuff.

... when class-ifying the struct, which is already on my long ToDo-list.  This
struct is a leftover from when I started working on the project shortly before
gdb-7.12 was released.  I didn't think that the C++-yfication would kick off
that fast and started with plain C ...
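
Yao's suggestion above — turning struct lk_private plus its function-pointer
hooks into a class hierarchy with virtual methods — could look roughly like
the sketch below.  All names here (linux_kernel, s390_linux_kernel, the
placeholder offsets) are illustrative, not the actual patch API:

```cpp
#include <string>
#include <unordered_set>

/* Base class replacing struct lk_private.  The constructor takes over the
   role of lk_init_private_data and declares the architecture-independent
   symbols; the pure/virtual methods replace struct lk_private_hooks.  */
class linux_kernel
{
public:
  linux_kernel ()
  {
    /* Symbols every architecture needs, formerly added via the
       LK_DECLARE_* macros.  */
    declare_addr ("init_task");
    declare_addr ("runqueues");
  }
  virtual ~linux_kernel () = default;

  /* Formerly the required get_registers hook.  */
  virtual void get_registers (unsigned long task, int regnum) = 0;

  /* Formerly the optional get_percpu_offset hook; the default would read
     the __per_cpu_offset array, architectures may override.  */
  virtual unsigned long get_percpu_offset (unsigned int cpu)
  {
    return cpu * 0x1000;  /* placeholder for the real array lookup */
  }

  bool knows (const std::string &name) const
  {
    return data.count (name) != 0;
  }

protected:
  void declare_addr (const std::string &name) { data.insert (name); }

private:
  /* Stands in for the htab_t in struct lk_private.  */
  std::unordered_set<std::string> data;
};

/* An architecture subclass adds its own symbols in its constructor and
   overrides the hooks it needs, as gdbarch_lk_init_private would return
   an instance of such a subclass.  */
class s390_linux_kernel : public linux_kernel
{
public:
  s390_linux_kernel () { declare_addr ("lowcore_ptr"); }

  void get_registers (unsigned long, int) override
  { /* arch-specific register fetch would go here */ }

  unsigned long get_percpu_offset (unsigned int cpu) override
  {
    return cpu * 0x2000;  /* stand-in: s390 does not use __per_cpu_offset */
  }
};
```

The function-pointer table and its "required"/"optional" comments disappear:
required hooks become pure virtual, optional ones get a default implementation.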

Thanks
Philipp

> > +
> > +/* Functions to initialize private data.  Do not use directly, use the
> > +   macros below instead.  */
> > +
> > +extern struct lk_private_data *lk_init_addr (const char *name,
> > +					     const char *alias, int
> > silent); +extern struct lk_private_data *lk_init_struct (const char *name,
> > +					       const char *alias, int
> > silent);  
> 
> > +
> > +/* Definitions for architecture dependent hooks.  */
> > +/* Hook to read registers from the target and supply their content
> > +   to the regcache.  */
> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> > +				       struct target_ops *target,
> > +				       struct regcache *regcache,
> > +				       int regnum);
> > +
> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
> > +   do not use the __per_cpu_offset array to determin the offset have to
> > +   supply this hook.  */
> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> > +
> > +/* Hook to map a running task to a logical CPU.  Required if the target
> > +   beneath uses a different PID as struct rq.  */
> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct
> > thread_info *ti); +
> > +struct lk_private_hooks
> > +{
> > +  /* required */
> > +  lk_hook_get_registers get_registers;
> > +
> > +  /* optional, required if __per_cpu_offset array is not used to determine
> > +     offset.  */
> > +  lk_hook_get_percpu_offset get_percpu_offset;
> > +
> > +  /* optional, required if the target beneath uses a different PID as
> > struct
> > +     rq.  */
> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> > +};  
>
  
Omair Javaid May 7, 2017, 11:54 p.m. UTC | #10
Hi Philipp,

Thanks for writing back. I hope you are feeling better now.

I am trying to manage our basic live thread implementation within the
limits you have set out in your patches.

However I am interested in knowing what are your plans for immediate
future like next couple of weeks.

If you are not planning on making any particular design changes to the
current version of your patches then probably I will continue working
using your patches as base.

Otherwise, if you plan to make any further changes, like going for a
separate thread list implementation for all layers of targets, then I
can also divert away from your patches for a while until the next update
is posted.

I am already diverting away from Peter's original implementation
because of some basic limitations pointed out during previous reviews.
I don't have a reliable solution right now, but I'm trying to find one;
let's see if I can manage to upgrade this current hack for live threads
as well.

--
Omair.

On 3 May 2017 at 20:36, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> Hi Yao,
>
>
> On Tue, 02 May 2017 12:14:40 +0100
> Yao Qi <qiyaoltc@gmail.com> wrote:
>
>> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
>>
>> Hi Philipp,
>>
>> > +/* Initialize architecture independent private data.  Must be called
>> > +   _after_ symbol tables were initialized.  */
>> > +
>> > +static void
>> > +lk_init_private_data ()
>> > +{
>> > +  if (LK_PRIVATE->data != NULL)
>> > +    htab_empty (LK_PRIVATE->data);
>> > +
>> > +  LK_DECLARE_FIELD (task_struct, tasks);
>> > +  LK_DECLARE_FIELD (task_struct, pid);
>> > +  LK_DECLARE_FIELD (task_struct, tgid);
>> > +  LK_DECLARE_FIELD (task_struct, thread_group);
>> > +  LK_DECLARE_FIELD (task_struct, comm);
>> > +  LK_DECLARE_FIELD (task_struct, thread);
>> > +
>> > +  LK_DECLARE_FIELD (list_head, next);
>> > +  LK_DECLARE_FIELD (list_head, prev);
>> > +
>> > +  LK_DECLARE_FIELD (rq, curr);
>> > +
>> > +  LK_DECLARE_FIELD (cpumask, bits);
>> > +
>> > +  LK_DECLARE_ADDR (init_task);
>> > +  LK_DECLARE_ADDR (runqueues);
>> > +  LK_DECLARE_ADDR (__per_cpu_offset);
>> > +  LK_DECLARE_ADDR (init_mm);
>> > +
>> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);      /*
>> > linux 4.5+ */
>> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);        /*
>> > linux -4.4 */
>> > +  if (LK_ADDR (cpu_online_mask) == -1)
>> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
>> > +}
>> > +
>>
>> > +
>> > +/* Initialize linux kernel target.  */
>> > +
>> > +static void
>> > +init_linux_kernel_ops (void)
>> > +{
>> > +  struct target_ops *t;
>> > +
>> > +  if (linux_kernel_ops != NULL)
>> > +    return;
>> > +
>> > +  t = XCNEW (struct target_ops);
>> > +  t->to_shortname = "linux-kernel";
>> > +  t->to_longname = "linux kernel support";
>> > +  t->to_doc = "Adds support to debug the Linux kernel";
>> > +
>> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
>> > +
>> > +  t->to_open = lk_open;
>> > +  t->to_close = lk_close;
>> > +  t->to_detach = lk_detach;
>> > +  t->to_fetch_registers = lk_fetch_registers;
>> > +  t->to_update_thread_list = lk_update_thread_list;
>> > +  t->to_pid_to_str = lk_pid_to_str;
>> > +  t->to_thread_name = lk_thread_name;
>> > +
>> > +  t->to_stratum = thread_stratum;
>> > +  t->to_magic = OPS_MAGIC;
>> > +
>> > +  linux_kernel_ops = t;
>> > +
>> > +  add_target (t);
>> > +}
>> > +
>> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
>> > +extern initialize_file_ftype _initialize_linux_kernel;
>> > +
>> > +void
>> > +_initialize_linux_kernel (void)
>> > +{
>> > +  init_linux_kernel_ops ();
>> > +
>> > +  observer_attach_new_objfile (lk_observer_new_objfile);
>> > +  observer_attach_inferior_created (lk_observer_inferior_created);
>> > +}
>> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
>> > new file mode 100644
>> > index 0000000..292ef97
>> > --- /dev/null
>> > +++ b/gdb/lk-low.h
>> > @@ -0,0 +1,310 @@
>> > +/* Basic Linux kernel support, architecture independent.
>> > +
>> > +   Copyright (C) 2016 Free Software Foundation, Inc.
>> > +
>> > +   This file is part of GDB.
>> > +
>> > +   This program is free software; you can redistribute it and/or modify
>> > +   it under the terms of the GNU General Public License as published by
>> > +   the Free Software Foundation; either version 3 of the License, or
>> > +   (at your option) any later version.
>> > +
>> > +   This program is distributed in the hope that it will be useful,
>> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
>> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
>> > +   GNU General Public License for more details.
>> > +
>> > +   You should have received a copy of the GNU General Public License
>> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
>> > */ +
>> > +#ifndef __LK_LOW_H__
>> > +#define __LK_LOW_H__
>> > +
>> > +#include "target.h"
>> > +
>> > +extern struct target_ops *linux_kernel_ops;
>> > +
>> > +/* Copy constants defined in Linux kernel.  */
>> > +#define LK_TASK_COMM_LEN 16
>> > +#define LK_BITS_PER_BYTE 8
>> > +
>> > +/* Definitions used in linux kernel target.  */
>> > +#define LK_CPU_INVAL -1U
>> > +
>> > +/* Private data structs for this target.  */
>> > +/* Forward declarations.  */
>> > +struct lk_private_hooks;
>> > +struct lk_ptid_map;
>> > +
>> > +/* Short hand access to private data.  */
>> > +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
>> > +#define LK_HOOK (LK_PRIVATE->hooks)
>> > +
>> > +struct lk_private
>>
>> "private" here is a little confusing.  How about rename it to "linux_kernel"?
>
> I called it "private" as it is the targets private data stored in its to_data
> hook.  But I don't mind renaming it.  Especially ...
>
>> > +{
>> > +  /* Hashtab for needed addresses, structs and fields.  */
>> > +  htab_t data;
>> > +
>> > +  /* Linked list to map between cpu number and original ptid from target
>> > +     beneath.  */
>> > +  struct lk_ptid_map *old_ptid;
>> > +
>> > +  /* Hooks for architecture dependent functions.  */
>> > +  struct lk_private_hooks *hooks;
>> > +};
>> > +
>>
>> Secondly, can we change it to a class and function pointers in
>> lk_private_hooks become virtual functions.  gdbarch_lk_init_private
>> returns a pointer to an instance of sub-class of "linux_kernel".
>>
>> lk_init_private_data can be put the constructor of base class, to add
>> entries to "data", and sub-class (in each gdbarch) can add their own
>> specific stuff.
>
> ... when classifying the struct, which already is on my long ToDo-list.  This
> struct is a left over from when I started working on the project shortly before
> gdb-7.12 was released.  I didn't think that the C++-yfication would kick off
> that fast and started with plain C ...
>
> Thanks
> Philipp
>
>> > +
>> > +/* Functions to initialize private data.  Do not use directly, use the
>> > +   macros below instead.  */
>> > +
>> > +extern struct lk_private_data *lk_init_addr (const char *name,
>> > +                                        const char *alias, int
>> > silent); +extern struct lk_private_data *lk_init_struct (const char *name,
>> > +                                          const char *alias, int
>> > silent);
>>
>> > +
>> > +/* Definitions for architecture dependent hooks.  */
>> > +/* Hook to read registers from the target and supply their content
>> > +   to the regcache.  */
>> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
>> > +                                  struct target_ops *target,
>> > +                                  struct regcache *regcache,
>> > +                                  int regnum);
>> > +
>> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
>> > +   do not use the __per_cpu_offset array to determin the offset have to
>> > +   supply this hook.  */
>> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
>> > +
>> > +/* Hook to map a running task to a logical CPU.  Required if the target
>> > +   beneath uses a different PID as struct rq.  */
>> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct
>> > thread_info *ti); +
>> > +struct lk_private_hooks
>> > +{
>> > +  /* required */
>> > +  lk_hook_get_registers get_registers;
>> > +
>> > +  /* optional, required if __per_cpu_offset array is not used to determine
>> > +     offset.  */
>> > +  lk_hook_get_percpu_offset get_percpu_offset;
>> > +
>> > +  /* optional, required if the target beneath uses a different PID as
>> > struct
>> > +     rq.  */
>> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
>> > +};
>>
>
  
Philipp Rudo May 8, 2017, 11:22 a.m. UTC | #11
Hi Omair,

On Mon, 8 May 2017 04:54:16 +0500
Omair Javaid <omair.javaid@linaro.org> wrote:

> Hi Phillip,
> 
> Thanks for writing back. I hope you are feeling better now.

Thanks. It will take some more time for me to get 100% fit again but at least
the worst is over ...

> I am trying to manage our basic live thread implementation within the
> limits you have set out in your patches.
> 
> However I am interested in knowing what are your plans for immediate
> future like next couple of weeks.
>
> If you are not planning on making any particular design changes to the
> current version of your patches then probably I will continue working
> using your patches as base.

My current plan is to finish off the work that has piled up during the two
weeks I was sick.  After that I will clean up my kernel stack unwinder for
s390 so I have that finally gone (it already took way too much time).

From then I don't have a fixed plan.  On my bucket list there are some items
without particular order and different impact to the interfaces.  They are

* Rebase to current master.
  With all the C++-yfication this will most likely lead to some minor changes.

* C++-fy the target itself.
  As Yao mentioned in his mail [1] it would be better to class-ify
  struct lk_private to better fit the direction GDB is currently going in.
  In this process I would also get rid of some cleanups and further adopt the
  new C++ features.  Overall this will change some (but hopefully not
  many) interfaces.  The biggest change will most likely be from function
  hooks (in struct lk_private_hooks) to virtual class methods (in lk_private).

* Make LK_DECLARE_* macros more flexible.
  Currently lk_init_private_data aborts once any declared symbol cannot be
  found.  This also makes the whole target unusable if e.g. the kernel is
  compiled without CONFIG_MODULES as then some symbols needed for module
  support cannot be found.  My idea is to assign symbols to GDB features,
  e.g. module support, and only turn off those features if a symbol cannot
  be found.

* Design a proper CLI (i.e. functions like dmesg etc.).
  This will be needed if we want others to actually use the feature.  Shouldn't
  have any impact on you.

* Implement separate thread_lists.
  Allow every target to manage its own thread_list.  Heavy impact for you and
  a lot of work for me...

* Implement different target views.
  Allow the user to switch between different target views (e.g. linux_kernel
  and core/remote) and thus define the wanted level of abstraction.  Even
  worse than the separate thread_lists...
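
The "Make LK_DECLARE_* macros more flexible" item could be sketched roughly
as below: symbols are declared per feature, and a missing symbol only disables
the feature it belongs to instead of aborting the whole target.  The struct
name, the feature granularity, and the string-based lookup are all
hypothetical, just to show the degradation behaviour:

```cpp
#include <map>
#include <set>
#include <string>

/* Sketch of feature-gated symbol declaration.  'available' simulates the
   symbols the kernel image actually provides; in GDB this would be a real
   symbol-table lookup.  */
struct lk_feature_table
{
  /* Symbols the (simulated) kernel image provides.  */
  std::set<std::string> available;

  /* Feature name -> still usable.  */
  std::map<std::string, bool> enabled;

  /* Declare SYMBOL as needed by FEATURE.  A missing symbol disables
     only that feature; the target as a whole keeps working.  */
  void declare (const std::string &feature, const std::string &symbol)
  {
    if (enabled.find (feature) == enabled.end ())
      enabled[feature] = true;
    if (available.count (symbol) == 0)
      enabled[feature] = false;
  }
};
```

With this, a kernel built without CONFIG_MODULES would simply lose the
module-support commands instead of making the target unusable.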

Long story short you don't have to divert away from my patches.  Even if I
start working on the separate thread_lists next it will definitely take quite a
lot of time to implement.  So no matter what you will most likely have a working
patch before me ;)

I hope I answered all your questions.

Philipp

[1] https://sourceware.org/ml/gdb-patches/2017-05/msg00004.html

> Otherwise if you plan to make any further changes like going for a
> separate thread list implementation for all layers of targets then i
> can also divert away from your patches for a while untill next update
> is posted.
> 
> I am already diverting away from Peter's original implementation
> because of some basic limitations pointed out during previous reviews.
> I dont have reliable solution right now but trying to find one lets
> see if i can manage to upgrade this current hack for live threads as
> well.
> 
> --
> Omair.
> 
> On 3 May 2017 at 20:36, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> > Hi Yao,
> >
> >
> > On Tue, 02 May 2017 12:14:40 +0100
> > Yao Qi <qiyaoltc@gmail.com> wrote:
> >  
> >> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
> >>
> >> Hi Philipp,
> >>  
> >> > +/* Initialize architecture independent private data.  Must be called
> >> > +   _after_ symbol tables were initialized.  */
> >> > +
> >> > +static void
> >> > +lk_init_private_data ()
> >> > +{
> >> > +  if (LK_PRIVATE->data != NULL)
> >> > +    htab_empty (LK_PRIVATE->data);
> >> > +
> >> > +  LK_DECLARE_FIELD (task_struct, tasks);
> >> > +  LK_DECLARE_FIELD (task_struct, pid);
> >> > +  LK_DECLARE_FIELD (task_struct, tgid);
> >> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> >> > +  LK_DECLARE_FIELD (task_struct, comm);
> >> > +  LK_DECLARE_FIELD (task_struct, thread);
> >> > +
> >> > +  LK_DECLARE_FIELD (list_head, next);
> >> > +  LK_DECLARE_FIELD (list_head, prev);
> >> > +
> >> > +  LK_DECLARE_FIELD (rq, curr);
> >> > +
> >> > +  LK_DECLARE_FIELD (cpumask, bits);
> >> > +
> >> > +  LK_DECLARE_ADDR (init_task);
> >> > +  LK_DECLARE_ADDR (runqueues);
> >> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> >> > +  LK_DECLARE_ADDR (init_mm);
> >> > +
> >> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);      /*
> >> > linux 4.5+ */
> >> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);        /*
> >> > linux -4.4 */
> >> > +  if (LK_ADDR (cpu_online_mask) == -1)
> >> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
> >> > +}
> >> > +  
> >>  
> >> > +
> >> > +/* Initialize linux kernel target.  */
> >> > +
> >> > +static void
> >> > +init_linux_kernel_ops (void)
> >> > +{
> >> > +  struct target_ops *t;
> >> > +
> >> > +  if (linux_kernel_ops != NULL)
> >> > +    return;
> >> > +
> >> > +  t = XCNEW (struct target_ops);
> >> > +  t->to_shortname = "linux-kernel";
> >> > +  t->to_longname = "linux kernel support";
> >> > +  t->to_doc = "Adds support to debug the Linux kernel";
> >> > +
> >> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
> >> > +
> >> > +  t->to_open = lk_open;
> >> > +  t->to_close = lk_close;
> >> > +  t->to_detach = lk_detach;
> >> > +  t->to_fetch_registers = lk_fetch_registers;
> >> > +  t->to_update_thread_list = lk_update_thread_list;
> >> > +  t->to_pid_to_str = lk_pid_to_str;
> >> > +  t->to_thread_name = lk_thread_name;
> >> > +
> >> > +  t->to_stratum = thread_stratum;
> >> > +  t->to_magic = OPS_MAGIC;
> >> > +
> >> > +  linux_kernel_ops = t;
> >> > +
> >> > +  add_target (t);
> >> > +}
> >> > +
> >> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
> >> > +extern initialize_file_ftype _initialize_linux_kernel;
> >> > +
> >> > +void
> >> > +_initialize_linux_kernel (void)
> >> > +{
> >> > +  init_linux_kernel_ops ();
> >> > +
> >> > +  observer_attach_new_objfile (lk_observer_new_objfile);
> >> > +  observer_attach_inferior_created (lk_observer_inferior_created);
> >> > +}
> >> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> >> > new file mode 100644
> >> > index 0000000..292ef97
> >> > --- /dev/null
> >> > +++ b/gdb/lk-low.h
> >> > @@ -0,0 +1,310 @@
> >> > +/* Basic Linux kernel support, architecture independent.
> >> > +
> >> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> >> > +
> >> > +   This file is part of GDB.
> >> > +
> >> > +   This program is free software; you can redistribute it and/or modify
> >> > +   it under the terms of the GNU General Public License as published by
> >> > +   the Free Software Foundation; either version 3 of the License, or
> >> > +   (at your option) any later version.
> >> > +
> >> > +   This program is distributed in the hope that it will be useful,
> >> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> >> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> >> > +   GNU General Public License for more details.
> >> > +
> >> > +   You should have received a copy of the GNU General Public License
> >> > +   along with this program.  If not, see <http://www.gnu.org/licenses/>.
> >> > */ +
> >> > +#ifndef __LK_LOW_H__
> >> > +#define __LK_LOW_H__
> >> > +
> >> > +#include "target.h"
> >> > +
> >> > +extern struct target_ops *linux_kernel_ops;
> >> > +
> >> > +/* Copy constants defined in Linux kernel.  */
> >> > +#define LK_TASK_COMM_LEN 16
> >> > +#define LK_BITS_PER_BYTE 8
> >> > +
> >> > +/* Definitions used in linux kernel target.  */
> >> > +#define LK_CPU_INVAL -1U
> >> > +
> >> > +/* Private data structs for this target.  */
> >> > +/* Forward declarations.  */
> >> > +struct lk_private_hooks;
> >> > +struct lk_ptid_map;
> >> > +
> >> > +/* Short hand access to private data.  */
> >> > +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> >> > +#define LK_HOOK (LK_PRIVATE->hooks)
> >> > +
> >> > +struct lk_private  
> >>
> >> "private" here is a little confusing.  How about rename it to
> >> "linux_kernel"?  
> >
> > I called it "private" as it is the targets private data stored in its
> > to_data hook.  But I don't mind renaming it.  Especially ...
> >  
> >> > +{
> >> > +  /* Hashtab for needed addresses, structs and fields.  */
> >> > +  htab_t data;
> >> > +
> >> > +  /* Linked list to map between cpu number and original ptid from target
> >> > +     beneath.  */
> >> > +  struct lk_ptid_map *old_ptid;
> >> > +
> >> > +  /* Hooks for architecture dependent functions.  */
> >> > +  struct lk_private_hooks *hooks;
> >> > +};
> >> > +  
> >>
> >> Secondly, can we change it to a class and function pointers in
> >> lk_private_hooks become virtual functions.  gdbarch_lk_init_private
> >> returns a pointer to an instance of sub-class of "linux_kernel".
> >>
> >> lk_init_private_data can be put the constructor of base class, to add
> >> entries to "data", and sub-class (in each gdbarch) can add their own
> >> specific stuff.  
> >
> > ... when classifying the struct, which already is on my long ToDo-list.
> > This struct is a left over from when I started working on the project
> > shortly before gdb-7.12 was released.  I didn't think that the
> > C++-yfication would kick off that fast and started with plain C ...
> >
> > Thanks
> > Philipp
> >  
> >> > +
> >> > +/* Functions to initialize private data.  Do not use directly, use the
> >> > +   macros below instead.  */
> >> > +
> >> > +extern struct lk_private_data *lk_init_addr (const char *name,
> >> > +                                        const char *alias, int
> >> > silent); +extern struct lk_private_data *lk_init_struct (const char
> >> > *name,
> >> > +                                          const char *alias, int
> >> > silent);  
> >>  
> >> > +
> >> > +/* Definitions for architecture dependent hooks.  */
> >> > +/* Hook to read registers from the target and supply their content
> >> > +   to the regcache.  */
> >> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> >> > +                                  struct target_ops *target,
> >> > +                                  struct regcache *regcache,
> >> > +                                  int regnum);
> >> > +
> >> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures
> >> > that
> >> > +   do not use the __per_cpu_offset array to determin the offset have to
> >> > +   supply this hook.  */
> >> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> >> > +
> >> > +/* Hook to map a running task to a logical CPU.  Required if the target
> >> > +   beneath uses a different PID as struct rq.  */
> >> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct
> >> > thread_info *ti); +
> >> > +struct lk_private_hooks
> >> > +{
> >> > +  /* required */
> >> > +  lk_hook_get_registers get_registers;
> >> > +
> >> > +  /* optional, required if __per_cpu_offset array is not used to
> >> > determine
> >> > +     offset.  */
> >> > +  lk_hook_get_percpu_offset get_percpu_offset;
> >> > +
> >> > +  /* optional, required if the target beneath uses a different PID as
> >> > struct
> >> > +     rq.  */
> >> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> >> > +};  
> >>  
> >  
>
  
Philipp Rudo May 10, 2017, 9:03 a.m. UTC | #12
Hi Peter,

On Tue, 9 May 2017 09:38:03 +0100
Peter Griffin <peter.griffin@linaro.org> wrote:

> Hi Philipp,
> 
> As Omair mentioned previously, I've moved teams in Linaro, so I'm not
> working on GDB Linux awareness directly anymore.  But I'm still following

yes, Omair already mentioned it and I was sad to hear it.  I wish you good
luck and hope you enjoy your new challenges.

> along in the background, and would love to see this feature merged.

Me too.

> On 8 May 2017 at 12:22, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> 
> > Hi Omair,
> >
> > On Mon, 8 May 2017 04:54:16 +0500
> > Omair Javaid <omair.javaid@linaro.org> wrote:
> >  
> > > Hi Phillip,
> > >
> > > Thanks for writing back. I hope you are feeling better now.  
> >
> > Thanks. It will take some more time for me to get 100% fit again but at
> > least
> > the worst is over ...
> >  
> 
> Good to hear.

Thanks a lot!

> >  
> > > I am trying to manage our basic live thread implementation within the
> > > limits you have set out in your patches.
> > >
> > > However I am interested in knowing what are your plans for immediate
> > > future like next couple of weeks.
> > >
> > > If you are not planning on making any particular design changes to the
> > > current version of your patches then probably I will continue working
> > > using your patches as base.  
> >
> > My current plan is to finish off the work that has piled up during the two
> > weeks I was sick.  After that I will clean up my kernel stack unwinder for
> > s390 so I have that finally gone (it already took way too much time).
> >
> > From then I don't have a fixed plan.  On my bucket list there are some
> > items
> > without particular order and different impact to the interfaces.  They are
> >
> > * Rebase to current master.
> >   With all the C++-yfication this will most likely lead to some minor
> > changes.
> >
> > * C++-fy the target itself.
> >   As Yao mentioned in his mail [1] it would be better to classify
> >   struct lk_private to better fit the direction GDB is currently going to.
> >   In this process I would also get rid of some cleanups and further adept
> > the
> >   new C++ features.  Overall this will change some (but hopefully not
> >   many) interfaces.  The biggest change will most likely be from function
> >   hooks (in struct lk_private_hooks) to virtual class methods (in
> > lk_private).
> >
> > * Make LK_DECLARE_* macros more flexible.
> >   Currently lk_init_private_data aborts once any declared symbol cannot be
> >   found.  This also makes the whole target unusable if e.g. the kernel is
> >   compiled without CONFIG_MODULES as then some symbols needed for module
> >   support cannot be found.  My idea is to assign symbols to GDB-features
> > e.g.
> >   module support and only turn off those features if a symbol could not be
> >   found.
> >
> > * Design a proper CLI (i.e. functions like dmesg etc.).
> >   This will be needed if we want others to actually use the feature.
> > Shouldn't
> >   have any impact on you.
> >  
> 
> For dmesg and other OS helpers, don't you just want to rely on the GDB
> python
> implementations already in the kernel source?
> 
> The idea being that you reduce the cross dependencies between the kernel and
> GDB by having this code live in the kernel source tree. Obviously there are
> still
> quite a few dependencies for the thread parsing and kernel modules support
> in the Linux kernel thread layer already, but having what you can in python
> still
> reduces the number of dependencies and ongoing maintenance, I think.
> 
> Also in theory the python and kernel data structures should move in
> lockstep for
> a given release.

Yes, this is one possibility.  Although I must admit that I would prefer to
have at least the "core helpers" implemented in GDB.  This would guarantee
core functionality even when you don't have the corresponding kernel sources
(when a customer sends a dump of an older distro it can sometimes be hard to
get the correct sources, especially when the distro patches the kernel...).

Furthermore, an implementation within GDB can access GDB's internal
state more easily.  For example my dummy implementation of lsmod also shows
whether GDB couldn't find debug information for a module.  This also reduces
code duplication, as the commands can access the infrastructure we need anyway
to get kernel awareness going.  Of course this could also be achieved by
extending GDB's Python interface.  Finding out the "best" way to get a
consistent user interface is what I meant by "design".

> >
> > * Implement separate thread_lists.
> >   Allow every target to manage its own thread_list.  Heavy impact for you
> > and a
> >   lot work for me...
> >  
> 
> That would be very neat!

For me this is not only neat but the only clean solution.  Otherwise it would
be just another workaround for the global variables GDB uses.

Philipp

> >
> > * Implement different target views.
> >   Allow the user to switch between different target views (e.g.
> > linux_kernel
> >   and core/remote) and thus define the wanted level of abstraction.  Even
> > worse
> >   then the separate thread_lists...
> >  
> 
> as would this :)
> 
> regards,
> 
> Peter.
> 
> 
> >
> > Long story short you don't have to divert away from my patches.  Even if I
> > start working on the separate thread_lists next it will definitely take
> > quite a
> > lot of time to implement.  So no matter what you will most likely have a
> > working
> > patch before me ;)
> >
> > I hope I answered all your questions.
> >
> > Philipp
> >
> > [1] https://sourceware.org/ml/gdb-patches/2017-05/msg00004.html
> >  
> > > Otherwise if you plan to make any further changes like going for a
> > > separate thread list implementation for all layers of targets then i
> > > can also divert away from your patches for a while untill next update
> > > is posted.
> > >
> > > I am already diverting away from Peter's original implementation
> > > because of some basic limitations pointed out during previous reviews.
> > > I dont have reliable solution right now but trying to find one lets
> > > see if i can manage to upgrade this current hack for live threads as
> > > well.
> > >
> > > --
> > > Omair.
> > >
> > > On 3 May 2017 at 20:36, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> > > > Hi Yao,
> > > >
> > > >
> > > > On Tue, 02 May 2017 12:14:40 +0100
> > > > Yao Qi <qiyaoltc@gmail.com> wrote:
> > > >  
> > > >> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
> > > >>
> > > >> Hi Philipp,
> > > >>  
> > > >> > +/* Initialize architecture independent private data.  Must be  
> > called  
> > > >> > +   _after_ symbol tables were initialized.  */
> > > >> > +
> > > >> > +static void
> > > >> > +lk_init_private_data ()
> > > >> > +{
> > > >> > +  if (LK_PRIVATE->data != NULL)
> > > >> > +    htab_empty (LK_PRIVATE->data);
> > > >> > +
> > > >> > +  LK_DECLARE_FIELD (task_struct, tasks);
> > > >> > +  LK_DECLARE_FIELD (task_struct, pid);
> > > >> > +  LK_DECLARE_FIELD (task_struct, tgid);
> > > >> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> > > >> > +  LK_DECLARE_FIELD (task_struct, comm);
> > > >> > +  LK_DECLARE_FIELD (task_struct, thread);
> > > >> > +
> > > >> > +  LK_DECLARE_FIELD (list_head, next);
> > > >> > +  LK_DECLARE_FIELD (list_head, prev);
> > > >> > +
> > > >> > +  LK_DECLARE_FIELD (rq, curr);
> > > >> > +
> > > >> > +  LK_DECLARE_FIELD (cpumask, bits);
> > > >> > +
> > > >> > +  LK_DECLARE_ADDR (init_task);
> > > >> > +  LK_DECLARE_ADDR (runqueues);
> > > >> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> > > >> > +  LK_DECLARE_ADDR (init_mm);
> > > >> > +
> > > >> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);  
> > /*  
> > > >> > linux 4.5+ */
> > > >> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);  
> > /*  
> > > >> > linux -4.4 */
> > > >> > +  if (LK_ADDR (cpu_online_mask) == -1)
> > > >> > +    error (_("Could not find address cpu_online_mask.  
> > Aborting."));  
> > > >> > +}
> > > >> > +  
> > > >>  
> > > >> > +
> > > >> > +/* Initialize linux kernel target.  */
> > > >> > +
> > > >> > +static void
> > > >> > +init_linux_kernel_ops (void)
> > > >> > +{
> > > >> > +  struct target_ops *t;
> > > >> > +
> > > >> > +  if (linux_kernel_ops != NULL)
> > > >> > +    return;
> > > >> > +
> > > >> > +  t = XCNEW (struct target_ops);
> > > >> > +  t->to_shortname = "linux-kernel";
> > > >> > +  t->to_longname = "linux kernel support";
> > > >> > +  t->to_doc = "Adds support to debug the Linux kernel";
> > > >> > +
> > > >> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
> > > >> > +
> > > >> > +  t->to_open = lk_open;
> > > >> > +  t->to_close = lk_close;
> > > >> > +  t->to_detach = lk_detach;
> > > >> > +  t->to_fetch_registers = lk_fetch_registers;
> > > >> > +  t->to_update_thread_list = lk_update_thread_list;
> > > >> > +  t->to_pid_to_str = lk_pid_to_str;
> > > >> > +  t->to_thread_name = lk_thread_name;
> > > >> > +
> > > >> > +  t->to_stratum = thread_stratum;
> > > >> > +  t->to_magic = OPS_MAGIC;
> > > >> > +
> > > >> > +  linux_kernel_ops = t;
> > > >> > +
> > > >> > +  add_target (t);
> > > >> > +}
> > > >> > +
> > > >> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
> > > >> > +extern initialize_file_ftype _initialize_linux_kernel;
> > > >> > +
> > > >> > +void
> > > >> > +_initialize_linux_kernel (void)
> > > >> > +{
> > > >> > +  init_linux_kernel_ops ();
> > > >> > +
> > > >> > +  observer_attach_new_objfile (lk_observer_new_objfile);
> > > >> > +  observer_attach_inferior_created (lk_observer_inferior_created);
> > > >> > +}
> > > >> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> > > >> > new file mode 100644
> > > >> > index 0000000..292ef97
> > > >> > --- /dev/null
> > > >> > +++ b/gdb/lk-low.h
> > > >> > @@ -0,0 +1,310 @@
> > > >> > +/* Basic Linux kernel support, architecture independent.
> > > >> > +
> > > >> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > > >> > +
> > > >> > +   This file is part of GDB.
> > > >> > +
> > > >> > +   This program is free software; you can redistribute it and/or  
> > modify  
> > > >> > +   it under the terms of the GNU General Public License as  
> > published by  
> > > >> > +   the Free Software Foundation; either version 3 of the License,  
> > or  
> > > >> > +   (at your option) any later version.
> > > >> > +
> > > >> > +   This program is distributed in the hope that it will be useful,
> > > >> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > >> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > > >> > +   GNU General Public License for more details.
> > > >> > +
> > > >> > +   You should have received a copy of the GNU General Public  
> > License  
> > > >> > +   along with this program.  If not, see <  
> > http://www.gnu.org/licenses/>.  
> > > >> > */ +
> > > >> > +#ifndef __LK_LOW_H__
> > > >> > +#define __LK_LOW_H__
> > > >> > +
> > > >> > +#include "target.h"
> > > >> > +
> > > >> > +extern struct target_ops *linux_kernel_ops;
> > > >> > +
> > > >> > +/* Copy constants defined in Linux kernel.  */
> > > >> > +#define LK_TASK_COMM_LEN 16
> > > >> > +#define LK_BITS_PER_BYTE 8
> > > >> > +
> > > >> > +/* Definitions used in linux kernel target.  */
> > > >> > +#define LK_CPU_INVAL -1U
> > > >> > +
> > > >> > +/* Private data structs for this target.  */
> > > >> > +/* Forward declarations.  */
> > > >> > +struct lk_private_hooks;
> > > >> > +struct lk_ptid_map;
> > > >> > +
> > > >> > +/* Short hand access to private data.  */
> > > >> > +#define LK_PRIVATE ((struct lk_private *)  
> > linux_kernel_ops->to_data)  
> > > >> > +#define LK_HOOK (LK_PRIVATE->hooks)
> > > >> > +
> > > >> > +struct lk_private  
> > > >>
> > > >> "private" here is a little confusing.  How about rename it to
> > > >> "linux_kernel"?  
> > > >
> > > > I called it "private" as it is the targets private data stored in its
> > > > to_data hook.  But I don't mind renaming it.  Especially ...
> > > >  
> > > >> > +{
> > > >> > +  /* Hashtab for needed addresses, structs and fields.  */
> > > >> > +  htab_t data;
> > > >> > +
> > > >> > +  /* Linked list to map between cpu number and original ptid from  
> > target  
> > > >> > +     beneath.  */
> > > >> > +  struct lk_ptid_map *old_ptid;
> > > >> > +
> > > >> > +  /* Hooks for architecture dependent functions.  */
> > > >> > +  struct lk_private_hooks *hooks;
> > > >> > +};
> > > >> > +  
> > > >>
> > > >> Secondly, can we change it to a class and function pointers in
> > > >> lk_private_hooks become virtual functions.  gdbarch_lk_init_private
> > > >> returns a pointer to an instance of sub-class of "linux_kernel".
> > > >>
> > > >> lk_init_private_data can be put the constructor of base class, to add
> > > >> entries to "data", and sub-class (in each gdbarch) can add their own
> > > >> specific stuff.  
> > > >
> > > > ... when classifying the struct, which already is on my long ToDo-list.
> > > > This struct is a left over from when I started working on the project
> > > > shortly before gdb-7.12 was released.  I didn't think that the
> > > > C++-yfication would kick off that fast and started with plain C ...
> > > >
> > > > Thanks
> > > > Philipp
> > > >  
> > > >> > +
> > > >> > +/* Functions to initialize private data.  Do not use directly, use  
> > the  
> > > >> > +   macros below instead.  */
> > > >> > +
> > > >> > +extern struct lk_private_data *lk_init_addr (const char *name,
> > > >> > +                                        const char *alias, int
> > > >> > silent); +extern struct lk_private_data *lk_init_struct (const char
> > > >> > *name,
> > > >> > +                                          const char *alias, int
> > > >> > silent);  
> > > >>  
> > > >> > +
> > > >> > +/* Definitions for architecture dependent hooks.  */
> > > >> > +/* Hook to read registers from the target and supply their content
> > > >> > +   to the regcache.  */
> > > >> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> > > >> > +                                  struct target_ops *target,
> > > >> > +                                  struct regcache *regcache,
> > > >> > +                                  int regnum);
> > > >> > +
> > > >> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only  
> > architectures  
> > > >> > that
> > > >> > +   do not use the __per_cpu_offset array to determin the offset  
> > have to  
> > > >> > +   supply this hook.  */
> > > >> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> > > >> > +
> > > >> > +/* Hook to map a running task to a logical CPU.  Required if the  
> > target  
> > > >> > +   beneath uses a different PID as struct rq.  */
> > > >> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct
> > > >> > thread_info *ti); +
> > > >> > +struct lk_private_hooks
> > > >> > +{
> > > >> > +  /* required */
> > > >> > +  lk_hook_get_registers get_registers;
> > > >> > +
> > > >> > +  /* optional, required if __per_cpu_offset array is not used to
> > > >> > determine
> > > >> > +     offset.  */
> > > >> > +  lk_hook_get_percpu_offset get_percpu_offset;
> > > >> > +
> > > >> > +  /* optional, required if the target beneath uses a different PID  
> > as  
> > > >> > struct
> > > >> > +     rq.  */
> > > >> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> > > >> > +};  
> > > >>  
> > > >  
> > >  
> >
> >
  
Philipp Rudo May 10, 2017, 9:36 a.m. UTC | #13
Hi Omair,

I forgot one thing on my bucket list.

* The way the module support maps module names to the paths of .ko files
  should be improved.
  Currently I parse <solib-search-path>/modules.order (usually
  <solib-search-path> = /lib/modules/$(uname -r)) to do so, and load the path
  relative to <solib-search-path>.  The problem is that on Ubuntu the files
  in /lib/modules/... are stripped of their debuginfo, and thus GDB
  complains that it cannot load the modules' symbols.  The full files
  (including debuginfo) can be found under
  /usr/lib/debug/lib/modules/$(uname -r)/, but this directory doesn't contain
  the modules.order file ...
  There is a simple workaround by copying modules.order to /usr/lib/debug/...,
  nevertheless it would be nicer if the mapping were more robust.

Philipp

On Mon, 8 May 2017 13:22:04 +0200
Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:

> Hi Omair,
> 
> On Mon, 8 May 2017 04:54:16 +0500
> Omair Javaid <omair.javaid@linaro.org> wrote:
> 
> > Hi Phillip,
> > 
> > Thanks for writing back. I hope you are feeling better now.  
> 
> Thanks. It will take some more time for me to get 100% fit again but at least
> the worst is over ...
> 
> > I am trying to manage our basic live thread implementation within the
> > limits you have set out in your patches.
> > 
> > However I am interested in knowing what are your plans for immediate
> > future like next couple of weeks.
> >
> > If you are not planning on making any particular design changes to the
> > current version of your patches then probably I will continue working
> > using your patches as base.  
> 
> My current plan is to finish off the work that has piled up during the two
> weeks I was sick.  After that I will clean up my kernel stack unwinder for
> s390 so I have that finally gone (it already took way too much time).
> 
> From then I don't have a fixed plan.  On my bucket list there are some items
> without particular order and different impact to the interfaces.  They are
> 
> * Rebase to current master.
>   With all the C++-yfication this will most likely lead to some minor changes.
> 
> * C++-fy the target itself.
>   As Yao mentioned in his mail [1] it would be better to classify
>   struct lk_private to better fit the direction GDB is currently going to.
>   In this process I would also get rid of some cleanups and further adopt the
>   new C++ features.  Overall this will change some (but hopefully not
>   many) interfaces.  The biggest change will most likely be from function
>   hooks (in struct lk_private_hooks) to virtual class methods (in lk_private).
> 
> * Make LK_DECLARE_* macros more flexible.
>   Currently lk_init_private_data aborts once any declared symbol cannot be
>   found.  This also makes the whole target unusable if e.g. the kernel is
>   compiled without CONFIG_MODULES as then some symbols needed for module
>   support cannot be found.  My idea is to assign symbols to GDB-features e.g.
>   module support and only turn off those features if a symbol could not be
>   found.
> 
> * Design a proper CLI (i.e. functions like dmesg etc.).
>   This will be needed if we want others to actually use the feature.
> Shouldn't have any impact on you.
> 
> * Implement separate thread_lists.
>   Allow every target to manage its own thread_list.  Heavy impact for you and
> a lot of work for me...
> 
> * Implement different target views.
>   Allow the user to switch between different target views (e.g. linux_kernel
>   and core/remote) and thus define the wanted level of abstraction.  Even
> worse then the separate thread_lists...
> 
> Long story short you don't have to divert away from my patches.  Even if I
> start working on the separate thread_lists next it will definitely take quite
> a lot of time to implement.  So no matter what you will most likely have a
> working patch before me ;)
> 
> I hope I answered all your questions.
> 
> Philipp
> 
> [1] https://sourceware.org/ml/gdb-patches/2017-05/msg00004.html
> 
> > Otherwise if you plan to make any further changes like going for a
> > separate thread list implementation for all layers of targets then i
> > can also divert away from your patches for a while untill next update
> > is posted.
> > 
> > I am already diverting away from Peter's original implementation
> > because of some basic limitations pointed out during previous reviews.
> > I dont have reliable solution right now but trying to find one lets
> > see if i can manage to upgrade this current hack for live threads as
> > well.
> > 
> > --
> > Omair.
> > 
> > On 3 May 2017 at 20:36, Philipp Rudo <prudo@linux.vnet.ibm.com> wrote:
> > > Hi Yao,
> > >
> > >
> > > On Tue, 02 May 2017 12:14:40 +0100
> > > Yao Qi <qiyaoltc@gmail.com> wrote:
> > >    
> > >> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
> > >>
> > >> Hi Philipp,
> > >>    
> > >> > +/* Initialize architecture independent private data.  Must be called
> > >> > +   _after_ symbol tables were initialized.  */
> > >> > +
> > >> > +static void
> > >> > +lk_init_private_data ()
> > >> > +{
> > >> > +  if (LK_PRIVATE->data != NULL)
> > >> > +    htab_empty (LK_PRIVATE->data);
> > >> > +
> > >> > +  LK_DECLARE_FIELD (task_struct, tasks);
> > >> > +  LK_DECLARE_FIELD (task_struct, pid);
> > >> > +  LK_DECLARE_FIELD (task_struct, tgid);
> > >> > +  LK_DECLARE_FIELD (task_struct, thread_group);
> > >> > +  LK_DECLARE_FIELD (task_struct, comm);
> > >> > +  LK_DECLARE_FIELD (task_struct, thread);
> > >> > +
> > >> > +  LK_DECLARE_FIELD (list_head, next);
> > >> > +  LK_DECLARE_FIELD (list_head, prev);
> > >> > +
> > >> > +  LK_DECLARE_FIELD (rq, curr);
> > >> > +
> > >> > +  LK_DECLARE_FIELD (cpumask, bits);
> > >> > +
> > >> > +  LK_DECLARE_ADDR (init_task);
> > >> > +  LK_DECLARE_ADDR (runqueues);
> > >> > +  LK_DECLARE_ADDR (__per_cpu_offset);
> > >> > +  LK_DECLARE_ADDR (init_mm);
> > >> > +
> > >> > +  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);      /*
> > >> > linux 4.5+ */
> > >> > +  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);        /*
> > >> > linux -4.4 */
> > >> > +  if (LK_ADDR (cpu_online_mask) == -1)
> > >> > +    error (_("Could not find address cpu_online_mask.  Aborting."));
> > >> > +}
> > >> > +    
> > >>    
> > >> > +
> > >> > +/* Initialize linux kernel target.  */
> > >> > +
> > >> > +static void
> > >> > +init_linux_kernel_ops (void)
> > >> > +{
> > >> > +  struct target_ops *t;
> > >> > +
> > >> > +  if (linux_kernel_ops != NULL)
> > >> > +    return;
> > >> > +
> > >> > +  t = XCNEW (struct target_ops);
> > >> > +  t->to_shortname = "linux-kernel";
> > >> > +  t->to_longname = "linux kernel support";
> > >> > +  t->to_doc = "Adds support to debug the Linux kernel";
> > >> > +
> > >> > +  /* set t->to_data = struct lk_private in lk_init_private.  */
> > >> > +
> > >> > +  t->to_open = lk_open;
> > >> > +  t->to_close = lk_close;
> > >> > +  t->to_detach = lk_detach;
> > >> > +  t->to_fetch_registers = lk_fetch_registers;
> > >> > +  t->to_update_thread_list = lk_update_thread_list;
> > >> > +  t->to_pid_to_str = lk_pid_to_str;
> > >> > +  t->to_thread_name = lk_thread_name;
> > >> > +
> > >> > +  t->to_stratum = thread_stratum;
> > >> > +  t->to_magic = OPS_MAGIC;
> > >> > +
> > >> > +  linux_kernel_ops = t;
> > >> > +
> > >> > +  add_target (t);
> > >> > +}
> > >> > +
> > >> > +/* Provide a prototype to silence -Wmissing-prototypes.  */
> > >> > +extern initialize_file_ftype _initialize_linux_kernel;
> > >> > +
> > >> > +void
> > >> > +_initialize_linux_kernel (void)
> > >> > +{
> > >> > +  init_linux_kernel_ops ();
> > >> > +
> > >> > +  observer_attach_new_objfile (lk_observer_new_objfile);
> > >> > +  observer_attach_inferior_created (lk_observer_inferior_created);
> > >> > +}
> > >> > diff --git a/gdb/lk-low.h b/gdb/lk-low.h
> > >> > new file mode 100644
> > >> > index 0000000..292ef97
> > >> > --- /dev/null
> > >> > +++ b/gdb/lk-low.h
> > >> > @@ -0,0 +1,310 @@
> > >> > +/* Basic Linux kernel support, architecture independent.
> > >> > +
> > >> > +   Copyright (C) 2016 Free Software Foundation, Inc.
> > >> > +
> > >> > +   This file is part of GDB.
> > >> > +
> > >> > +   This program is free software; you can redistribute it and/or
> > >> > modify
> > >> > +   it under the terms of the GNU General Public License as published
> > >> > by
> > >> > +   the Free Software Foundation; either version 3 of the License, or
> > >> > +   (at your option) any later version.
> > >> > +
> > >> > +   This program is distributed in the hope that it will be useful,
> > >> > +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> > >> > +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > >> > +   GNU General Public License for more details.
> > >> > +
> > >> > +   You should have received a copy of the GNU General Public License
> > >> > +   along with this program.  If not, see
> > >> > <http://www.gnu.org/licenses/>. */ +
> > >> > +#ifndef __LK_LOW_H__
> > >> > +#define __LK_LOW_H__
> > >> > +
> > >> > +#include "target.h"
> > >> > +
> > >> > +extern struct target_ops *linux_kernel_ops;
> > >> > +
> > >> > +/* Copy constants defined in Linux kernel.  */
> > >> > +#define LK_TASK_COMM_LEN 16
> > >> > +#define LK_BITS_PER_BYTE 8
> > >> > +
> > >> > +/* Definitions used in linux kernel target.  */
> > >> > +#define LK_CPU_INVAL -1U
> > >> > +
> > >> > +/* Private data structs for this target.  */
> > >> > +/* Forward declarations.  */
> > >> > +struct lk_private_hooks;
> > >> > +struct lk_ptid_map;
> > >> > +
> > >> > +/* Short hand access to private data.  */
> > >> > +#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
> > >> > +#define LK_HOOK (LK_PRIVATE->hooks)
> > >> > +
> > >> > +struct lk_private    
> > >>
> > >> "private" here is a little confusing.  How about rename it to
> > >> "linux_kernel"?    
> > >
> > > I called it "private" as it is the targets private data stored in its
> > > to_data hook.  But I don't mind renaming it.  Especially ...
> > >    
> > >> > +{
> > >> > +  /* Hashtab for needed addresses, structs and fields.  */
> > >> > +  htab_t data;
> > >> > +
> > >> > +  /* Linked list to map between cpu number and original ptid from
> > >> > target
> > >> > +     beneath.  */
> > >> > +  struct lk_ptid_map *old_ptid;
> > >> > +
> > >> > +  /* Hooks for architecture dependent functions.  */
> > >> > +  struct lk_private_hooks *hooks;
> > >> > +};
> > >> > +    
> > >>
> > >> Secondly, can we change it to a class and function pointers in
> > >> lk_private_hooks become virtual functions.  gdbarch_lk_init_private
> > >> returns a pointer to an instance of sub-class of "linux_kernel".
> > >>
> > >> lk_init_private_data can be put the constructor of base class, to add
> > >> entries to "data", and sub-class (in each gdbarch) can add their own
> > >> specific stuff.    
> > >
> > > ... when classifying the struct, which already is on my long ToDo-list.
> > > This struct is a left over from when I started working on the project
> > > shortly before gdb-7.12 was released.  I didn't think that the
> > > C++-yfication would kick off that fast and started with plain C ...
> > >
> > > Thanks
> > > Philipp
> > >    
> > >> > +
> > >> > +/* Functions to initialize private data.  Do not use directly, use the
> > >> > +   macros below instead.  */
> > >> > +
> > >> > +extern struct lk_private_data *lk_init_addr (const char *name,
> > >> > +                                        const char *alias, int
> > >> > silent); +extern struct lk_private_data *lk_init_struct (const char
> > >> > *name,
> > >> > +                                          const char *alias, int
> > >> > silent);    
> > >>    
> > >> > +
> > >> > +/* Definitions for architecture dependent hooks.  */
> > >> > +/* Hook to read registers from the target and supply their content
> > >> > +   to the regcache.  */
> > >> > +typedef void (*lk_hook_get_registers) (CORE_ADDR task,
> > >> > +                                  struct target_ops *target,
> > >> > +                                  struct regcache *regcache,
> > >> > +                                  int regnum);
> > >> > +
> > >> > +/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures
> > >> > that
> > >> > +   do not use the __per_cpu_offset array to determin the offset have
> > >> > to
> > >> > +   supply this hook.  */
> > >> > +typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
> > >> > +
> > >> > +/* Hook to map a running task to a logical CPU.  Required if the
> > >> > target
> > >> > +   beneath uses a different PID as struct rq.  */
> > >> > +typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct
> > >> > thread_info *ti); +
> > >> > +struct lk_private_hooks
> > >> > +{
> > >> > +  /* required */
> > >> > +  lk_hook_get_registers get_registers;
> > >> > +
> > >> > +  /* optional, required if __per_cpu_offset array is not used to
> > >> > determine
> > >> > +     offset.  */
> > >> > +  lk_hook_get_percpu_offset get_percpu_offset;
> > >> > +
> > >> > +  /* optional, required if the target beneath uses a different PID as
> > >> > struct
> > >> > +     rq.  */
> > >> > +  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
> > >> > +};    
> > >>    
> > >    
> >   
>
  
Yao Qi May 19, 2017, 8:45 a.m. UTC | #14
Philipp Rudo <prudo@linux.vnet.ibm.com> writes:

> * Implement separate thread_lists.
>   Allow every target to manage its own thread_list.  Heavy impact for you and a
>   lot work for me...

Hi Philipp,
before you spend a lot of time implementing this, it would be better to start
an RFC discussion at an appropriate time, so that people can understand
why we need this change.
  
Andreas Arnez May 19, 2017, 3:24 p.m. UTC | #15
On Fri, May 19 2017, Yao Qi wrote:

> Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
>
>> * Implement separate thread_lists.
>>   Allow every target to manage its own thread_list.  Heavy impact for you and a
>>   lot work for me...
>
> Hi Philipp,
> before you spend a lot of time implementing this, it is better to start
> an RFC discussion on an appropriate time, so that people can well
> understand why do we need this change.

FYI, I have just started investigating this a bit.

The reason for multiple thread lists has been covered in some of the
discussions already, but let me give my two cents.

In the kernel live debug scenario we conceptually have two different
thread models layered on top of each other:

* LK target: Thread == Linux kernel thread
* Remote target: Thread == CPU

If we represent CPUs and Linux threads in a single thread list, then it
becomes difficult to maintain consistency between the LK target and the
remote target: Who owns which parts of the thread_info?  How to
guarantee unique ptids across the board, etc.?  Not to speak of the
confusing "info threads" output if CPUs and threads are munged
together.

Unfortunately many places in GDB assume that there is just one thread
list, one active target and one current inferior/thread.  In order to
maintain multiple thread lists cleanly, we probably have to lift these
restrictions and get rid of the global variables current_target,
thread_list, inferior_ptid, etc., or most of their uses.  That's my
preliminary conclusion, anyway.  Alternate suggestions are welcome.

--
Andreas
  
John Baldwin May 19, 2017, 4:24 p.m. UTC | #16
On Friday, May 19, 2017 05:24:09 PM Andreas Arnez wrote:
> On Fri, May 19 2017, Yao Qi wrote:
> 
> > Philipp Rudo <prudo@linux.vnet.ibm.com> writes:
> >
> >> * Implement separate thread_lists.
> >>   Allow every target to manage its own thread_list.  Heavy impact for you and a
> >>   lot work for me...
> >
> > Hi Philipp,
> > before you spend a lot of time implementing this, it is better to start
> > an RFC discussion on an appropriate time, so that people can well
> > understand why do we need this change.
> 
> FYI, I have just started investigating this a bit.
> 
> The reason for multiple thread lists has been covered in some of the
> discussions already, but let me give my few cents.
> 
> In the kernel live debug scenario we conceptually have two different
> thread models layered on top of each other:
> 
> * LK target: Thread == Linux kernel thread
> * Remote target: Thread == CPU
> 
> If we represent CPUs and Linux threads in a single thread list, then it
> becomes difficult to maintain consistency between the LK target and the
> remote target: Who owns which parts of the thread_info?  How to
> guarantee unique ptids across the board, etc.?  Not to speak of the
> confusing "info threads" output if CPUs and threads are munged
> together.
> 
> Unfortunately many places in GDB assume that there is just one thread
> list, one active target and one current inferior/thread.  In order to
> maintain multiple thread lists cleanly, we probably have to lift these
> restrictions and get rid of the global variables current_target,
> thread_list, inferior_ptid, etc., or most of their uses.  That's my
> preliminary conclusion, anyway.  Alternate suggestions are welcome.

FreeBSD's kernel GDB bits (which I maintain) have a similar issue, though for
now we only export kernel threads as threads in GDB and don't support CPUs as
a GDB-visible thing.  In some ways the model I would personally like would be
to have conceptual "layers" that you can bounce up and down between kind of
like a stack, but in this case a stack of thread targets, so that I could do
a kind of 'thread_down' and now 'info threads' would only show me CPUs, allow
me to select CPUs, etc. but then have a 'thread_up' to pop back up to the
kernel thread layer.  The best model I can think of is that this is similar
to M:N user-thread implementations where you have user threads multiplexed
onto LWPs.  In such a world (which I'm not sure many OS's use these days) it
would also be nice to kind of bounce between the worlds.  (In fact, the
model I have been toying with but have not yet implemented for adapting
FreeBSD's current kernel target support to qemu or the GDB stub I'm hacking
on for FreeBSD's native bhyve hypervisor would be to treat vCPUs as LWPs
so their ptid would have lwp == vcpu, and kernel-level threads as "threads",
so their ptid would have tid == kernel thread id).
  
Andreas Arnez May 19, 2017, 5:05 p.m. UTC | #17
On Fri, May 19 2017, John Baldwin wrote:

> FreeBSD's kernel GDB bits (which I maintain) have a similar issue, though for
> now we only export kernel threads as threads in GDB and don't support CPUs as
> a GDB-visible thing.  In some ways the model I would personally like would be
> to have conceptual "layers" that you can bounce up and down between kind of
> like a stack, but in this case a stack of thread targets, so that I could do
> a kind of 'thread_down' and now 'info threads' would only show me CPUs, allow
> me to select CPUs, etc. but then have a 'thread_up' to pop back up to the
> kernel thread layer.

Exactly!  Note that GDB already has a stack of "layers" -- the target
stack.  Thus I'm considering commands like "target up/down" for this
purpose.  Of course this requires per-target thread lists.

> The best model I can think of is that this is similar to M:N
> user-thread implementations where you have user threads multiplexed
> onto LWPs.  In such a world (which I'm not sure many OS's use these
> days) it would also be nice to kind of bounce between the worlds.

M:N user-thread implementations have probably become more popular with
Go.  In that scenario we have the following layers:

* Threads == Goroutines (user-thread implementation)
* Threads == OS threads

> (In fact, the model I have been toying with but have not yet
> implemented for adapting FreeBSD's current kernel target support to
> qemu or the GDB stub I'm hacking on for FreeBSD's native bhyve
> hypervisor would be to treat vCPUs as LWPs so their ptid would have
> lwp == vcpu, and kernel-level threads as "threads", so their ptid
> would have tid == kernel thread id).

So kernel-level threads cannot be rescheduled on a different vCPU?

--
Andreas
  
John Baldwin May 19, 2017, 5:21 p.m. UTC | #18
On Friday, May 19, 2017 07:05:47 PM Andreas Arnez wrote:
> On Fri, May 19 2017, John Baldwin wrote:
> 
> > FreeBSD's kernel GDB bits (which I maintain) have a similar issue, though for
> > now we only export kernel threads as threads in GDB and don't support CPUs as
> > a GDB-visible thing.  In some ways the model I would personally like would be
> > to have conceptual "layers" that you can bounce up and down between kind of
> > like a stack, but in this case a stack of thread targets, so that I could do
> > a kind of 'thread_down' and now 'info threads' would only show me CPUs, allow
> > me to select CPUs, etc. but then have a 'thread_up' to pop back up to the
> > kernel thread layer.
> 
> Exactly!  Note that GDB already has a stack of "layers" -- the target
> stack.  Thus I'm considering commands like "target up/down" for this
> purpose.  Of course this requires per-target thread lists.

Yes, a target up/down might work.  Right now you can push/pop targets so in
theory you can do this today with "target push kthread" and then "target pop".
I hadn't played with this enough to know if that would be sufficient or not
or if we wanted the targets to be more persistent to avoid having to recreate
the thread list during each push.  One thing I wanted to look at in more
detail is how this interaction worked for the older M:N threading targets.
FreeBSD used to use M:N threading in userland but abandoned that a while ago.
The old thread target for that used libthread_db and you only had the one
thread list, never a way to pop back down to the LWP view.

> > The best model I can think of is that this is similar to M:N
> > user-thread implementations where you have user threads multiplexed
> > onto LWPs.  In such a world (which I'm not sure many OS's use these
> > days) it would also be nice to kind of bounce between the worlds.
> 
> M:N user-thread implementations have probably become more popular with
> Go.  In that scenario we have the following layers:
> 
> * Threads == Goroutines (user-thread implementation)
> * Threads == OS threads

Hmm.

> > (In fact, the model I have been toying with but have not yet
> > implemented for adapting FreeBSD's current kernel target support to
> > qemu or the GDB stub I'm hacking on for FreeBSD's native bhyve
> > hypervisor would be to treat vCPUs as LWPs so their ptid would have
> > lwp == vcpu, and kernel-level threads as "threads", so their ptid
> > would have tid == kernel thread id).
> 
> So kernel-level threads can not be rescheduled on a different vCPU?

They definitely can.  The same is true for user-level thread with LWPs
on systems with M:N threading (e.g. scheduler activations on Solaris
or FreeBSD's old KSE M:N threading model).
  
Andreas Arnez May 22, 2017, 10:18 a.m. UTC | #19
On Fri, May 19 2017, John Baldwin wrote:

> On Friday, May 19, 2017 07:05:47 PM Andreas Arnez wrote:
>> On Fri, May 19 2017, John Baldwin wrote:
>> 
>> > FreeBSD's kernel GDB bits (which I maintain) have a similar issue, though for
>> > now we only export kernel threads as threads in GDB and don't support CPUs as
>> > a GDB-visible thing.  In some ways the model I would personally like would be
>> > to have conceptual "layers" that you can bounce up and down between kind of
>> > like a stack, but in this case a stack of thread targets, so that I could do
>> > a kind of 'thread_down' and now 'info threads' would only show me CPUs, allow
>> > me to select CPUs, etc. but then have a 'thread_up' to pop back up to the
>> > kernel thread layer.
>> 
>> Exactly!  Note that GDB already has a stack of "layers" -- the target
>> stack.  Thus I'm considering commands like "target up/down" for this
>> purpose.  Of course this requires per-target thread lists.
>
> Yes, a target up/down might work.  Right now you can push/pop targets so in
> theory you can do this today with "target push kthread" and then "target pop".
> I hadn't played with this enough to know if that would be sufficient or not
> or if we wanted the targets to be more persistent to avoid having to recreate
> the thread list during each push.  One thing I wanted to look at in more
> detail is how this interaction worked for the older M:N threading targets.
> FreeBSD used to use M:N threading in userland but abandoned that a while ago.
> The old thread target for that used libthread_db and you only had the one
> thread list, never a way to pop back down to the LWP view.

Right, it might be interesting to see how that interaction worked.  If you gain
any insight, please share.  From a quick glance at these targets I had
the impression that they didn't work at all with the remote target
beneath them.

--
Andreas
  

Patch

diff --git a/gdb/Makefile.in b/gdb/Makefile.in
index 0818742..9387c66 100644
--- a/gdb/Makefile.in
+++ b/gdb/Makefile.in
@@ -817,6 +817,8 @@  ALL_TARGET_OBS = \
 	iq2000-tdep.o \
 	linux-record.o \
 	linux-tdep.o \
+	lk-lists.o \
+	lk-low.o \
 	lm32-tdep.o \
 	m32c-tdep.o \
 	m32r-linux-tdep.o \
@@ -1103,6 +1105,8 @@  SFILES = \
 	jit.c \
 	language.c \
 	linespec.c \
+	lk-lists.c \
+	lk-low.c \
 	location.c \
 	m2-exp.y \
 	m2-lang.c \
@@ -1350,6 +1354,8 @@  HFILES_NO_SRCDIR = \
 	linux-nat.h \
 	linux-record.h \
 	linux-tdep.h \
+	lk-lists.h \
+	lk-low.h \
 	location.h \
 	m2-lang.h \
 	m32r-tdep.h \
@@ -2547,6 +2553,8 @@  ALLDEPFILES = \
 	linux-fork.c \
 	linux-record.c \
 	linux-tdep.c \
+	lk-lists.c \
+	lk-low.c \
 	lm32-tdep.c \
 	m32r-linux-nat.c \
 	m32r-linux-tdep.c \
diff --git a/gdb/configure.tgt b/gdb/configure.tgt
index cb909e7..8d87fea 100644
--- a/gdb/configure.tgt
+++ b/gdb/configure.tgt
@@ -34,6 +34,10 @@  case $targ in
     ;;
 esac
 
+# List of object files for Linux kernel support.  To be included in *-linux*
+# targets which support Linux kernel debugging.
+lk_target_obs="lk-lists.o lk-low.o"
+
 # map target info into gdb names.
 
 case "${targ}" in
@@ -479,7 +483,7 @@  powerpc*-*-*)
 s390*-*-linux*)
 	# Target: S390 running Linux
 	gdb_target_obs="s390-linux-tdep.o solib-svr4.o linux-tdep.o \
-			linux-record.o"
+			linux-record.o ${lk_target_obs}"
 	build_gdbserver=yes
 	;;
 
diff --git a/gdb/gdbarch.c b/gdb/gdbarch.c
index 87eafb2..5509a6c 100644
--- a/gdb/gdbarch.c
+++ b/gdb/gdbarch.c
@@ -349,6 +349,7 @@  struct gdbarch
   gdbarch_addressable_memory_unit_size_ftype *addressable_memory_unit_size;
   char ** disassembler_options;
   const disasm_options_t * valid_disassembler_options;
+  gdbarch_lk_init_private_ftype *lk_init_private;
 };
 
 /* Create a new ``struct gdbarch'' based on information provided by
@@ -1139,6 +1140,12 @@  gdbarch_dump (struct gdbarch *gdbarch, struct ui_file *file)
                       "gdbarch_dump: iterate_over_regset_sections = <%s>\n",
                       host_address_to_string (gdbarch->iterate_over_regset_sections));
   fprintf_unfiltered (file,
+                      "gdbarch_dump: gdbarch_lk_init_private_p() = %d\n",
+                      gdbarch_lk_init_private_p (gdbarch));
+  fprintf_unfiltered (file,
+                      "gdbarch_dump: lk_init_private = <%s>\n",
+                      host_address_to_string (gdbarch->lk_init_private));
+  fprintf_unfiltered (file,
                       "gdbarch_dump: long_bit = %s\n",
                       plongest (gdbarch->long_bit));
   fprintf_unfiltered (file,
@@ -5008,6 +5015,30 @@  set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch,
   gdbarch->valid_disassembler_options = valid_disassembler_options;
 }
 
+int
+gdbarch_lk_init_private_p (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  return gdbarch->lk_init_private != NULL;
+}
+
+void
+gdbarch_lk_init_private (struct gdbarch *gdbarch)
+{
+  gdb_assert (gdbarch != NULL);
+  gdb_assert (gdbarch->lk_init_private != NULL);
+  if (gdbarch_debug >= 2)
+    fprintf_unfiltered (gdb_stdlog, "gdbarch_lk_init_private called\n");
+  gdbarch->lk_init_private (gdbarch);
+}
+
+void
+set_gdbarch_lk_init_private (struct gdbarch *gdbarch,
+                             gdbarch_lk_init_private_ftype lk_init_private)
+{
+  gdbarch->lk_init_private = lk_init_private;
+}
+
 
 /* Keep a registry of per-architecture data-pointers required by GDB
    modules.  */
diff --git a/gdb/gdbarch.h b/gdb/gdbarch.h
index 34f82a7..c03bf00 100644
--- a/gdb/gdbarch.h
+++ b/gdb/gdbarch.h
@@ -1553,6 +1553,13 @@  extern void set_gdbarch_disassembler_options (struct gdbarch *gdbarch, char ** d
 
 extern const disasm_options_t * gdbarch_valid_disassembler_options (struct gdbarch *gdbarch);
 extern void set_gdbarch_valid_disassembler_options (struct gdbarch *gdbarch, const disasm_options_t * valid_disassembler_options);
+/* Initialize architecture dependent private data for the linux-kernel target.  */
+
+extern int gdbarch_lk_init_private_p (struct gdbarch *gdbarch);
+
+typedef void (gdbarch_lk_init_private_ftype) (struct gdbarch *gdbarch);
+extern void gdbarch_lk_init_private (struct gdbarch *gdbarch);
+extern void set_gdbarch_lk_init_private (struct gdbarch *gdbarch, gdbarch_lk_init_private_ftype *lk_init_private);
 
 /* Definition for an unknown syscall, used basically in error-cases.  */
 #define UNKNOWN_SYSCALL (-1)
diff --git a/gdb/gdbarch.sh b/gdb/gdbarch.sh
index 39b1f94..cad45d1 100755
--- a/gdb/gdbarch.sh
+++ b/gdb/gdbarch.sh
@@ -1167,6 +1167,10 @@  m:int:addressable_memory_unit_size:void:::default_addressable_memory_unit_size::
 v:char **:disassembler_options:::0:0::0:pstring_ptr (gdbarch->disassembler_options)
 v:const disasm_options_t *:valid_disassembler_options:::0:0::0:host_address_to_string (gdbarch->valid_disassembler_options)
 
+# Initialize architecture dependent private data for the linux-kernel
+# target.
+M:void:lk_init_private:void:
+
 EOF
 }
 
diff --git a/gdb/lk-lists.c b/gdb/lk-lists.c
new file mode 100644
index 0000000..55d11bd
--- /dev/null
+++ b/gdb/lk-lists.c
@@ -0,0 +1,47 @@ 
+/* Iterators for internal data structures of the Linux kernel.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+
+#include "inferior.h"
+#include "lk-lists.h"
+#include "lk-low.h"
+
+/* Returns next entry from struct list_head CURR while iterating field
+   SNAME->FNAME.  */
+
+CORE_ADDR
+lk_list_head_next (CORE_ADDR curr, const char *sname, const char *fname)
+{
+  CORE_ADDR next, next_prev;
+
+  /* We must always assume that the data we handle is corrupted.  Thus use
+     curr->next->prev == curr as a sanity check.  */
+  next = lk_read_addr (curr + LK_OFFSET (list_head, next));
+  next_prev = lk_read_addr (next + LK_OFFSET (list_head, prev));
+
+  if (!curr || curr != next_prev)
+    {
+      error (_("Memory corruption detected while iterating list_head at "\
+	       "0x%s belonging to list %s->%s."),
+	     phex (curr, lk_builtin_type_size (unsigned_long)), sname, fname);
+    }
+
+  return next;
+}
diff --git a/gdb/lk-lists.h b/gdb/lk-lists.h
new file mode 100644
index 0000000..f9c2a85
--- /dev/null
+++ b/gdb/lk-lists.h
@@ -0,0 +1,56 @@ 
+/* Iterators for internal data structures of the Linux kernel.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef __LK_LISTS_H__
+#define __LK_LISTS_H__
+
+extern CORE_ADDR lk_list_head_next (CORE_ADDR curr, const char *sname,
+				    const char *fname);
+
+/* Iterator over field SNAME->FNAME of type struct list_head starting at
+   address START of type struct list_head.  This iterator is intended to be
+   used for lists initialized with the macro LIST_HEAD (include/linux/list.h)
+   in the kernel, i.e. lists where START is a global variable of type struct
+   list_head and _not_ of type struct SNAME like the rest of the list.  Thus
+   START will not be iterated over but only be used to start/terminate the
+   iteration.  */
+
+#define lk_list_for_each(next, start, sname, fname)		\
+  for ((next) = lk_list_head_next ((start), #sname, #fname);	\
+       (next) != (start);					\
+       (next) = lk_list_head_next ((next), #sname, #fname))
+
+/* Iterator over struct SNAME linked together via field SNAME->FNAME of type
+   struct list_head starting at address START of type struct SNAME.  In
+   contrast to the iterator above, START is a "full" member of the list and
+   thus will be iterated over.  */
+
+#define lk_list_for_each_container(cont, start, sname, fname)	\
+  CORE_ADDR _next;						\
+  bool _first_loop = true;					\
+  for ((cont) = (start),					\
+       _next = (start) + LK_OFFSET (sname, fname);		\
+								\
+       (cont) != (start) || _first_loop;			\
+								\
+       _next = lk_list_head_next (_next, #sname, #fname),	\
+       (cont) = LK_CONTAINER_OF (_next, sname, fname),		\
+       _first_loop = false)
+
+#endif /* __LK_LISTS_H__ */
diff --git a/gdb/lk-low.c b/gdb/lk-low.c
new file mode 100644
index 0000000..768f228
--- /dev/null
+++ b/gdb/lk-low.c
@@ -0,0 +1,833 @@ 
+/* Basic Linux kernel support, architecture independent.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include "defs.h"
+
+#include "block.h"
+#include "exceptions.h"
+#include "frame.h"
+#include "gdbarch.h"
+#include "gdbcore.h"
+#include "gdbthread.h"
+#include "gdbtypes.h"
+#include "inferior.h"
+#include "lk-lists.h"
+#include "lk-low.h"
+#include "objfiles.h"
+#include "observer.h"
+#include "solib.h"
+#include "target.h"
+#include "value.h"
+
+#include <algorithm>
+
+struct target_ops *linux_kernel_ops = NULL;
+
+/* Initialize a private data entry for an address, where NAME is the name
+   of the symbol, i.e. variable name in Linux, ALIAS the name used to
+   retrieve the entry from hashtab, and SILENT a flag to determine if
+   errors should be ignored.
+
+   Returns a pointer to the new entry.  In case of an error, either returns
+   NULL (SILENT = TRUE) or throws an error (SILENT = FALSE).  If SILENT = TRUE
+   the caller is responsible for checking for errors.
+
+   Do not use directly, use LK_DECLARE_* macros defined in lk-low.h instead.  */
+
+struct lk_private_data *
+lk_init_addr (const char *name, const char *alias, int silent)
+{
+  struct lk_private_data *data;
+  struct bound_minimal_symbol bmsym;
+  void **new_slot;
+  void *old_slot;
+
+  if ((old_slot = lk_find (alias)) != NULL)
+    return (struct lk_private_data *) old_slot;
+
+  bmsym = lookup_minimal_symbol (name, NULL, NULL);
+
+  if (bmsym.minsym == NULL)
+    {
+      if (!silent)
+	error (_("Could not find address %s.  Aborting."), alias);
+      return NULL;
+    }
+
+  data = XCNEW (struct lk_private_data);
+  data->alias = alias;
+  data->data.addr = BMSYMBOL_VALUE_ADDRESS (bmsym);
+
+  new_slot = lk_find_slot (alias);
+  *new_slot = data;
+
+  return data;
+}
+
+/* Same as lk_init_addr but for structs.  */
+
+struct lk_private_data *
+lk_init_struct (const char *name, const char *alias, int silent)
+{
+  struct lk_private_data *data;
+  const struct block *global;
+  const struct symbol *sym;
+  struct type *type;
+  void **new_slot;
+  void *old_slot;
+
+  if ((old_slot = lk_find (alias)) != NULL)
+    return (struct lk_private_data *) old_slot;
+
+  global = block_global_block (get_selected_block (0));
+  sym = lookup_symbol (name, global, STRUCT_DOMAIN, NULL).symbol;
+
+  if (sym != NULL)
+    {
+      type = SYMBOL_TYPE (sym);
+      goto out;
+    }
+
+  /* Check for "typedef struct { ... } name;"-like definitions.  */
+  sym = lookup_symbol (name, global, VAR_DOMAIN, NULL).symbol;
+  if (sym == NULL)
+    goto error;
+
+  type = check_typedef (SYMBOL_TYPE (sym));
+
+  if (TYPE_CODE (type) == TYPE_CODE_STRUCT)
+    goto out;
+
+error:
+  if (!silent)
+    error (_("Could not find %s.  Aborting."), alias);
+
+  return NULL;
+
+out:
+  data = XCNEW (struct lk_private_data);
+  data->alias = alias;
+  data->data.type = type;
+
+  new_slot = lk_find_slot (alias);
+  *new_slot = data;
+
+  return data;
+}
+
+/* Nearly the same as lk_init_addr, with the difference that two names are
+   needed, i.e. the struct name S_NAME containing the field with name
+   F_NAME.  */
+
+struct lk_private_data *
+lk_init_field (const char *s_name, const char *f_name,
+	       const char *s_alias, const char *f_alias,
+	       int silent)
+{
+  struct lk_private_data *data;
+  struct lk_private_data *parent;
+  struct field *first, *last, *field;
+  void **new_slot;
+  void *old_slot;
+
+  if ((old_slot = lk_find (f_alias)) != NULL)
+    return (struct lk_private_data *) old_slot;
+
+  parent = lk_find (s_alias);
+  if (parent == NULL)
+    {
+      parent = lk_init_struct (s_name, s_alias, silent);
+
+      /* Only SILENT == true needed, as otherwise lk_init_struct would throw
+	 an error.  */
+      if (parent == NULL)
+	return NULL;
+    }
+
+  first = TYPE_FIELDS (parent->data.type);
+  last = first + TYPE_NFIELDS (parent->data.type);
+  for (field = first; field < last; field++)
+    {
+      if (streq (field->name, f_name))
+	break;
+    }
+
+  if (field == last)
+    {
+      if (!silent)
+	error (_("Could not find field %s->%s.  Aborting."), s_alias, f_name);
+      return NULL;
+    }
+
+  data = XCNEW (struct lk_private_data);
+  data->alias = f_alias;
+  data->data.field = field;
+
+  new_slot = lk_find_slot (f_alias);
+  *new_slot = data;
+
+  return data;
+}
+
+/* Map cpu number CPU to the original PTID from target beneath.  */
+
+static ptid_t
+lk_cpu_to_old_ptid (const int cpu)
+{
+  struct lk_ptid_map *ptid_map;
+
+  for (ptid_map = LK_PRIVATE->old_ptid; ptid_map;
+       ptid_map = ptid_map->next)
+    {
+      if (ptid_map->cpu == cpu)
+	return ptid_map->old_ptid;
+    }
+
+  error (_("Could not map CPU %d to original PTID.  Aborting."), cpu);
+}
+
+/* Helper functions to read and return basic types at a given ADDRess.  */
+
+/* Read and return the integer value at address ADDR.  */
+
+int
+lk_read_int (CORE_ADDR addr)
+{
+  size_t int_size = lk_builtin_type_size (int);
+  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
+  return read_memory_integer (addr, int_size, endian);
+}
+
+/* Read and return the unsigned integer value at address ADDR.  */
+
+unsigned int
+lk_read_uint (CORE_ADDR addr)
+{
+  size_t uint_size = lk_builtin_type_size (unsigned_int);
+  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
+  return read_memory_integer (addr, uint_size, endian);
+}
+
+/* Read and return the long integer value at address ADDR.  */
+
+LONGEST
+lk_read_long (CORE_ADDR addr)
+{
+  size_t long_size = lk_builtin_type_size (long);
+  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
+  return read_memory_integer (addr, long_size, endian);
+}
+
+/* Read and return the unsigned long integer value at address ADDR.  */
+
+ULONGEST
+lk_read_ulong (CORE_ADDR addr)
+{
+  size_t ulong_size = lk_builtin_type_size (unsigned_long);
+  enum bfd_endian endian = gdbarch_byte_order (current_inferior ()->gdbarch);
+  return read_memory_unsigned_integer (addr, ulong_size, endian);
+}
+
+/* Read and return the address value at address ADDR.  */
+
+CORE_ADDR
+lk_read_addr (CORE_ADDR addr)
+{
+  return (CORE_ADDR) lk_read_ulong (addr);
+}
+
+/* Read a bitmap at a given ADDRess of size SIZE (in bits).  Allocate and
+   return an array of ulongs.  The caller is responsible for freeing the
+   array when it is no longer needed.  */
+
+ULONGEST *
+lk_read_bitmap (CORE_ADDR addr, size_t size)
+{
+  ULONGEST *bitmap;
+  size_t ulong_size, len;
+
+  ulong_size = lk_builtin_type_size (unsigned_long);
+  len = LK_DIV_ROUND_UP (size, ulong_size * LK_BITS_PER_BYTE);
+  bitmap = XNEWVEC (ULONGEST, len);
+
+  for (size_t i = 0; i < len; i++)
+    bitmap[i] = lk_read_ulong (addr + i * ulong_size);
+
+  return bitmap;
+}
+
+/* Return the next set bit in bitmap BITMAP of size SIZE (in bits)
+   starting from bit (index) BIT.  Return SIZE when the end of the bitmap
+   was reached.  To iterate over all set bits use macro
+   LK_BITMAP_FOR_EACH_SET_BIT defined in lk-low.h.  */
+
+size_t
+lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size, size_t bit)
+{
+  size_t ulong_size, bits_per_ulong, elt;
+
+  ulong_size = lk_builtin_type_size (unsigned_long);
+  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
+  elt = bit / bits_per_ulong;
+
+  while (bit < size)
+    {
+      /* FIXME: Explain why using lsb0 bit order.  */
+      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
+	return bit;
+
+      bit++;
+      if (bit % bits_per_ulong == 0)
+	elt++;
+    }
+
+  return size;
+}
+
+/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP
+   with size SIZE (in bits).  */
+
+size_t
+lk_bitmap_hweight (ULONGEST *bitmap, size_t size)
+{
+  size_t ulong_size, bit, bits_per_ulong, elt, retval;
+
+  ulong_size = lk_builtin_type_size (unsigned_long);
+  bits_per_ulong = ulong_size * LK_BITS_PER_BYTE;
+  elt = bit = 0;
+  retval = 0;
+
+  while (bit < size)
+    {
+      if (bitmap[elt] & (1UL << (bit % bits_per_ulong)))
+	retval++;
+
+      bit++;
+      if (bit % bits_per_ulong == 0)
+	elt++;
+    }
+
+  return retval;
+}
+
+/* Provide the per_cpu_offset of cpu CPU.  See comment in lk-low.h for
+   details.  */
+
+CORE_ADDR
+lk_get_percpu_offset (unsigned int cpu)
+{
+  size_t ulong_size = lk_builtin_type_size (unsigned_long);
+  CORE_ADDR percpu_elt;
+
+  /* Give the architecture a chance to overwrite default behaviour.  */
+  if (LK_HOOK->get_percpu_offset)
+      return LK_HOOK->get_percpu_offset (cpu);
+
+  percpu_elt = LK_ADDR (__per_cpu_offset) + (ulong_size * cpu);
+  return lk_read_addr (percpu_elt);
+}
+
+
+/* Test if a given task TASK is running.  See comment in lk-low.h for
+   details.  */
+
+unsigned int
+lk_task_running (CORE_ADDR task)
+{
+  ULONGEST *cpu_online_mask;
+  size_t size;
+  unsigned int cpu;
+  struct cleanup *old_chain;
+
+  size = LK_BITMAP_SIZE (cpumask);
+  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
+  old_chain = make_cleanup (xfree, cpu_online_mask);
+
+  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
+    {
+      CORE_ADDR rq;
+      CORE_ADDR curr;
+
+      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
+      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
+
+      if (curr == task)
+	break;
+    }
+
+  if (cpu == size)
+    cpu = LK_CPU_INVAL;
+
+  do_cleanups (old_chain);
+  return cpu;
+}
+
+/* Update running tasks with information from struct rq->curr. */
+
+static void
+lk_update_running_tasks ()
+{
+  ULONGEST *cpu_online_mask;
+  size_t size;
+  unsigned int cpu;
+  struct cleanup *old_chain;
+
+  size = LK_BITMAP_SIZE (cpumask);
+  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
+  old_chain = make_cleanup (xfree, cpu_online_mask);
+
+  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
+    {
+      struct thread_info *tp;
+      CORE_ADDR rq, curr;
+      LONGEST pid, inf_pid;
+      ptid_t new_ptid, old_ptid;
+
+      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
+      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
+      pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
+      inf_pid = current_inferior ()->pid;
+
+      new_ptid = ptid_build (inf_pid, pid, curr);
+      old_ptid = lk_cpu_to_old_ptid (cpu); /* FIXME not suitable for
+					      running targets? */
+
+      tp = find_thread_ptid (old_ptid);
+      if (tp && tp->state != THREAD_EXITED)
+	thread_change_ptid (old_ptid, new_ptid);
+    }
+  do_cleanups (old_chain);
+}
+
+/* Update sleeping tasks by walking the task_structs starting from
+   init_task.  */
+
+static void
+lk_update_sleeping_tasks ()
+{
+  CORE_ADDR init_task, task, thread;
+  int inf_pid;
+
+  inf_pid = current_inferior ()->pid;
+  init_task = LK_ADDR (init_task);
+
+  lk_list_for_each_container (task, init_task, task_struct, tasks)
+    {
+      lk_list_for_each_container (thread, task, task_struct, thread_group)
+	{
+	  int pid;
+	  ptid_t ptid;
+	  struct thread_info *tp;
+
+	  pid = lk_read_int (thread + LK_OFFSET (task_struct, pid));
+	  ptid = ptid_build (inf_pid, pid, thread);
+
+	  tp = find_thread_ptid (ptid);
+	  if (tp == NULL || tp->state == THREAD_EXITED)
+	    add_thread (ptid);
+	}
+    }
+}
+
+/* Function for targets to_update_thread_list hook.  */
+
+static void
+lk_update_thread_list (struct target_ops *target)
+{
+  prune_threads ();
+  lk_update_running_tasks ();
+  lk_update_sleeping_tasks ();
+}
+
+/* Function for targets to_fetch_registers hook.  */
+
+static void
+lk_fetch_registers (struct target_ops *target,
+		    struct regcache *regcache, int regnum)
+{
+  CORE_ADDR task;
+  unsigned int cpu;
+
+  task = (CORE_ADDR) ptid_get_tid (regcache_get_ptid (regcache));
+  cpu = lk_task_running (task);
+
+  /* Let the target beneath fetch registers of running tasks.  */
+  if (cpu != LK_CPU_INVAL)
+    {
+      struct cleanup *old_inferior_ptid;
+
+      old_inferior_ptid = save_inferior_ptid ();
+      inferior_ptid = lk_cpu_to_old_ptid (cpu);
+      linux_kernel_ops->beneath->to_fetch_registers (target, regcache, regnum);
+      do_cleanups (old_inferior_ptid);
+    }
+  else
+    {
+      struct gdbarch *gdbarch;
+      unsigned int i;
+
+      LK_HOOK->get_registers (task, target, regcache, regnum);
+
+      /* Mark all registers not found as unavailable.  */
+      gdbarch = get_regcache_arch (regcache);
+      for (i = 0; i < gdbarch_num_regs (gdbarch); i++)
+	{
+	  if (regcache_register_status (regcache, i) == REG_UNKNOWN)
+	    regcache_raw_supply (regcache, i, NULL);
+	}
+    }
+}
+
+/* Function for targets to_pid_to_str hook.  Marks running tasks with an
+   asterisk "*".  */
+
+static char *
+lk_pid_to_str (struct target_ops *target, ptid_t ptid)
+{
+  static char buf[64];
+  long pid;
+  CORE_ADDR task;
+
+  pid = ptid_get_lwp (ptid);
+  task = (CORE_ADDR) ptid_get_tid (ptid);
+
+  xsnprintf (buf, sizeof (buf), "PID: %5li%s, 0x%s",
+	     pid, ((lk_task_running (task) != LK_CPU_INVAL) ? "*" : ""),
+	     phex (task, lk_builtin_type_size (unsigned_long)));
+
+  return buf;
+}
+
+/* Function for targets to_thread_name hook.  */
+
+static const char *
+lk_thread_name (struct target_ops *target, struct thread_info *ti)
+{
+  static char buf[LK_TASK_COMM_LEN + 1];
+  char tmp[LK_TASK_COMM_LEN + 1];
+  CORE_ADDR task, comm;
+  size_t size;
+
+  size = std::min ((unsigned int) LK_TASK_COMM_LEN,
+		   LK_ARRAY_LEN(LK_FIELD (task_struct, comm)));
+
+  task = (CORE_ADDR) ptid_get_tid (ti->ptid);
+  comm = task + LK_OFFSET (task_struct, comm);
+  read_memory (comm, (gdb_byte *) tmp, size);
+
+  xsnprintf (buf, sizeof (buf), "%-16s", tmp);
+
+  return buf;
+}
+
+/* Functions to initialize and free target_ops and its private data.  As well
+   as functions for targets to_open/close/detach hooks.  */
+
+/* Check if OBJFILE is a Linux kernel.  */
+
+static int
+lk_is_linux_kernel (struct objfile *objfile)
+{
+  int ok = 0;
+
+  if (objfile == NULL || !(objfile->flags & OBJF_MAINLINE))
+    return 0;
+
+  ok += lookup_minimal_symbol ("linux_banner", NULL, objfile).minsym != NULL;
+  ok += lookup_minimal_symbol ("_stext", NULL, objfile).minsym != NULL;
+  ok += lookup_minimal_symbol ("_etext", NULL, objfile).minsym != NULL;
+
+  return (ok > 2);
+}
+
+/* Initialize struct lk_private.  */
+
+static void
+lk_init_private ()
+{
+  linux_kernel_ops->to_data = XCNEW (struct lk_private);
+  LK_PRIVATE->hooks = XCNEW (struct lk_private_hooks);
+  LK_PRIVATE->data = htab_create_alloc (31, (htab_hash) lk_hash_private_data,
+					(htab_eq) lk_private_data_eq, NULL,
+					xcalloc, xfree);
+}
+
+/* Initialize architecture independent private data.  Must be called
+   _after_ the symbol tables have been initialized.  */
+
+static void
+lk_init_private_data ()
+{
+  if (LK_PRIVATE->data != NULL)
+    htab_empty (LK_PRIVATE->data);
+
+  LK_DECLARE_FIELD (task_struct, tasks);
+  LK_DECLARE_FIELD (task_struct, pid);
+  LK_DECLARE_FIELD (task_struct, tgid);
+  LK_DECLARE_FIELD (task_struct, thread_group);
+  LK_DECLARE_FIELD (task_struct, comm);
+  LK_DECLARE_FIELD (task_struct, thread);
+
+  LK_DECLARE_FIELD (list_head, next);
+  LK_DECLARE_FIELD (list_head, prev);
+
+  LK_DECLARE_FIELD (rq, curr);
+
+  LK_DECLARE_FIELD (cpumask, bits);
+
+  LK_DECLARE_ADDR (init_task);
+  LK_DECLARE_ADDR (runqueues);
+  LK_DECLARE_ADDR (__per_cpu_offset);
+  LK_DECLARE_ADDR (init_mm);
+
+  LK_DECLARE_ADDR_ALIAS (__cpu_online_mask, cpu_online_mask);	/* linux 4.5+ */
+  LK_DECLARE_ADDR_ALIAS (cpu_online_bits, cpu_online_mask);	/* linux -4.4 */
+  if (LK_ADDR (cpu_online_mask) == -1)
+    error (_("Could not find address cpu_online_mask.  Aborting."));
+}
+
+/* Frees the cpu to old ptid map.  */
+
+static void
+lk_free_ptid_map ()
+{
+  while (LK_PRIVATE->old_ptid)
+    {
+      struct lk_ptid_map *tmp;
+
+      tmp = LK_PRIVATE->old_ptid;
+      LK_PRIVATE->old_ptid = tmp->next;
+      XDELETE (tmp);
+    }
+}
+
+/* Initialize the cpu to old ptid map.  Prefer the arch-dependent
+   map_running_task_to_cpu hook if provided, else assume that the PID used
+   by the target beneath is the same as the PID in the task's task_struct.
+   See comment on lk_ptid_map in lk-low.h for details.  */
+
+static void
+lk_init_ptid_map ()
+{
+  struct thread_info *ti;
+  ULONGEST *cpu_online_mask;
+  size_t size;
+  unsigned int cpu;
+  struct cleanup *old_chain;
+
+  if (LK_PRIVATE->old_ptid != NULL)
+    lk_free_ptid_map ();
+
+  size = LK_BITMAP_SIZE (cpumask);
+  cpu_online_mask = lk_read_bitmap (LK_ADDR (cpu_online_mask), size);
+  old_chain = make_cleanup (xfree, cpu_online_mask);
+
+  ALL_THREADS (ti)
+    {
+      struct lk_ptid_map *ptid_map = XCNEW (struct lk_ptid_map);
+      CORE_ADDR rq, curr;
+      int pid;
+
+      /* Give the architecture a chance to overwrite default behaviour.  */
+      if (LK_HOOK->map_running_task_to_cpu)
+	{
+	  ptid_map->cpu = LK_HOOK->map_running_task_to_cpu (ti);
+	}
+      else
+	{
+	  LK_BITMAP_FOR_EACH_SET_BIT (cpu_online_mask, size, cpu)
+	    {
+	      rq = LK_ADDR (runqueues) + lk_get_percpu_offset (cpu);
+	      curr = lk_read_addr (rq + LK_OFFSET (rq, curr));
+	      pid = lk_read_int (curr + LK_OFFSET (task_struct, pid));
+
+	      if (pid == ptid_get_lwp (ti->ptid))
+		{
+		  ptid_map->cpu = cpu;
+		  break;
+		}
+	    }
+	  if (cpu == size)
+	    error (_("Could not map thread with pid %d, lwp %lu to a cpu."),
+		   ti->ptid.pid, ti->ptid.lwp);
+	}
+      ptid_map->old_ptid = ti->ptid;
+      ptid_map->next = LK_PRIVATE->old_ptid;
+      LK_PRIVATE->old_ptid = ptid_map;
+    }
+
+  do_cleanups (old_chain);
+}
+
+/* Initialize all private data and push the Linux kernel target, if not
+   already done.  */
+
+static void
+lk_try_push_target ()
+{
+  struct gdbarch *gdbarch;
+
+  gdbarch = current_inferior ()->gdbarch;
+  if (!(gdbarch && gdbarch_lk_init_private_p (gdbarch)))
+    error (_("Linux kernel debugging not supported on %s."),
+	   gdbarch_bfd_arch_info (gdbarch)->printable_name);
+
+  lk_init_private ();
+  lk_init_private_data ();
+  gdbarch_lk_init_private (gdbarch);
+  /* Check for required arch hooks.  */
+  gdb_assert (LK_HOOK->get_registers);
+
+  lk_init_ptid_map ();
+  lk_update_thread_list (linux_kernel_ops);
+
+  if (!target_is_pushed (linux_kernel_ops))
+    push_target (linux_kernel_ops);
+}
+
+/* Function for targets to_open hook.  */
+
+static void
+lk_open (const char *args, int from_tty)
+{
+  struct objfile *objfile;
+
+  if (target_is_pushed (linux_kernel_ops))
+    {
+      printf_unfiltered (_("Linux kernel target already pushed.  Aborting.\n"));
+      return;
+    }
+
+  for (objfile = current_program_space->objfiles; objfile;
+       objfile = objfile->next)
+    {
+      if (lk_is_linux_kernel (objfile)
+	  && ptid_get_pid (inferior_ptid) != 0)
+	{
+	  lk_try_push_target ();
+	  return;
+	}
+    }
+  printf_unfiltered (_("Could not find a valid Linux kernel object file.  "
+		       "Aborting.\n"));
+}
+
+/* Function for targets to_close hook.  Deletes all private data.  */
+
+static void
+lk_close (struct target_ops *ops)
+{
+  htab_delete (LK_PRIVATE->data);
+  lk_free_ptid_map ();
+  XDELETE (LK_PRIVATE->hooks);
+
+  XDELETE (LK_PRIVATE);
+  linux_kernel_ops->to_data = NULL;
+}
+
+/* Function for targets to_detach hook.  */
+
+static void
+lk_detach (struct target_ops *t, const char *args, int from_tty)
+{
+  struct target_ops *beneath = linux_kernel_ops->beneath;
+
+  unpush_target (linux_kernel_ops);
+  reinit_frame_cache ();
+  if (from_tty)
+    printf_filtered (_("Linux kernel target detached.\n"));
+
+  beneath->to_detach (beneath, args, from_tty);
+}
+
+/* Function for new objfile observer.  */
+
+static void
+lk_observer_new_objfile (struct objfile *objfile)
+{
+  if (lk_is_linux_kernel (objfile)
+      && ptid_get_pid (inferior_ptid) != 0)
+    lk_try_push_target ();
+}
+
+/* Function for inferior created observer.  */
+
+static void
+lk_observer_inferior_created (struct target_ops *ops, int from_tty)
+{
+  struct objfile *objfile;
+
+  if (ptid_get_pid (inferior_ptid) == 0)
+    return;
+
+  for (objfile = current_inferior ()->pspace->objfiles; objfile;
+       objfile = objfile->next)
+    {
+      if (lk_is_linux_kernel (objfile))
+	{
+	  lk_try_push_target ();
+	  return;
+	}
+    }
+}
+
+/* Initialize linux kernel target.  */
+
+static void
+init_linux_kernel_ops (void)
+{
+  struct target_ops *t;
+
+  if (linux_kernel_ops != NULL)
+    return;
+
+  t = XCNEW (struct target_ops);
+  t->to_shortname = "linux-kernel";
+  t->to_longname = "linux kernel support";
+  t->to_doc = "Adds support to debug the Linux kernel";
+
+  /* set t->to_data = struct lk_private in lk_init_private.  */
+
+  t->to_open = lk_open;
+  t->to_close = lk_close;
+  t->to_detach = lk_detach;
+  t->to_fetch_registers = lk_fetch_registers;
+  t->to_update_thread_list = lk_update_thread_list;
+  t->to_pid_to_str = lk_pid_to_str;
+  t->to_thread_name = lk_thread_name;
+
+  t->to_stratum = thread_stratum;
+  t->to_magic = OPS_MAGIC;
+
+  linux_kernel_ops = t;
+
+  add_target (t);
+}
+
+/* Provide a prototype to silence -Wmissing-prototypes.  */
+extern initialize_file_ftype _initialize_linux_kernel;
+
+void
+_initialize_linux_kernel (void)
+{
+  init_linux_kernel_ops ();
+
+  observer_attach_new_objfile (lk_observer_new_objfile);
+  observer_attach_inferior_created (lk_observer_inferior_created);
+}
diff --git a/gdb/lk-low.h b/gdb/lk-low.h
new file mode 100644
index 0000000..292ef97
--- /dev/null
+++ b/gdb/lk-low.h
@@ -0,0 +1,310 @@ 
+/* Basic Linux kernel support, architecture independent.
+
+   Copyright (C) 2016 Free Software Foundation, Inc.
+
+   This file is part of GDB.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#ifndef __LK_LOW_H__
+#define __LK_LOW_H__
+
+#include "target.h"
+
+extern struct target_ops *linux_kernel_ops;
+
+/* Constants copied from the Linux kernel.  */
+#define LK_TASK_COMM_LEN 16
+#define LK_BITS_PER_BYTE 8
+
+/* Definitions used in linux kernel target.  */
+#define LK_CPU_INVAL -1U
+
+/* Private data structs for this target.  */
+/* Forward declarations.  */
+struct lk_private_hooks;
+struct lk_ptid_map;
+
+/* Shorthand access to private data.  */
+#define LK_PRIVATE ((struct lk_private *) linux_kernel_ops->to_data)
+#define LK_HOOK (LK_PRIVATE->hooks)
+
+struct lk_private
+{
+  /* Hashtab for needed addresses, structs and fields.  */
+  htab_t data;
+
+  /* Linked list to map between cpu number and original ptid from target
+     beneath.  */
+  struct lk_ptid_map *old_ptid;
+
+  /* Hooks for architecture dependent functions.  */
+  struct lk_private_hooks *hooks;
+};
+
+/* We use the following convention for PTIDs:
+
+   ptid->pid = the inferior's PID
+   ptid->lwp = PID from task_struct
+   ptid->tid = address of task_struct
+
+   Using the task_struct's address as TID has two reasons.  First, we need
+   it quite often and there is no other reasonable way to pass it down.
+   Second, it helps us to distinguish swapper tasks, as they all have
+   PID = 0.
+
+   Furthermore we cannot rely on the target beneath to use the same PID as
+   the task_struct.  Thus we need a mapping between our PTIDs and the PTIDs
+   of the target beneath.  Otherwise it is impossible to pass jobs, e.g.
+   fetching registers of running tasks, to the target beneath.  */
+
+/* Private data struct to map between our and the target beneath PTID.  */
+
+struct lk_ptid_map
+{
+  struct lk_ptid_map *next;
+  unsigned int cpu;
+  ptid_t old_ptid;
+};
+
+/* Private data struct to be stored in hashtab.  */
+
+struct lk_private_data
+{
+  const char *alias;
+
+  union
+  {
+    CORE_ADDR addr;
+    struct type *type;
+    struct field *field;
+  } data;
+};
+
+/* Wrapper for htab_hash_string to work with our private data.  */
+
+static inline hashval_t
+lk_hash_private_data (const struct lk_private_data *entry)
+{
+  return htab_hash_string (entry->alias);
+}
+
+/* Function for htab_eq to work with our private data.  */
+
+static inline int
+lk_private_data_eq (const struct lk_private_data *entry,
+		    const struct lk_private_data *element)
+{
+  return streq (entry->alias, element->alias);
+}
+
+/* Wrapper for htab_find_slot to work with our private data.  Do not use
+   directly, use the macros below instead.  */
+
+static inline void **
+lk_find_slot (const char *alias)
+{
+  const struct lk_private_data dummy = { alias };
+  return htab_find_slot (LK_PRIVATE->data, &dummy, INSERT);
+}
+
+/* Wrapper for htab_find to work with our private data.  Do not use
+   directly, use the macros below instead.  */
+
+static inline struct lk_private_data *
+lk_find (const char *alias)
+{
+  const struct lk_private_data dummy = { alias };
+  return (struct lk_private_data *) htab_find (LK_PRIVATE->data, &dummy);
+}
+
+/* Functions to initialize private data.  Do not use directly, use the
+   macros below instead.  */
+
+extern struct lk_private_data *lk_init_addr (const char *name,
+					     const char *alias, int silent);
+extern struct lk_private_data *lk_init_struct (const char *name,
+					       const char *alias, int silent);
+extern struct lk_private_data *lk_init_field (const char *s_name,
+					      const char *f_name,
+					      const char *s_alias,
+					      const char *f_alias, int silent);
+
+/* The names we use to store our private data in the hashtab.  */
+
+#define LK_STRUCT_ALIAS(s_name) ("struct " #s_name)
+#define LK_FIELD_ALIAS(s_name, f_name) (#s_name " " #f_name)
+
+/* Macros to initialize addresses and fields, where (S_/F_)NAME is the
+   variable's name as used in the Linux kernel.  LK_DECLARE_FIELD also
+   initializes the corresponding struct entry.  Throws an error if no
+   symbol with the given name is found.  */
+
+#define LK_DECLARE_ADDR(name) \
+  lk_init_addr (#name, #name, 0)
+#define LK_DECLARE_FIELD(s_name, f_name) \
+  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
+		 LK_FIELD_ALIAS (s_name, f_name), 0)
+
+/* Same as LK_DECLARE_*, but returns NULL instead of throwing an error if
+   no symbol was found.  The caller is responsible for checking for
+   possible errors.  */
+
+#define LK_DECLARE_ADDR_SILENT(name) \
+  lk_init_addr (#name, #name, 1)
+#define LK_DECLARE_FIELD_SILENT(s_name, f_name) \
+  lk_init_field (#s_name, #f_name, LK_STRUCT_ALIAS (s_name), \
+		 LK_FIELD_ALIAS (s_name, f_name), 1)
+
+/* Same as LK_DECLARE_*_SILENT, but allows you to give an ALIAS name.  If
+   used for a struct, the struct has to be declared explicitly _before_ any
+   of its fields.  These macros are meant to be used when a variable in the
+   kernel was simply renamed (at least from our point of view).  The caller
+   is responsible for checking for possible errors.  */
+
+#define LK_DECLARE_ADDR_ALIAS(name, alias) \
+  lk_init_addr (#name, #alias, 1)
+#define LK_DECLARE_STRUCT_ALIAS(s_name, alias) \
+  lk_init_struct (#s_name, LK_STRUCT_ALIAS (alias), 1)
+#define LK_DECLARE_FIELD_ALIAS(s_alias, f_name, f_alias) \
+  lk_init_field (NULL, #f_name, LK_STRUCT_ALIAS (s_alias), \
+		 LK_FIELD_ALIAS (s_alias, f_alias), 1)
+
+/* Macros to retrieve private data from the hashtab.  Return NULL (-1 for
+   LK_ADDR) if no entry with the given ALIAS exists.  The caller only needs
+   to check for possible errors if that was not already done at
+   initialization.  */
+
+#define LK_ADDR(alias) \
+  (lk_find (#alias) ? (lk_find (#alias))->data.addr : -1)
+#define LK_STRUCT(alias) \
+  (lk_find (LK_STRUCT_ALIAS (alias)) \
+   ? (lk_find (LK_STRUCT_ALIAS (alias)))->data.type \
+   : NULL)
+#define LK_FIELD(s_alias, f_alias) \
+  (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)) \
+   ? (lk_find (LK_FIELD_ALIAS (s_alias, f_alias)))->data.field \
+   : NULL)
+
+
+/* Definitions for architecture dependent hooks.  */
+/* Hook to read registers from the target and supply their content
+   to the regcache.  */
+typedef void (*lk_hook_get_registers) (CORE_ADDR task,
+				       struct target_ops *target,
+				       struct regcache *regcache,
+				       int regnum);
+
+/* Hook to return the per_cpu_offset of cpu CPU.  Only architectures that
+   do not use the __per_cpu_offset array to determine the offset have to
+   supply this hook.  */
+typedef CORE_ADDR (*lk_hook_get_percpu_offset) (unsigned int cpu);
+
+/* Hook to map a running task to a logical CPU.  Required if the target
+   beneath uses a different PID than the one in struct rq.  */
+typedef unsigned int (*lk_hook_map_running_task_to_cpu) (struct thread_info *ti);
+
+struct lk_private_hooks
+{
+  /* required */
+  lk_hook_get_registers get_registers;
+
+  /* optional, required if the __per_cpu_offset array is not used to
+     determine the offset.  */
+  lk_hook_get_percpu_offset get_percpu_offset;
+
+  /* optional, required if the target beneath uses a different PID than
+     the one in struct rq.  */
+  lk_hook_map_running_task_to_cpu map_running_task_to_cpu;
+};
+
+/* Helper functions to read and return a value at a given ADDRess.  */
+extern int lk_read_int (CORE_ADDR addr);
+extern unsigned int lk_read_uint (CORE_ADDR addr);
+extern LONGEST lk_read_long (CORE_ADDR addr);
+extern ULONGEST lk_read_ulong (CORE_ADDR addr);
+extern CORE_ADDR lk_read_addr (CORE_ADDR addr);
+
+/* Reads a bitmap at a given ADDRess of size SIZE (in bits).  Allocates
+   and returns an array of ulongs.  The caller is responsible for freeing
+   the array when it is no longer needed.  */
+extern ULONGEST *lk_read_bitmap (CORE_ADDR addr, size_t size);
+
+/* Walks the bitmap BITMAP of size SIZE (in bits) starting from bit
+   (index) BIT.  Returns the index of the next set bit or SIZE when the
+   end of the bitmap was reached.  To iterate over all set bits use macro
+   LK_BITMAP_FOR_EACH_SET_BIT defined below.  */
+extern size_t lk_bitmap_find_next_bit (ULONGEST *bitmap, size_t size,
+				       size_t bit);
+#define LK_BITMAP_FOR_EACH_SET_BIT(bitmap, size, bit)			\
+  for ((bit) = lk_bitmap_find_next_bit ((bitmap), (size), 0);		\
+       (bit) < (size);							\
+       (bit) = lk_bitmap_find_next_bit ((bitmap), (size), (bit) + 1))
+
+/* Returns the size of BITMAP in bits.  */
+#define LK_BITMAP_SIZE(bitmap) \
+  (FIELD_SIZE (LK_FIELD (bitmap, bits)) * LK_BITS_PER_BYTE)
+
+/* Returns the Hamming weight, i.e. number of set bits, of bitmap BITMAP with
+   size SIZE (in bits).  */
+extern size_t lk_bitmap_hweight (ULONGEST *bitmap, size_t size);
+
+
+/* Shorthand access to the current gdbarch's builtin types and their
+   size (in bytes).  For TYPE replace spaces " " by underscores "_", e.g.
+   "unsigned int" => "unsigned_int".  */
+#define lk_builtin_type(type)					\
+  (builtin_type (current_inferior ()->gdbarch)->builtin_##type)
+#define lk_builtin_type_size(type)		\
+  (lk_builtin_type (type)->length)
+
+/* If field FIELD is an array, returns its length (in number of elements).  */
+#define LK_ARRAY_LEN(field)			\
+  (FIELD_SIZE (field) / FIELD_TARGET_SIZE (field))
+
+/* Shorthand access to the offset of field F_NAME in struct S_NAME.  */
+#define LK_OFFSET(s_name, f_name)		\
+  (FIELD_OFFSET (LK_FIELD (s_name, f_name)))
+
+/* Returns the container of field FNAME of struct SNAME located at address
+   ADDR.  */
+#define LK_CONTAINER_OF(addr, sname, fname)		\
+  ((addr) - LK_OFFSET (sname, fname))
+
+/* Divides numerator N by denominator D and rounds up the result.  */
+#define LK_DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
+
+
+/* Additional access macros to fields in the style of gdbtypes.h */
+/* Returns the size of field FIELD (in bytes).  If FIELD is an array,
+   returns the size of the whole array.  */
+#define FIELD_SIZE(field)			\
+  TYPE_LENGTH (check_typedef (FIELD_TYPE (*field)))
+
+/* Returns the size of the target type of field FIELD (in bytes).  If
+   FIELD is an array, returns the size of its elements.  */
+#define FIELD_TARGET_SIZE(field)		\
+  TYPE_LENGTH (check_typedef (TYPE_TARGET_TYPE (FIELD_TYPE (*field))))
+
+/* Returns the offset of field FIELD (in bytes).  */
+#define FIELD_OFFSET(field)			\
+  (FIELD_BITPOS (*field) / TARGET_CHAR_BIT)
+
+/* Provides the per_cpu_offset of cpu CPU.  If the architecture
+   provides a get_percpu_offset hook, the call is passed to it.  Otherwise
+   returns the __per_cpu_offset[CPU] element.  */
+extern CORE_ADDR lk_get_percpu_offset (unsigned int cpu);
+
+/* Tests if a given task TASK is running.  Returns either the CPU id
+   if running or LK_CPU_INVAL if not.  */
+extern unsigned int lk_task_running (CORE_ADDR task);
+#endif /* __LK_LOW_H__ */