[2/2] gdb/amdgpu: Fix debugging multiple inferiors using the ROCm runtime

Message ID 20230630145755.6500-3-lancelot.six@amd.com
State New
Series Fix debugging multi inferiors using the ROCm runtime

Commit Message

Lancelot SIX June 30, 2023, 2:57 p.m. UTC
  When debugging a multi-process application where a parent spawns
multiple child processes using the ROCm runtime, I see the following
assertion failure:

    ../../gdb/amd-dbgapi-target.c:1071: internal-error: process_one_event: Assertion `runtime_state == AMD_DBGAPI_RUNTIME_STATE_UNLOADED' failed.
    A problem internal to GDB has been detected,
    further debugging may prove unreliable.
    ----- Backtrace -----
    0x556e9a318540 gdb_internal_backtrace_1
            ../../gdb/bt-utils.c:122
    0x556e9a318540 _Z22gdb_internal_backtracev
            ../../gdb/bt-utils.c:168
    0x556e9a730224 internal_vproblem
            ../../gdb/utils.c:396
    0x556e9a7304e0 _Z15internal_verrorPKciS0_P13__va_list_tag
            ../../gdb/utils.c:476
    0x556e9a87aeb4 _Z18internal_error_locPKciS0_z
            ../../gdbsupport/errors.cc:58
    0x556e9a29f446 process_one_event
            ../../gdb/amd-dbgapi-target.c:1071
    0x556e9a29f446 process_event_queue
            ../../gdb/amd-dbgapi-target.c:1156
    0x556e9a29faf2 _ZN17amd_dbgapi_target4waitE6ptid_tP17target_waitstatus10enum_flagsI16target_wait_flagE
            ../../gdb/amd-dbgapi-target.c:1262
    0x556e9a6b0965 _Z11target_wait6ptid_tP17target_waitstatus10enum_flagsI16target_wait_flagE
            ../../gdb/target.c:2586
    0x556e9a4c221f do_target_wait_1
            ../../gdb/infrun.c:3876
    0x556e9a4d8489 operator()
            ../../gdb/infrun.c:3935
    0x556e9a4d8489 do_target_wait
            ../../gdb/infrun.c:3964
    0x556e9a4d8489 _Z20fetch_inferior_eventv
            ../../gdb/infrun.c:4365
    0x556e9a87b915 gdb_wait_for_event
            ../../gdbsupport/event-loop.cc:694
    0x556e9a87c3a9 gdb_wait_for_event
            ../../gdbsupport/event-loop.cc:593
    0x556e9a87c3a9 _Z16gdb_do_one_eventi
            ../../gdbsupport/event-loop.cc:217
    0x556e9a521689 start_event_loop
            ../../gdb/main.c:412
    0x556e9a521689 captured_command_loop
            ../../gdb/main.c:476
    0x556e9a523c04 captured_main
            ../../gdb/main.c:1320
    0x556e9a523c04 _Z8gdb_mainP18captured_main_args
            ../../gdb/main.c:1339
    0x556e9a24b1bf main
            ../../gdb/gdb.c:32
    ---------------------
    ../../gdb/amd-dbgapi-target.c:1071: internal-error: process_one_event: Assertion `runtime_state == AMD_DBGAPI_RUNTIME_STATE_UNLOADED' failed.
    A problem internal to GDB has been detected,

Before diving into why this error appears, let's explore how things are
expected to work in normal circumstances.  When a process being debugged
starts using the ROCm runtime, the following happens:

- The runtime registers itself to the driver.
- The driver creates a "runtime loaded" event and notifies the debugger
  that a new event is available by writing to a file descriptor which is
  registered in GDB's main event loop.
- GDB core calls the callback associated with this file descriptor
  (dbgapi_notifier_handler).  Because the amd-dbgapi-target is not
  pushed at this point, the handler pulls the "runtime loaded" event
  from the driver (this is the only event which can be available at this
  point) and eventually pushes the amd-dbgapi-target on the inferior's
  target stack.

In a nutshell, this is the expected AMDGPU runtime activation process.
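
The notification channel itself is just a file descriptor that GDB's main
event loop watches.  As a rough, self-contained illustration of that
mechanism (generic poll()-based code for the example only; this is not
GDB's actual event loop, and notifier_ready/event_loop are made-up names,
not the real dbgapi_notifier_handler):

    #include <poll.h>
    #include <unistd.h>
    #include <cstdio>

    /* Called when the notifier file descriptor becomes readable.  In GDB
       this role is played by dbgapi_notifier_handler; here we only drain
       the wake-up byte and report that events should be pulled.  */
    static void
    notifier_ready (int fd)
    {
      char buf[64];
      (void) read (fd, buf, sizeof buf);
      std::printf ("events pending: pull them from the driver\n");
    }

    /* Minimal event loop: block until the notifier fd is written to,
       then invoke the handler registered for it.  */
    static void
    event_loop (int notifier_fd)
    {
      struct pollfd pfd = { notifier_fd, POLLIN, 0 };
      while (poll (&pfd, 1, -1) > 0)
        if (pfd.revents & POLLIN)
          notifier_ready (notifier_fd);
    }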

From there, when new events are available regarding the GPU threads, the
same file descriptor is written to.  The callback sees that the
amd-dbgapi-target is pushed and therefore marks the
amd_dbgapi_async_event_handler.  This will later cause
amd_dbgapi_target::wait to be called.  The wait method pulls all the
available events from the driver and handles them.  It returns the
information conveyed by the first event; the other events are cached for
later calls of the wait method.
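
Loosely, this is a drain-then-cache pattern.  A minimal, self-contained
sketch of that pattern follows (illustrative only: the event type, the
"cached" queue, pull_one_event_from_driver and wait_sketch are made up for
the example and are not GDB's actual process_event_queue /
consume_one_event implementation):

    #include <deque>
    #include <optional>

    struct event { int id; };          /* Stand-in for a dbgapi event.  */
    static std::deque<event> cached;   /* Events kept for later wait calls.  */

    /* Stand-in for the driver: hand out a few fake pending events, then
       report that nothing more is pending.  */
    static std::optional<event>
    pull_one_event_from_driver ()
    {
      static int next = 0;
      if (next >= 3)
        return std::nullopt;
      return event {next++};
    }

    /* Sketch of the wait-side logic: drain everything currently pending
       (preserving the ordering), report the first event now, and keep
       the rest for subsequent calls.  */
    static std::optional<event>
    wait_sketch ()
    {
      while (std::optional<event> ev = pull_one_event_from_driver ())
        cached.push_back (*ev);

      if (cached.empty ())
        return std::nullopt;

      event first = cached.front ();
      cached.pop_front ();
      return first;
    }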

Note that because we are under the wait method, we know that the
amd-dbgapi-target is pushed on the inferior's target stack.  This implies
that the runtime activation event has already been seen.  As a
consequence, we cannot receive another event indicating that the runtime
is being activated.  This is what the failing assertion checks.
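
For reference, the check that fires is the one quoted in the error output
above; its context, paraphrased (not verbatim GDB source), is roughly:

    /* In process_one_event (amd-dbgapi-target.c:1071): once the
       amd-dbgapi target is pushed for this inferior, the only runtime
       state change we can legitimately see is the runtime being
       unloaded.  */
    gdb_assert (runtime_state == AMD_DBGAPI_RUNTIME_STATE_UNLOADED);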

In the case where we have multiple inferiors, however, there is a flaw in
what has been described above.  If one inferior (let's call it inferior
1) already has the amd-dbgapi-target pushed to its target stack and
another inferior (inferior 2) activates the ROCm runtime, here is what
can happen:

- The driver creates the runtime activation for inferior 2 and writes to
  the associated file descriptor.
- GDB has inferior 1 selected and calls target_wait for some reason.
- This prompts amd_dbgapi_target::wait to be called.  The method pulls
  all events from the driver, including the runtime activation event for
  inferior 2, leading to the insertion failure.

The fix for this problem is simple: make sure that
amd_dbgapi_target::wait only pulls events for the current inferior from
the driver.  This is what this patch implements.

This patch also includes a testcase which could fail before this patch.

This patch has been tested on a system with multiple GPUs which had more
chances to reproduce the original bug.  It has also been tested on top
of the downstream ROCgdb port which has more AMDGPU related tests.  The
testcase have been tested with `make check check-read1 check-readmore`.
---
 gdb/amd-dbgapi-target.c                       |   6 +-
 gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp | 111 ++++++++++++++++++
 gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp |  86 ++++++++++++++
 3 files changed, 201 insertions(+), 2 deletions(-)
 create mode 100644 gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
 create mode 100644 gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
  

Comments

Pedro Alves July 28, 2023, 6:01 p.m. UTC | #1
Hi Lancelot,

This LGTM with some nits below addressed.  No need for another round
of review.  Just post the updated patch, and merge it.

With that,
  Approved-By: Pedro Alves <pedro@palves.net>

On to the nits...

On 2023-06-30 15:57, Lancelot Six via Gdb-patches wrote:

> - The driver creates the runtime activation for inferior 2 and writes to
>   the associated file descriptor.
> - GDB has inferior 1 selected and calls target_wait for some reason.
> - This prompts amd_dbgapi_target::wait to be called.  The method pulls
>   all events from the driver, including the runtime activation event for
>   inferior 2, leading to the insertion failure.

insertion -> assertion.

> 
> The fix for this problem is simple.  To avoid such problem, we need to
> make sure that amd_dbgapi_target::wait only pulls events for the current
> inferior from the driver.  This is what this patch implements.
> 
> This patch also includes a testcase which could fail before this patch.
> 
> This patch has been tested on a system with multiple GPUs which had more
> chances to reproduce the original bug.  It has also been tested on top
> of the downstream ROCgdb port which has more AMDGPU related tests.  The
> testcase have been tested with `make check check-read1 check-readmore`.

"have been" -> "has been"

> ---
>  gdb/amd-dbgapi-target.c                       |   6 +-
>  gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp | 111 ++++++++++++++++++
>  gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp |  86 ++++++++++++++
>  3 files changed, 201 insertions(+), 2 deletions(-)
>  create mode 100644 gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
>  create mode 100644 gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
> 
> diff --git a/gdb/amd-dbgapi-target.c b/gdb/amd-dbgapi-target.c
> index 5565cf907fa..371f0683754 100644
> --- a/gdb/amd-dbgapi-target.c
> +++ b/gdb/amd-dbgapi-target.c
> @@ -1255,8 +1255,10 @@ amd_dbgapi_target::wait (ptid_t ptid, struct target_waitstatus *ws,
>    std::tie (event_ptid, gpu_waitstatus) = consume_one_event (ptid.pid ());
>    if (event_ptid == minus_one_ptid)
>      {
> -      /* Drain the events from the amd_dbgapi and preserve the ordering.  */
> -      process_event_queue ();
> +      /* Drain the events for the current inferior from the amd_dbgapi and
> +	 preserve the ordering.  */
> +      auto info = get_amd_dbgapi_inferior_info (current_inferior ());
> +      process_event_queue (info->process_id, AMD_DBGAPI_EVENT_KIND_NONE);

I think the process_event_queue's process_id parameter could stop having
a default argument.

>  
>        std::tie (event_ptid, gpu_waitstatus) = consume_one_event (ptid.pid ());
>        if (event_ptid == minus_one_ptid)
> diff --git a/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
> new file mode 100644
> index 00000000000..828dc0cf7d4
> --- /dev/null
> +++ b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
> @@ -0,0 +1,111 @@

...

> +      if (pid == 0)
> +	{
> +	  /* Exec to be fore the child to re-initialize the ROCm runtime.  */

I can't parse the

 "Exec to be fore the child"

comment.  I think you mean:

 "Exec the child"

?

> +	  if (execl (argv[0], argv[0], n) == -1)
> +	    {
> +	      perror ("Failed to exec");
> +	      return -1;
> +	    }
> +	}
> +    }
> +
> +  /* Wait for all children.  */
> +  int ws;
> +  pid_t ret;
> +  do
> +    ret = waitpid (-1, &ws, 0);
> +  while (!(ret == -1 && errno == ECHILD));

At <https://www.gnu.org/prep/standards/standards.html>, we have:

"Format do-while statements like this:
 
  do
    {
      a = foo (a);
    }
  while (a > 0);
"

IMO, this is more readable, and lets you keep the
variables in the scope:

  while (1)
   {
    int ws;
    pid_t ret = waitpid (-1, &ws, 0);
    if (ret == -1 && errno == ECHILD)
      break;
   }

> +
> +  /* Last break here.  */
> +  return 0;
> +}
> +
> +static int
> +child (int argc, char **argv)
> +{
> +  int dev_number;
> +  if (sscanf (argv[1], "%d", &dev_number) != 1)
> +    {
> +      fprintf (stderr, "Invalid argument \"%s\"\n", argv[1]);
> +      return -1;
> +    }
> +
> +  CHECK (hipSetDevice (dev_number));
> +  kern<<<1, 1>>> ();
> +  hipDeviceSynchronize ();
> +  return 0;
> +}
> +
> +/* When called with no argument, identify how many AMDGPU devices are
> +   available on the system and spawn one worker process per GPU.  If a
> +   command-line argument is provided, it is the index of the GPU to use.  */
> +
> +int
> +main (int argc, char **argv)
> +{
> +  if (argc <= 1)
> +    return parent (argc, argv);
> +  else
> +    return child (argc, argv);
> +}
> diff --git a/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
> new file mode 100644
> index 00000000000..3e8934645e6
> --- /dev/null
> +++ b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
> @@ -0,0 +1,86 @@
> +# Copyright 2023 Free Software Foundation, Inc.
> +
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program.  If not, see <http://www.gnu.org/licenses/>.
> +
> +# This test checks that GDB can debug multiple inferior which uses all
> +# the ROCm runtime.

# This test checks that GDB can debug multiple inferiors that all use
# the ROCm runtime.

> +
> +load_lib rocm.exp
> +
> +standard_testfile .cpp
> +
> +require allow_hipcc_tests
> +require hip_devices_support_debug_multi_process
> +
> +if {[build_executable "failed to prepare" $testfile $srcfile {debug hip}]} {
> +    return
> +}
> +
> +proc do_test {} {
> +    clean_restart $::binfile
> +    gdb_test_no_output "set non-stop on"
> +    gdb_test_no_output "set detach-on-fork off"
> +    gdb_test_no_output "set follow-fork parent"
> +
> +    with_rocm_gpu_lock {
> +	gdb_breakpoint [gdb_get_line_number "Break here"]
> +	gdb_breakpoint kern allow-pending
> +	gdb_breakpoint [gdb_get_line_number "Last break here"]
> +
> +	# Run intil we reach the first breakpoint where we can figure

"Run intil" -> "Run until".


> +	# out how many children will be spawned.
> +	gdb_test "run" "hit Breakpoint.*"
> +
> +	set num_childs [get_integer_valueof "num_devices" 0]

num_childs -> num_children ?

> +	set bp_to_see $num_childs
> +	set stopped_threads [list]
> +
> +	gdb_test_multiple "continue -a" "continue to gpu breakpoints" {
> +	    -re "Thread ($::decimal\.$::decimal)\[^\r\n\]* hit Breakpoint\[^\r\n\]*, kern \(\)\[^\r\n\]*\r\n" {
> +		lappend stopped_threads $expect_out(1,string)
> +		incr bp_to_see -1
> +		if {$bp_to_see != 0} {
> +		    exp_continue
> +		} else {
> +		    pass $gdb_test_name
> +		}
> +	    }
> +	    -re "^\[^\r\n\]*\r\n" {
> +		exp_continue
> +	    }
> +	}

Since this is non-stop, this "continue -a" will cause the first
stop to print the prompt, and other stops to not print it.  The
"-re" cases above don't explicitly handle the prompt, which seems
brittle to me.

"continue -a&" instead of ""continue -a" immediately prints the
prompt.  It would be better IMO to explicitly consume the prompt
with that.  Like (untested):

    gdb_test_multiple "continue -a &" "continue to gpu breakpoints" {
        -re "Continuing\.\r\n$gdb_prompt " {
            pass $gdb_test_name
        }
    }
    gdb_test_multiple "" "wait for gpu stops {
        -re "Thread ($::decimal\.$::decimal)\[^\r\n\]* hit Breakpoint\[^\r\n\]*, kern \(\)\[^\r\n\]*\r\n" {
            lappend stopped_threads $expect_out(1,string)
            incr bp_to_see -1
            if {$bp_to_see != 0} {
               exp_continue
            } else {
               pass $gdb_test_name
            }
        }
    }

> +
> +	# Continue all the children processes until they exit.

Maybe say:

	# Continue all the GPU kernels until all the children processes exit.

If I am not mistaken, the child processes on the CPU side are already
running at this point; only the GPU kernels were stopped.

> +	foreach thread $stopped_threads {

I would rename "stopped_threads" -> stopped_gdb_threads.

That's it.  Thanks for the fix!

Pedro Alves

> +	    set infnumber [lindex [split $thread .] 0]
> +	    gdb_test "thread $thread" "Switching to thread.*"
> +	    gdb_test_multiple "continue $thread" "" {
> +		-re "\\\[Inferior $infnumber \[^\n\r\]* exited normally\\]\r\n$::gdb_prompt " {
> +		    pass $gdb_test_name
> +		}
> +	    }
> +	}
> +
> +	gdb_test_multiple "" "reach breakpoint in main" {
> +	    -re "hit Breakpoint.*parent" {
> +		pass $gdb_test_name
> +	    }
> +	}
> +	# Select main inferior
> +	gdb_test "inferior 1" "Switching to inferior 1.*"
> +	gdb_continue_to_end "" "continue -a" 1
> +    }
> +}
> +
> +do_test
  
Pedro Alves July 28, 2023, 6:04 p.m. UTC | #2
On 2023-07-28 19:01, Pedro Alves wrote:
> I would rename "stopped_threads" -> stopped_gdb_threads.

Sigh.  I meant stopped_gpu_threads.  Finger memory...
  

Patch

diff --git a/gdb/amd-dbgapi-target.c b/gdb/amd-dbgapi-target.c
index 5565cf907fa..371f0683754 100644
--- a/gdb/amd-dbgapi-target.c
+++ b/gdb/amd-dbgapi-target.c
@@ -1255,8 +1255,10 @@  amd_dbgapi_target::wait (ptid_t ptid, struct target_waitstatus *ws,
   std::tie (event_ptid, gpu_waitstatus) = consume_one_event (ptid.pid ());
   if (event_ptid == minus_one_ptid)
     {
-      /* Drain the events from the amd_dbgapi and preserve the ordering.  */
-      process_event_queue ();
+      /* Drain the events for the current inferior from the amd_dbgapi and
+	 preserve the ordering.  */
+      auto info = get_amd_dbgapi_inferior_info (current_inferior ());
+      process_event_queue (info->process_id, AMD_DBGAPI_EVENT_KIND_NONE);
 
       std::tie (event_ptid, gpu_waitstatus) = consume_one_event (ptid.pid ());
       if (event_ptid == minus_one_ptid)
diff --git a/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
new file mode 100644
index 00000000000..828dc0cf7d4
--- /dev/null
+++ b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.cpp
@@ -0,0 +1,111 @@ 
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2023 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <unistd.h>
+#include <hip/hip_runtime.h>
+
+#define CHECK(cmd)                                                           \
+  {                                                                          \
+    hipError_t error = cmd;                                                  \
+    if (error != hipSuccess)                                                 \
+      {                                                                      \
+	fprintf (stderr, "error: '%s'(%d) at %s:%d\n",                       \
+		 hipGetErrorString (error), error, __FILE__, __LINE__);      \
+	exit (EXIT_FAILURE);                                                 \
+      }                                                                      \
+  }
+
+__global__ void
+kern ()
+{
+  asm ("s_sleep 1");
+}
+
+/* Spawn one child process per detected GPU.  */
+
+static int
+parent (int argc, char **argv)
+{
+  /* Identify how many GPUs we have, and spawn one child for each.  */
+  int num_devices;
+  CHECK (hipGetDeviceCount (&num_devices));
+
+  /* Break here.  */
+
+  for (int i = 0; i < num_devices; i++)
+    {
+      char n[32] = {};
+      snprintf (n, sizeof (n), "%d", i);
+      pid_t pid = fork ();
+      if (pid == -1)
+	{
+	  perror ("Fork failed");
+	  return -1;
+	}
+
+      if (pid == 0)
+	{
+	  /* Exec to be fore the child to re-initialize the ROCm runtime.  */
+	  if (execl (argv[0], argv[0], n) == -1)
+	    {
+	      perror ("Failed to exec");
+	      return -1;
+	    }
+	}
+    }
+
+  /* Wait for all children.  */
+  int ws;
+  pid_t ret;
+  do
+    ret = waitpid (-1, &ws, 0);
+  while (!(ret == -1 && errno == ECHILD));
+
+  /* Last break here.  */
+  return 0;
+}
+
+static int
+child (int argc, char **argv)
+{
+  int dev_number;
+  if (sscanf (argv[1], "%d", &dev_number) != 1)
+    {
+      fprintf (stderr, "Invalid argument \"%s\"\n", argv[1]);
+      return -1;
+    }
+
+  CHECK (hipSetDevice (dev_number));
+  kern<<<1, 1>>> ();
+  hipDeviceSynchronize ();
+  return 0;
+}
+
+/* When called with no argument, identify how many AMDGPU devices are
+   available on the system and spawn one worker process per GPU.  If a
+   command-line argument is provided, it is the index of the GPU to use.  */
+
+int
+main (int argc, char **argv)
+{
+  if (argc <= 1)
+    return parent (argc, argv);
+  else
+    return child (argc, argv);
+}
diff --git a/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
new file mode 100644
index 00000000000..3e8934645e6
--- /dev/null
+++ b/gdb/testsuite/gdb.rocm/multi-inferior-gpu.exp
@@ -0,0 +1,86 @@ 
+# Copyright 2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# This test checks that GDB can debug multiple inferior which uses all
+# the ROCm runtime.
+
+load_lib rocm.exp
+
+standard_testfile .cpp
+
+require allow_hipcc_tests
+require hip_devices_support_debug_multi_process
+
+if {[build_executable "failed to prepare" $testfile $srcfile {debug hip}]} {
+    return
+}
+
+proc do_test {} {
+    clean_restart $::binfile
+    gdb_test_no_output "set non-stop on"
+    gdb_test_no_output "set detach-on-fork off"
+    gdb_test_no_output "set follow-fork parent"
+
+    with_rocm_gpu_lock {
+	gdb_breakpoint [gdb_get_line_number "Break here"]
+	gdb_breakpoint kern allow-pending
+	gdb_breakpoint [gdb_get_line_number "Last break here"]
+
+	# Run intil we reach the first breakpoint where we can figure
+	# out how many children will be spawned.
+	gdb_test "run" "hit Breakpoint.*"
+
+	set num_childs [get_integer_valueof "num_devices" 0]
+	set bp_to_see $num_childs
+	set stopped_threads [list]
+
+	gdb_test_multiple "continue -a" "continue to gpu breakpoints" {
+	    -re "Thread ($::decimal\.$::decimal)\[^\r\n\]* hit Breakpoint\[^\r\n\]*, kern \(\)\[^\r\n\]*\r\n" {
+		lappend stopped_threads $expect_out(1,string)
+		incr bp_to_see -1
+		if {$bp_to_see != 0} {
+		    exp_continue
+		} else {
+		    pass $gdb_test_name
+		}
+	    }
+	    -re "^\[^\r\n\]*\r\n" {
+		exp_continue
+	    }
+	}
+
+	# Continue all the children processes until they exit.
+	foreach thread $stopped_threads {
+	    set infnumber [lindex [split $thread .] 0]
+	    gdb_test "thread $thread" "Switching to thread.*"
+	    gdb_test_multiple "continue $thread" "" {
+		-re "\\\[Inferior $infnumber \[^\n\r\]* exited normally\\]\r\n$::gdb_prompt " {
+		    pass $gdb_test_name
+		}
+	    }
+	}
+
+	gdb_test_multiple "" "reach breakpoint in main" {
+	    -re "hit Breakpoint.*parent" {
+		pass $gdb_test_name
+	    }
+	}
+	# Select main inferior
+	gdb_test "inferior 1" "Switching to inferior 1.*"
+	gdb_continue_to_end "" "continue -a" 1
+    }
+}
+
+do_test