[PATCH/7.10,2/2] gdbserver: Fix non-stop / fork / step-over issues
Commit Message
Ref: https://sourceware.org/ml/gdb-patches/2015-07/msg00868.html
This adds a test in which a multithreaded program has several threads
continuously forking, while another thread continuously steps over a
breakpoint.
This exposes several intertwined issues, which this patch addresses:
- When we're stopping and suspending threads, some thread may fork,
and we missed setting its suspend count to 1, like we do when a new
clone/thread is detected. When we next unsuspend threads, the fork
child's suspend count goes below 0, which is bogus and fails an
assertion.
- If a step-over is cancelled because a signal arrives, but gdb turns
out not to be interested in the signal, we pass the signal straight
back to the inferior.  However, we missed that we need to
re-increment the suspend counts of all the other threads that had
been paused for the step-over.  As a result, those threads ended up
stuck stopped indefinitely.
- OTOH, if a thread exits the whole process just while we're stopping
threads to start a step-over, gdbserver crashes or hits several
assertions.
- If a detach request comes in just while gdbserver is handling a
step-over (in the test at hand, this is GDB detaching the fork
child), gdbserver internal errors in stabilize_thread's helpers,
which assert that all thread's suspend counts are 0 (otherwise we
wouldn't be able to move threads out of the jump pads). The
suspend counts aren't 0 while a step-over is in progress, because
all threads but the one stepping past the breakpoint must remain
paused until the step-over finishes and the breakpoint can be
reinserted.
- Occasionally, we see "BAD - reinserting but not stepping." being
output (from within linux_resume_one_lwp_throw).  That happened
because GDB pokes memory while gdbserver is busy with a step-over;
poking memory suspends threads and then re-resumes them with
proceed_one_lwp, which was missing one of the reasons to tell
linux_resume_one_lwp that the thread should be set back to stepping.
- In a couple places, we were resuming threads that are meant to be
suspended. E.g., when a vCont;c/s request for thread B comes in
just while gdbserver is stepping thread A past a breakpoint. The
resume for thread B must be deferred until the step-over finishes.
- The test runs with both "set detach-on-fork" on and off.  When off,
it exercises the case of GDB detaching the fork child explicitly.
When on, it exercises the case of gdb resuming the child
explicitly.  In the "off" case, gdb seems to become exponentially
slower as new inferiors are created.  This is _very_ noticeable, as
with only 100 inferiors gdb is crawling already, which makes the
test take quite a while to run.  For that reason, I've disabled the
"off" variant for now.
- The test fails occasionally with the native target, because several
code paths aren't expecting that a stopped thread may disappear
(the test has the leader thread of the parent process exit the
whole process just while gdb is handling an event for a non-leader
thread).  E.g.:
[Thread 0x7ffff67bd700 (LWP 11210) exited]
Cannot find user-level thread for LWP 11217: generic error
(gdb) FAIL: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: inferior 1 exited
[Thread 0x7ffff7fc1740 (LWP 11203) exited]
info threads
Id Target Id Frame
12 Thread 0x7ffff2fb6700 (LWP 11217) (running)
The current thread <Thread ID 1> has terminated. See `help thread'.
(gdb) FAIL: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: no threads left
I fixed some of these issues recently, but there's a lot more to
do. Fixing that one just exposes other similar problems elsewhere.
Meanwhile, I've filed PR18749 and kfailed the test for native.
gdb/ChangeLog:
2015-07-31 Pedro Alves <palves@redhat.com>
PR gdb/18749
* target/waitstatus.h (enum target_stop_reason)
<TARGET_STOPPED_BY_SINGLE_STEP>: New value.
gdb/gdbserver/ChangeLog:
2015-07-31 Pedro Alves <palves@redhat.com>
PR gdb/18749
* linux-low.c (handle_extended_wait): Set the fork child's suspend
count if stopping and suspending threads.
(check_stopped_by_breakpoint): If stopped by trace, set the LWP's
stop reason to TARGET_STOPPED_BY_SINGLE_STEP.
(linux_detach): Complete an ongoing step-over.
(lwp_suspended_inc, lwp_suspended_decr): New functions. Use
throughout.
(resume_stopped_resumed_lwps): Don't resume a suspended thread.
(linux_wait_1): If passing a signal to the inferior after
finishing a step-over, unsuspend and re-resume all lwps. If we
see a single-step event but the thread should be continuing, don't
pass the trap to gdb.
(stuck_in_jump_pad_callback, move_out_of_jump_pad_callback): Use
internal_error instead of gdb_assert.
(enqueue_pending_signal): New function.
(check_ptrace_stopped_lwp_gone): Add debug output.
(start_step_over): Handle the case of the LWP we're about to
step-over exiting. Use internal_error instead of gdb_assert.
(complete_ongoing_step_over): New function.
(linux_resume_one_thread): Don't resume a suspended thread.
(proceed_one_lwp): If the LWP is stepping over a breakpoint, reset
it stepping.
(proceed_all_lwps): If a step-over fails to start, look for
another thread that might need a step-over.
gdb/testsuite/ChangeLog:
2015-07-31 Pedro Alves <palves@redhat.com>
PR gdb/18749
* gdb.threads/fork-plus-threads-2.exp: New file.
* gdb.threads/fork-plus-threads-2.c: New file.
---
gdb/gdbserver/linux-low.c | 267 +++++++++++++++++++---
gdb/target/waitstatus.h | 5 +-
gdb/testsuite/gdb.threads/fork-plus-threads-2.c | 129 +++++++++++
gdb/testsuite/gdb.threads/fork-plus-threads-2.exp | 116 ++++++++++
4 files changed, 480 insertions(+), 37 deletions(-)
create mode 100644 gdb/testsuite/gdb.threads/fork-plus-threads-2.c
create mode 100644 gdb/testsuite/gdb.threads/fork-plus-threads-2.exp
Comments
On 7/31/2015 10:03 AM, Pedro Alves wrote:
> Ref: https://sourceware.org/ml/gdb-patches/2015-07/msg00868.html
>
> This adds a test that has a multithreaded program have several threads
> continuously fork, while another thread continuously steps over a
> breakpoint.
Wow.
>
> This exposes several intertwined issues, which this patch addresses:
>
Thanks again for digging into these issues.
---snip---
>
> - The test runs with both "set detach-on-fork" on and off. When off,
> it exercises the case of GDB detaching the fork child explicitly.
> When on, it exercises the case of gdb resuming the child
> explicitly. In the "off" case, gdb seems to exponentially become
> slower as new inferiors are created. This is _very_ noticeable as
> with only 100 inferiors gdb is crawling already, which makes the
> test take quite a bit to run. For that reason, I've disabled the
> "off" variant for now.
Bummer. I was going to ask whether this use-case justifies disabling
the feature completely, but since the whole follow-fork mechanism is of
limited usefulness without exec events, the question is likely moot
anyway.
Do you have any thoughts about whether this slowdown is caused by the
fork event machinery or by some more general gdbserver multiple
inferior problem?
Are you planning to look at the slowdown? Can I help out? I have an
interest in having detach-on-fork 'off' enabled. :-S
thanks
--Don
On 07/31/2015 07:04 PM, Don Breazeal wrote:
> On 7/31/2015 10:03 AM, Pedro Alves wrote:
>> Ref: https://sourceware.org/ml/gdb-patches/2015-07/msg00868.html
>>
>> This adds a test that has a multithreaded program have several threads
>> continuously fork, while another thread continuously steps over a
>> breakpoint.
>
> Wow.
>
If gdb survives these stress tests, it can hold up to anything. :-)
>> - The test runs with both "set detach-on-fork" on and off. When off,
>> it exercises the case of GDB detaching the fork child explicitly.
>> When on, it exercises the case of gdb resuming the child
>> explicitly. In the "off" case, gdb seems to exponentially become
>> slower as new inferiors are created. This is _very_ noticeable as
>> with only 100 inferiors gdb is crawling already, which makes the
>> test take quite a bit to run. For that reason, I've disabled the
>> "off" variant for now.
>
> Bummer. I was going to ask whether this use-case justifies disabling
> the feature completely,
Note that this, being a stress test, may not be representative of a
real workload.  I'm assuming most real use cases won't be so
demanding.
> but since the whole follow-fork mechanism is of
> limited usefulness without exec events, the question is likely moot
> anyway.
Yeah.  There are use cases for fork alone, but combined with exec it
is much more useful.  I'll take a look at your exec patches soon; I'm
very much looking forward to having that in.
>
> Do you have any thoughts about whether this slowdown is caused by the
> fork event machinery or by some more general gdbserver multiple
> inferior problem?
Not sure.
The number of forks live at any given time in the test is constant
-- each thread forks and waits for the child to exit before it forks
again.  But if you run the test, you see that the first
few inferiors are created quickly, and then, as the inferior number
grows, new inferiors are added at a slower and slower pace.
I'd suspect the problem to be on the gdb side.  But the test
fails on native, so it's not easy to get gdbserver out of
the picture for a quick check.
It feels like some data structures are leaking, but
still reachable, and then a bunch of linear walks end up costing
more and more.  I once added the prune_inferiors call at the end
of normal_stop to handle a slowdown like this.  It feels like
something similar to that.
With detach "on" alone, it takes under 2 seconds against gdbserver
for me.
If I remove the breakpoint from the test, and reenable both detach on/off,
it ends in around 10-20 seconds.  That's still a lot slower
than "detach on" alone, but gdb has to insert/remove breakpoints in the
child and load its symbols (well, it could avoid that, given the
child is a clone of the parent, but we're not there yet), so
it's not entirely unexpected.
But pristine, with both detach on/off, it takes almost 2 minutes
here.  (And each thread only spawns 10 forks; my first attempt
was shooting for 100 :-))
I also suspected all the thread stopping/restarting that gdbserver
does, both to step over breakpoints and to insert/remove breakpoints.
But then again, with detach on there are 12 threads, and with detach
off at most 22.  So that'd be odd.  Unless the data structure
leaks are on gdbserver's side.  But then I'd think that tests
like attach-many-short-lived-threads.exp or non-stop-fair-events.exp
would have already exposed something like that.
>
> Are you planning to look at the slowdown?
Nope, at least not in the immediate future.
> Can I help out? I have an
> interest in having detach-on-fork 'off' enabled. :-S
That'd be much appreciated. :-) At least identifying the
culprit would be very nice. I too would love for our
multi-process support to be rock solid.
Thanks,
Pedro Alves
Pedro Alves <palves@redhat.com> writes:
> I fixed some of these issues recently, but there's a lot more to
> do. Fixing that one just exposes other similar problems elsewhere.
> Meanwhile, I've filed PR18749 and kfailed the test for native.
This test case also exposes the issue on arm-linux with gdbserver:
(gdb) PASS: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: continue &
[New Thread 29905]
[New Thread 29900]
[New Thread 29895]
[New Thread 29898]
[New Thread 29902]
[New Thread 29896]
[New Thread 29903]
[New Thread 29901]
[New Thread 29899]
[New Thread 29904]
[New Thread 29897]
Error in testing breakpoint condition:
Cannot access memory at address 0x11094
FAIL: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: inferior 1 exited
Remote debugging from host 127.0.0.1

Child exited with status 0
GDBserver exiting
../../binutils-gdb/gdb/thread.c:936: internal-error: finish_thread_state: Assertion `tp' failed.
A problem internal to GDB has been detected,
further debugging may prove unreliable.
Quit this debugging session? (y or n) monitor exit
Please answer y or n.
../../binutils-gdb/gdb/thread.c:936: internal-error: finish_thread_state: Assertion `tp' failed.
On 08/03/2015 04:14 PM, Yao Qi wrote:
> Pedro Alves <palves@redhat.com> writes:
>
>> I fixed some of these issues recently, but there's a lot more to
>> do. Fixing that one just exposes other similar problems elsewhere.
>> Meanwhile, I've filed PR18749 and kfailed the test for native.
>
> This test case also exposes the issue on arm-linux with gdbserver,
>
> (gdb) PASS: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: continue &
> [New Thread 29905]
> [New Thread 29900]
> [New Thread 29895]
> [New Thread 29898]
> [New Thread 29902]
> [New Thread 29896]
> [New Thread 29903]
> [New Thread 29901]
> [New Thread 29899]
> [New Thread 29904]
> [New Thread 29897]
> Error in testing breakpoint condition:
> Cannot access memory at address 0x11094
> FAIL: gdb.threads/fork-plus-threads-2.exp: detach-on-fork=on: inferior 1 exited
> Remote debugging from host 127.0.0.1
>
> Child exited with status 0
> GDBserver exiting
> ../../binutils-gdb/gdb/thread.c:936: internal-error: finish_thread_state: Assertion `tp' failed.
> A problem internal to GDB has been detected,
> further debugging may prove unreliable.
> Quit this debugging session? (y or n) monitor exit
> Please answer y or n.
> ../../binutils-gdb/gdb/thread.c:936: internal-error: finish_thread_state: Assertion `tp' failed.
I've seen this too while developing the patch.  I saw it happen when the
connection is abruptly closed while there's a finish_thread_state cleanup
installed (that is, while gdb is handling an event, within
fetch_inferior_event).  The connection close deletes all threads, and the
cleanup then wants to finish the state of a now-nonexistent thread.  See
[palves/fork_stale_running] on my github.
Thanks,
Pedro Alves
On 7/31/2015 12:02 PM, Pedro Alves wrote:
> On 07/31/2015 07:04 PM, Don Breazeal wrote:
>> On 7/31/2015 10:03 AM, Pedro Alves wrote:
>>> Ref: https://sourceware.org/ml/gdb-patches/2015-07/msg00868.html
>>>
>>> This adds a test that has a multithreaded program have several threads
>>> continuously fork, while another thread continuously steps over a
>>> breakpoint.
>>
>> Wow.
>>
>
> If gdb survives these stress tests, it can hold up to anything. :-)
>
>>> - The test runs with both "set detach-on-fork" on and off. When off,
>>> it exercises the case of GDB detaching the fork child explicitly.
>>> When on, it exercises the case of gdb resuming the child
>>> explicitly. In the "off" case, gdb seems to exponentially become
>>> slower as new inferiors are created. This is _very_ noticeable as
>>> with only 100 inferiors gdb is crawling already, which makes the
>>> test take quite a bit to run. For that reason, I've disabled the
>>> "off" variant for now.
>>
>> Bummer. I was going to ask whether this use-case justifies disabling
>> the feature completely,
>
> Note that this being a stress test, may not be representative of a
> real work load. I'm assuming most real use cases won't be
> so demanding.
>
>> but since the whole follow-fork mechanism is of
>> limited usefulness without exec events, the question is likely moot
>> anyway.
>
> Yeah. There are use cases with fork alone, but combined with exec is
> much more useful. I'll take a look at your exec patches soon; I'm very
> much looking forward to have that in.
>
>>
>> Do you have any thoughts about whether this slowdown is caused by the
>> fork event machinery or by some more general gdbserver multiple
>> inferior problem?
>
> Not sure.
>
> The number of forks live at any given time in the test is constant
> -- each thread forks and waits for the child to exit before it forks
> again.  But if you run the test, you see that the first
> few inferiors are created quickly, and then, as the inferior number
> grows, new inferiors are added at a slower and slower pace.
> I'd suspect the problem to be on the gdb side.  But the test
> fails on native, so it's not easy to get gdbserver out of
> the picture for a quick check.
>
> It feels like some data structures are leaking, but
> still reachable, and then a bunch of linear walks end up costing
> more and more.  I once added the prune_inferiors call at the end
> of normal_stop to handle a slowdown like this.  It feels like
> something similar to that.
>
> With detach "on" alone, it takes under 2 seconds against gdbserver
> for me.
>
> If I remove the breakpoint from the test, and reenable both detach on/off,
> it ends in around 10-20 seconds.  That's still a lot slower
> than "detach on" alone, but gdb has to insert/remove breakpoints in the
> child and load its symbols (well, it could avoid that, given the
> child is a clone of the parent, but we're not there yet), so
> it's not entirely unexpected.
>
> But pristine, with both detach on/off, it takes almost 2 minutes
> here. ( and each thread only spawns 10 forks, my first attempt
> was shooting for 100 :-) )
>
> I also suspected all the thread stop/restarting gdbserver does
> both to step over breakpoints, and to insert/remove breakpoints.
> But then again with detach on, there are 12 threads, with detach
> off, at most 22. So that'd be odd. Unless the data structure
> leaks are on gdbserver's side. But then I'd think that tests
> like attach-many-short-lived-threads.exp or non-stop-fair-events.exp
> would have already exposed something like that.
>
>>
>> Are you planning to look at the slowdown?
>
> Nope, at least not in the immediate future.
>
>> Can I help out? I have an
>> interest in having detach-on-fork 'off' enabled. :-S
>
> That'd be much appreciated. :-) At least identifying the
> culprit would be very nice. I too would love for our
> multi-process support to be rock solid.
>
Hi Pedro,
I spent some time looking at this, and I found at least one of the
culprits affecting performance.  Without going through the details of
how I arrived at this conclusion: if I insert
gdb_test_no_output "set sysroot /"
just before the call to runto_main, it cuts the wall clock time by at
least half.  Running with just the 'detach-on-fork=off' case, it went
from 41 secs to 20 secs on one system, and from 1:21 to 0:27 and from
1:50 to 0:41 on another.  Successive runs without set sysroot resulted
in successively decreasing run times, presumably due to filesystem
caching.
I ran strace -cw to collect wall clock time (strace 4.9 and above
support '-w' for wall time), and saw this:
Without set sysroot /:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
25.90 14.620339 4 3666141 202 ptrace
25.21 14.229421 81 175135 57 select
14.42 8.139715 13 641874 7 write
10.65 6.012699 4 1397576 670469 read
7.52 4.245209 4 1205014 104 wait4
4.90 2.765111 3 847985 rt_sigprocmask
With set sysroot /:
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
32.91 6.885008 148 46665 43 select
21.59 4.516311 4 1158530 202 ptrace
11.15 2.332491 13 184229 2 write
9.07 1.897401 4 422122 203552 read
6.77 1.415918 42 34076 53 open
6.27 1.312490 3 378702 103 wait4
4.00 0.835731 3 262195 rt_sigprocmask
The number of calls and the times for each case varied from run to
run, but the relative proportions stayed reasonably similar.  I'm not
sure why the unmodified case has so many more calls to ptrace, but it
was not an anomaly; I saw this in multiple runs.
Note that I used the original version of the test that you posted, not
the update on your branch. Also, I didn't make the set sysroot command
conditional on running with a remote or gdbserver target, since it was
just an experiment.
Do you think there is more to the slowdown than this? As you said
above, detach-on-fork 'off' is going to take longer than 'on'. It may
be a little while before I can get back to this, so I thought I'd share
what I found. Let me know if you think this change will be sufficient.
thanks
--Don
@@ -268,6 +268,8 @@ static int lwp_is_marked_dead (struct lwp_info *lwp);
static void proceed_all_lwps (void);
static int finish_step_over (struct lwp_info *lwp);
static int kill_lwp (unsigned long lwpid, int signo);
+static void enqueue_pending_signal (struct lwp_info *lwp, int signal, siginfo_t *info);
+static void complete_ongoing_step_over (void);
/* When the event-loop is doing a step-over, this points at the thread
being stepped. */
@@ -486,6 +488,15 @@ handle_extended_wait (struct lwp_info *event_lwp, int wstat)
child_thr->last_resume_kind = resume_stop;
child_thr->last_status.kind = TARGET_WAITKIND_STOPPED;
+ /* If we're suspending all threads, leave this one suspended
+ too. */
+ if (stopping_threads == STOPPING_AND_SUSPENDING_THREADS)
+ {
+ if (debug_threads)
+ debug_printf ("HEW: leaving child suspended\n");
+ child_lwp->suspended = 1;
+ }
+
parent_proc = get_thread_process (event_thr);
child_proc->attached = parent_proc->attached;
clone_all_breakpoints (&child_proc->breakpoints,
@@ -685,6 +696,8 @@ check_stopped_by_breakpoint (struct lwp_info *lwp)
debug_printf ("CSBB: %s stopped by trace\n",
target_pid_to_str (ptid_of (thr)));
}
+
+ lwp->stop_reason = TARGET_STOPPED_BY_SINGLE_STEP;
}
}
}
@@ -1315,6 +1328,11 @@ linux_detach (int pid)
if (process == NULL)
return -1;
+ /* As there's a step over already in progress, let it finish first,
+ otherwise nesting a stabilize_threads operation on top gets real
+ messy. */
+ complete_ongoing_step_over ();
+
/* Stop all threads before detaching. First, ptrace requires that
the thread is stopped to sucessfully detach. Second, thread_db
may need to uninstall thread event breakpoints from memory, which
@@ -1683,6 +1701,39 @@ not_stopped_callback (struct inferior_list_entry *entry, void *arg)
return 0;
}
+/* Increment LWP's suspend count. */
+
+static void
+lwp_suspended_inc (struct lwp_info *lwp)
+{
+ lwp->suspended++;
+
+ if (debug_threads && lwp->suspended > 4)
+ {
+ struct thread_info *thread = get_lwp_thread (lwp);
+
+ debug_printf ("LWP %ld has a suspiciously high suspend count,"
+ " suspended=%d\n", lwpid_of (thread), lwp->suspended);
+ }
+}
+
+/* Decrement LWP's suspend count. */
+
+static void
+lwp_suspended_decr (struct lwp_info *lwp)
+{
+ lwp->suspended--;
+
+ if (lwp->suspended < 0)
+ {
+ struct thread_info *thread = get_lwp_thread (lwp);
+
+ internal_error (__FILE__, __LINE__,
+ "unsuspend LWP %ld, suspended=%d\n", lwpid_of (thread),
+ lwp->suspended);
+ }
+}
+
/* This function should only be called if the LWP got a SIGTRAP.
Handle any tracepoint steps or hits. Return true if a tracepoint
@@ -1700,7 +1751,7 @@ handle_tracepoints (struct lwp_info *lwp)
uninsert tracepoints. To do this, we temporarily pause all
threads, unpatch away, and then unpause threads. We need to make
sure the unpausing doesn't resume LWP too. */
- lwp->suspended++;
+ lwp_suspended_inc (lwp);
/* And we need to be sure that any all-threads-stopping doesn't try
to move threads out of the jump pads, as it could deadlock the
@@ -1716,7 +1767,7 @@ handle_tracepoints (struct lwp_info *lwp)
actions. */
tpoint_related_event |= tracepoint_was_hit (tinfo, lwp->stop_pc);
- lwp->suspended--;
+ lwp_suspended_decr (lwp);
gdb_assert (lwp->suspended == 0);
gdb_assert (!stabilizing_threads || lwp->collecting_fast_tracepoint);
@@ -2176,10 +2227,13 @@ linux_low_filter_event (int lwpid, int wstat)
/* Note that TRAP_HWBKPT can indicate either a hardware breakpoint
or hardware watchpoint. Check which is which if we got
- TARGET_STOPPED_BY_HW_BREAKPOINT. */
+ TARGET_STOPPED_BY_HW_BREAKPOINT. Likewise, we may have single
+ stepped an instruction that triggered a watchpoint. In that
+ case, on some architectures (such as x86), instead of
+ TRAP_HWBKPT, si_code indicates TRAP_TRACE, and we need to check
+ the debug registers separately. */
if (WIFSTOPPED (wstat) && WSTOPSIG (wstat) == SIGTRAP
- && (child->stop_reason == TARGET_STOPPED_BY_NO_REASON
- || child->stop_reason == TARGET_STOPPED_BY_HW_BREAKPOINT))
+ && child->stop_reason != TARGET_STOPPED_BY_SW_BREAKPOINT)
check_stopped_by_watchpoint (child);
if (!have_stop_pc)
@@ -2238,6 +2292,7 @@ resume_stopped_resumed_lwps (struct inferior_list_entry *entry)
struct lwp_info *lp = get_thread_lwp (thread);
if (lp->stopped
+ && !lp->suspended
&& !lp->status_pending_p
&& thread->last_resume_kind != resume_stop
&& thread->last_status.kind == TARGET_WAITKIND_IGNORE)
@@ -2608,9 +2663,7 @@ unsuspend_one_lwp (struct inferior_list_entry *entry, void *except)
if (lwp == except)
return 0;
- lwp->suspended--;
-
- gdb_assert (lwp->suspended >= 0);
+ lwp_suspended_decr (lwp);
return 0;
}
@@ -2703,7 +2756,7 @@ linux_stabilize_threads (void)
lwp = get_thread_lwp (current_thread);
/* Lock it. */
- lwp->suspended++;
+ lwp_suspended_inc (lwp);
if (ourstatus.value.sig != GDB_SIGNAL_0
|| current_thread->last_resume_kind == resume_stop)
@@ -3089,8 +3142,25 @@ linux_wait_1 (ptid_t ptid,
info_p = &info;
else
info_p = NULL;
- linux_resume_one_lwp (event_child, event_child->stepping,
- WSTOPSIG (w), info_p);
+
+ if (step_over_finished)
+ {
+ /* We cancelled this thread's step-over above. We still
+ need to unsuspend all other LWPs, and set them back
+ running again while the signal handler runs. */
+ unsuspend_all_lwps (event_child);
+
+ /* Enqueue the pending signal info so that proceed_all_lwps
+ doesn't lose it. */
+ enqueue_pending_signal (event_child, WSTOPSIG (w), info_p);
+
+ proceed_all_lwps ();
+ }
+ else
+ {
+ linux_resume_one_lwp (event_child, event_child->stepping,
+ WSTOPSIG (w), info_p);
+ }
return ignore_event (ourstatus);
}
@@ -3111,8 +3181,15 @@ linux_wait_1 (ptid_t ptid,
|| (current_thread->last_resume_kind == resume_step
&& !in_step_range)
|| event_child->stop_reason == TARGET_STOPPED_BY_WATCHPOINT
- || (!step_over_finished && !in_step_range
- && !bp_explains_trap && !trace_event)
+ || (!in_step_range
+ && !bp_explains_trap
+ && !trace_event
+ /* A step-over was finished just now? */
+ && !step_over_finished
+ /* A step-over had been finished previously,
+ and the single-step was left pending? */
+ && !(current_thread->last_resume_kind == resume_continue
+ && event_child->stop_reason == TARGET_STOPPED_BY_SINGLE_STEP))
|| (gdb_breakpoint_here (event_child->stop_pc)
&& gdb_condition_true_at_breakpoint (event_child->stop_pc)
&& gdb_no_commands_at_breakpoint (event_child->stop_pc))
@@ -3460,7 +3537,7 @@ suspend_and_send_sigstop_callback (struct inferior_list_entry *entry,
if (lwp == except)
return 0;
- lwp->suspended++;
+ lwp_suspended_inc (lwp);
return send_sigstop_callback (entry, except);
}
@@ -3562,7 +3639,12 @@ stuck_in_jump_pad_callback (struct inferior_list_entry *entry, void *data)
struct thread_info *thread = (struct thread_info *) entry;
struct lwp_info *lwp = get_thread_lwp (thread);
- gdb_assert (lwp->suspended == 0);
+ if (lwp->suspended != 0)
+ {
+ internal_error (__FILE__, __LINE__,
+ "LWP %ld is suspended, suspended=%d\n",
+ lwpid_of (thread), lwp->suspended);
+ }
gdb_assert (lwp->stopped);
/* Allow debugging the jump pad, gdb_collect, etc.. */
@@ -3581,7 +3663,12 @@ move_out_of_jump_pad_callback (struct inferior_list_entry *entry)
struct lwp_info *lwp = get_thread_lwp (thread);
int *wstat;
- gdb_assert (lwp->suspended == 0);
+ if (lwp->suspended != 0)
+ {
+ internal_error (__FILE__, __LINE__,
+ "LWP %ld is suspended, suspended=%d\n",
+ lwpid_of (thread), lwp->suspended);
+ }
gdb_assert (lwp->stopped);
wstat = lwp->status_pending_p ? &lwp->status_pending : NULL;
@@ -3610,7 +3697,7 @@ move_out_of_jump_pad_callback (struct inferior_list_entry *entry)
linux_resume_one_lwp (lwp, 0, 0, NULL);
}
else
- lwp->suspended++;
+ lwp_suspended_inc (lwp);
}
static int
@@ -3665,6 +3752,24 @@ stop_all_lwps (int suspend, struct lwp_info *except)
}
}
+/* Enqueue one signal in the chain of signals which need to be
+ delivered to this process on next resume. */
+
+static void
+enqueue_pending_signal (struct lwp_info *lwp, int signal, siginfo_t *info)
+{
+ struct pending_signals *p_sig;
+
+ p_sig = xmalloc (sizeof (*p_sig));
+ p_sig->prev = lwp->pending_signals;
+ p_sig->signal = signal;
+ if (info == NULL)
+ memset (&p_sig->info, 0, sizeof (siginfo_t));
+ else
+ memcpy (&p_sig->info, info, sizeof (siginfo_t));
+ lwp->pending_signals = p_sig;
+}
+
/* Resume execution of LWP. If STEP is nonzero, single-step it. If
SIGNAL is nonzero, give it that signal. */
@@ -3906,6 +4011,10 @@ check_ptrace_stopped_lwp_gone (struct lwp_info *lp)
/* Don't assume anything if /proc/PID/status can't be read. */
if (linux_proc_pid_is_trace_stopped_nowarn (lwpid_of (thread)) == 0)
{
+ if (debug_threads)
+ debug_printf ("lwp %ld exited while being resumed\n",
+ lwpid_of (thread));
+
lp->stop_reason = TARGET_STOPPED_BY_NO_REASON;
lp->status_pending_p = 0;
return 1;
@@ -4189,16 +4298,36 @@ static int
start_step_over (struct lwp_info *lwp)
{
struct thread_info *thread = get_lwp_thread (lwp);
+ ptid_t thread_ptid;
struct thread_info *saved_thread;
CORE_ADDR pc;
int step;
+ thread_ptid = ptid_of (thread);
+
if (debug_threads)
debug_printf ("Starting step-over on LWP %ld. Stopping all threads\n",
lwpid_of (thread));
stop_all_lwps (1, lwp);
- gdb_assert (lwp->suspended == 0);
+
+ /* Re-find the LWP as it may have exited. */
+ lwp = find_lwp_pid (thread_ptid);
+ if (lwp == NULL || lwp_is_marked_dead (lwp))
+ {
+ if (debug_threads)
+ debug_printf ("Step-over thread died "
+ "(another thread exited the process?).\n");
+ unstop_all_lwps (1, lwp);
+ return 0;
+ }
+
+ if (lwp->suspended != 0)
+ {
+ internal_error (__FILE__, __LINE__,
+ "LWP %ld suspended=%d\n", lwpid_of (thread),
+ lwp->suspended);
+ }
if (debug_threads)
debug_printf ("Done stopping all threads for step-over.\n");
@@ -4229,7 +4358,19 @@ start_step_over (struct lwp_info *lwp)
current_thread = saved_thread;
- linux_resume_one_lwp (lwp, step, 0, NULL);
+ TRY
+ {
+ linux_resume_one_lwp_throw (lwp, step, 0, NULL);
+ }
+ CATCH (ex, RETURN_MASK_ERROR)
+ {
+ unstop_all_lwps (1, lwp);
+
+ if (!check_ptrace_stopped_lwp_gone (lwp))
+ throw_exception (ex);
+ return 0;
+ }
+ END_CATCH
/* Require next event from this LWP. */
step_over_bkpt = thread->entry.id;
@@ -4270,6 +4411,39 @@ finish_step_over (struct lwp_info *lwp)
return 0;
}
+/* If there's a step over in progress, wait until all threads stop
+ (that is, until the stepping thread finishes its step), and
+ unsuspend all lwps. The stepping thread ends with its status
+ pending, which is processed later when we get back to processing
+ events. */
+
+static void
+complete_ongoing_step_over (void)
+{
+ if (!ptid_equal (step_over_bkpt, null_ptid))
+ {
+ struct lwp_info *lwp;
+ int wstat;
+ int ret;
+
+ if (debug_threads)
+ debug_printf ("detach: step over in progress, finish it first\n");
+
+ /* Passing NULL_PTID as filter indicates we want all events to
+ be left pending. Eventually this returns when there are no
+ unwaited-for children left. */
+ ret = linux_wait_for_event_filtered (minus_one_ptid, null_ptid,
+ &wstat, __WALL);
+ gdb_assert (ret == -1);
+
+ lwp = find_lwp_pid (step_over_bkpt);
+ if (lwp != NULL)
+ finish_step_over (lwp);
+ step_over_bkpt = null_ptid;
+ unsuspend_all_lwps (lwp);
+ }
+}
+
/* This function is called once per thread. We check the thread's resume
request, which will tell us whether to resume, step, or leave the thread
stopped; and what signal, if any, it should be sent.
@@ -4344,13 +4518,16 @@ linux_resume_one_thread (struct inferior_list_entry *entry, void *arg)
}
/* If this thread which is about to be resumed has a pending status,
- then don't resume any threads - we can just report the pending
- status. Make sure to queue any signals that would otherwise be
- sent. In all-stop mode, we do this decision based on if *any*
- thread has a pending status. If there's a thread that needs the
- step-over-breakpoint dance, then don't resume any other thread
- but that particular one. */
- leave_pending = (lwp->status_pending_p || leave_all_stopped);
+ then don't resume it - we can just report the pending status.
+ Likewise if it is suspended, because e.g., another thread is
+ stepping past a breakpoint. Make sure to queue any signals that
+ would otherwise be sent. In all-stop mode, we make this decision
+ based on whether *any* thread has a pending status. If there's a
+ thread that needs the step-over-breakpoint dance, then don't
+ resume any other thread but that particular one. */
+ leave_pending = (lwp->suspended
+ || lwp->status_pending_p
+ || leave_all_stopped);
if (!leave_pending)
{
@@ -4533,7 +4710,23 @@ proceed_one_lwp (struct inferior_list_entry *entry, void *except)
send_sigstop (lwp);
}
- step = thread->last_resume_kind == resume_step;
+ if (thread->last_resume_kind == resume_step)
+ {
+ if (debug_threads)
+ debug_printf (" stepping LWP %ld, client wants it stepping\n",
+ lwpid_of (thread));
+ step = 1;
+ }
+ else if (lwp->bp_reinsert != 0)
+ {
+ if (debug_threads)
+ debug_printf (" stepping LWP %ld, reinsert set\n",
+ lwpid_of (thread));
+ step = 1;
+ }
+ else
+ step = 0;
+
linux_resume_one_lwp (lwp, step, 0, NULL);
return 0;
}
@@ -4547,8 +4740,7 @@ unsuspend_and_proceed_one_lwp (struct inferior_list_entry *entry, void *except)
if (lwp == except)
return 0;
- lwp->suspended--;
- gdb_assert (lwp->suspended >= 0);
+ lwp_suspended_decr (lwp);
return proceed_one_lwp (entry, except);
}
@@ -4569,19 +4761,22 @@ proceed_all_lwps (void)
if (supports_breakpoints ())
{
- need_step_over
- = (struct thread_info *) find_inferior (&all_threads,
- need_step_over_p, NULL);
-
- if (need_step_over != NULL)
+ while (1)
{
+ need_step_over
+ = (struct thread_info *) find_inferior (&all_threads,
+ need_step_over_p, NULL);
+
+ if (need_step_over == NULL)
+ break;
+
if (debug_threads)
debug_printf ("proceed_all_lwps: found "
"thread %ld needing a step-over\n",
lwpid_of (need_step_over));
- start_step_over (get_thread_lwp (need_step_over));
- return;
+ if (start_step_over (get_thread_lwp (need_step_over)))
+ return;
}
}
@@ -131,7 +131,10 @@ enum target_stop_reason
TARGET_STOPPED_BY_HW_BREAKPOINT,
/* Stopped by a watchpoint. */
- TARGET_STOPPED_BY_WATCHPOINT
+ TARGET_STOPPED_BY_WATCHPOINT,
+
+ /* Stopped by a single step finishing. */
+ TARGET_STOPPED_BY_SINGLE_STEP
};
/* Prototypes */
new file mode 100644
@@ -0,0 +1,129 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+ Copyright 2015 Free Software Foundation, Inc.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>. */
+
+#include <assert.h>
+#include <pthread.h>
+#include <unistd.h>
+#include <stdio.h>
+#include <sys/types.h>
+#include <sys/wait.h>
+#include <stdlib.h>
+
+/* Number of threads. Each thread continuously spawns a fork and
+ waits for it. If another thread continuously starts a step over,
+ gdbserver should end up finding new forks while suspending
+ threads. */
+#define NTHREADS 10
+
+pthread_t threads[NTHREADS];
+
+pthread_barrier_t barrier;
+
+#define NFORKS 10
+
+/* Used to create a conditional breakpoint that always fails. */
+volatile int zero;
+
+static void *
+thread_forks (void *arg)
+{
+ int i;
+
+ pthread_barrier_wait (&barrier);
+
+ for (i = 0; i < NFORKS; i++)
+ {
+ pid_t pid;
+
+ pid = fork ();
+
+ if (pid > 0)
+ {
+ int status;
+
+ /* Parent. */
+ pid = waitpid (pid, &status, 0);
+ if (pid == -1)
+ {
+ perror ("wait");
+ exit (1);
+ }
+
+ if (!WIFEXITED (status))
+ {
+ printf ("Unexpected wait status 0x%x from child %d\n",
+ status, pid);
+ }
+ }
+ else if (pid == 0)
+ {
+ /* Child. */
+ exit (0);
+ }
+ else
+ {
+ perror ("fork");
+ exit (1);
+ }
+ }
+
+ return NULL;
+}
+
+static void *
+thread_breakpoint (void *arg)
+{
+ pthread_barrier_wait (&barrier);
+
+ while (1)
+ {
+ usleep (1); /* set break here */
+ }
+}
+
+int
+main (void)
+{
+ int i;
+ int ret;
+
+ /* Don't run forever. */
+ alarm (180);
+
+ pthread_barrier_init (&barrier, NULL, NTHREADS + 1);
+
+ /* Start the threads that constantly fork. */
+ for (i = 0; i < NTHREADS; i++)
+ {
+ ret = pthread_create (&threads[i], NULL, thread_forks, NULL);
+ assert (ret == 0);
+ }
+
+ /* Start the thread that constantly hits a conditional breakpoint
+ that needs to be stepped over. */
+ ret = pthread_create (&threads[i], NULL, thread_breakpoint, NULL);
+ assert (ret == 0);
+
+ /* Wait for forking to stop. */
+ for (i = 0; i < NTHREADS; i++)
+ {
+ ret = pthread_join (threads[i], NULL);
+ assert (ret == 0);
+ }
+
+ return 0;
+}
new file mode 100644
@@ -0,0 +1,116 @@
+# Copyright (C) 2015 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# Test that several threads forking while another thread constantly
+# steps over a breakpoint are properly handled.
+
+standard_testfile
+
+set linenum [gdb_get_line_number "set break here"]
+
+proc do_test { detach_on_fork } {
+ global GDBFLAGS
+ global srcfile testfile
+ global decimal gdb_prompt
+ global linenum
+ global is_remote_target
+
+ set saved_gdbflags $GDBFLAGS
+ set GDBFLAGS [concat $GDBFLAGS " -ex \"set non-stop on\""]
+
+ if {[prepare_for_testing "failed to prepare" $testfile $srcfile {debug pthreads}] == -1} {
+ set GDBFLAGS $saved_gdbflags
+ return -1
+ }
+
+ set GDBFLAGS $saved_gdbflags
+
+ if ![runto_main] then {
+ fail "Can't run to main"
+ return 0
+ }
+
+ set is_remote_target [gdb_is_target_remote]
+
+ gdb_test_no_output "set detach-on-fork $detach_on_fork"
+
+ gdb_test "break $linenum if zero == 1" \
+ "Breakpoint .*" \
+ "set breakpoint that evals false"
+
+ set test "continue &"
+ gdb_test_multiple $test $test {
+ -re "$gdb_prompt " {
+ pass $test
+ }
+ }
+
+ set fork_count 0
+ set ok 0
+
+ set test "inferior 1 exited"
+ gdb_test_multiple "" $test {
+ -re "Inferior 1 \(\[^\r\n\]+\) exited normally" {
+ pass $test
+ set ok 1
+ }
+ -re "Inferior $decimal \(\[^\r\n\]+\) exited normally" {
+ incr fork_count
+ if {$fork_count <= 100} {
+ exp_continue
+ } else {
+ fail "$test (too many forks)"
+ }
+ }
+
+ -re "$gdb_prompt " {
+ # Several errors end up at the top level and print the
+ # prompt.
+ if {!$is_remote_target} {
+ setup_kfail "gdb/18749" "*-*-linux*"
+ }
+ fail $test
+ }
+ -re "Cannot access memory" {
+ if {!$is_remote_target} {
+ setup_kfail "gdb/18749" "*-*-linux*"
+ }
+ fail $test
+ }
+ }
+
+ if {!$ok} {
+ # No use testing further.
+ return
+ }
+
+ gdb_test "info threads" "No threads\." \
+ "no threads left"
+
+ gdb_test "info inferiors" \
+ "Num\[ \t\]+Description\[ \t\]+Executable\[ \t\]+\r\n\\* 1 \[^\r\n\]+" \
+ "only inferior 1 left"
+}
+
+foreach detach_on_fork {"on" "off"} {
+ with_test_prefix "detach-on-fork=$detach_on_fork" {
+ do_test $detach_on_fork
+ }
+
+ # The test passes with detach-on-fork off, but gdb seems to slow
+ # down quadratically as inferiors are created, and then the test
+ # takes annoyingly long to complete...
+ break
+}