[PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures

Message ID 20140327015125.GE3075@redacted.bos.redhat.com
State Superseded

Commit Message

Kyle McMartin March 27, 2014, 1:51 a.m. UTC
  Add support for single-stepping over atomic sequences, as other
load-locked/store-conditional architectures (alpha, powerpc, arm, etc.)
do. Verified that decode_masked_match and decode_bcond work against the
atomic sequences used in the Linux kernel's atomic.h, and also against
gcc's libatomic. Thanks to Richard Henderson for feedback on my initial
attempt at this patch, and for the feedback from gdb-patches, which I
hope I've addressed...

2014-03-26  Kyle McMartin  <kyle@redhat.com>

gdb:
	* aarch64-tdep.c (aarch64_deal_with_atomic_sequence): New function.
	(aarch64_gdbarch_init): Handle single stepping of atomic sequences
	with aarch64_deal_with_atomic_sequence.

gdb/testsuite:
	* gdb.arch/aarch64-atomic-inst.c: New file.
	* gdb.arch/aarch64-atomic-inst.exp: New file.
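
Background on why this is needed: a debug trap taken inside a
load-exclusive/store-exclusive sequence (which is exactly what hardware
single-step does) takes the CPU through an exception, clearing the
exclusive monitor, so the closing store-exclusive always fails and the
retry loop restarts forever under naive single-stepping. The fix is to
detect the whole sequence and place the step breakpoint past it. Below
is a minimal sketch of the loop shape being detected (illustrative
only, modelled on the testcase further down; not part of the patch):

#include <stdint.h>

/* A typical AArch64 ll/sc retry loop.  A debug exception between the
   ldxr and the stxr clears the exclusive reservation, making the stxr
   fail, so GDB must step over the whole sequence at once.  */

static inline uint64_t
atomic_increment (uint64_t *mem)
{
  uint64_t val;
  uint32_t fail;

  __asm__ __volatile__ ("1:   ldxr  %0, %2\n"      /* load-exclusive */
                        "     add   %0, %0, #1\n"  /* modify */
                        "     stxr  %w1, %0, %2\n" /* store-exclusive */
                        "     cbnz  %w1, 1b"       /* retry if it failed */
                        : "=&r" (val), "=&r" (fail), "+Q" (*mem)
                        : : "memory");
  return val;  /* The incremented value.  */
}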
  

Comments

Joel Brobecker March 27, 2014, 1:07 p.m. UTC | #1
Hi Kyle,

> 2014-03-26  Kyle McMartin  <kyle@redhat.com>
> 
> gdb:
> 	* aarch64-tdep.c (aarch64_deal_with_atomic_sequence): New function.
> 	(aarch64_gdbarch_init): Handle single stepping of atomic sequences
> 	with aarch64_deal_with_atomic_sequence.
>
> 
> gdb/testsuite:
> 	* gdb.arch/aarch64-atomic-inst.c: New file.
> 	* gdb.arch/aarch64-atomic-inst.exp: New file.

A few comments on the testcase. I have no further comments on the code
itself.

> +#include <stdio.h>

Do you really need stdio.h here? You do not seem to be making
any function calls in your function, so I do not see why it would
be needed. On the other hand, having a dependency on stdio.h means
that the testcase will not compile on many targets (e.g. bare metal).

> +
> +int main()

Can you use "(void)" instead of "()"?

> +{
> +  unsigned long tmp, cond;
> +  unsigned long dword = 0;
> +
> +  /* Test that we can step over ldxr/stxr. This sequence should step from
> +     ldxr to the following __asm __volatile.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b"                              \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  /* This sequence should take the conditional branch and step from ldxr
> +     to the return dword line.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b\n"                            \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  dword = -1;
> +__asm __volatile ("out:\n");
> +  return dword;
> +}
> --- /dev/null
> +++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.exp
> @@ -0,0 +1,58 @@
> +# Copyright 2008-2014 Free Software Foundation, Inc.
> +#
> +# This program is free software; you can redistribute it and/or modify
> +# it under the terms of the GNU General Public License as published by
> +# the Free Software Foundation; either version 3 of the License, or
> +# (at your option) any later version.
> +#
> +# This program is distributed in the hope that it will be useful,
> +# but WITHOUT ANY WARRANTY; without even the implied warranty of
> +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> +# GNU General Public License for more details.
> +#
> +# You should have received a copy of the GNU General Public License
> +# along with this program; if not, write to the Free Software
> +# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
> +#
> +# This file is part of the gdb testsuite.
> +
> +# Test single stepping through atomic sequences beginning with
> +# a ldxr instruction and ending with a stxr instruction.
> +
> +if {![istarget "aarch64*"]} {
> +    verbose "Skipping testing of aarch64 single stepping over atomic sequences."
> +    return
> +}
> +
> +set testfile "aarch64-atomic-inst"
> +set srcfile ${testfile}.c
> +set binfile ${objdir}/${subdir}/${testfile}
> +set compile_flags {debug quiet}
> +
> +if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable $compile_flags] != "" } {
> +    unsupported "Testcase compile failed."
> +    return -1
> +}
> +gdb_exit
> +gdb_start
> +gdb_reinitialize_dir $srcdir/$subdir
> +gdb_load ${binfile}

Can you use standard_testfile and prepare_for_testing? See our testcase
cookbook page at:
https://sourceware.org/gdb/wiki/GDBTestcaseCookbook

It should replace all of the above after the "if istarget"...

> +if ![runto_main] then {
> +    perror "Couldn't run to breakpoint"
> +    continue

Also according to the cookbook, use:

    untested "could not run to main"
    return -1

IIRC, with Tcl, there isn't a huge distinction in this case between
continue and return -1, but we might as well follow the cookbook.

> +gdb_test continue "Continuing.*Breakpoint $decimal.*" \
> +  "Continue until breakpoint"
> +
> +gdb_test next ".*__asm __volatile.*" \
> +  "Step through the ldxr/stxr sequence"
> +
> +gdb_test next ".*return dword.*" \
> +  "Stepped through sequence through conditional branch"

Can you put the "continue"/"next" inside double quotes?
It looks like it's all the same to Tcl, but I think it'll make
things more consistent and also allow editors to syntax-highlight
those as strings...

Thanks,
  
Marcus Shawcroft March 27, 2014, 2:07 p.m. UTC | #2
Hi,


On 27 March 2014 01:51, Kyle McMartin <kmcmarti@redhat.com> wrote:

> +  /* Look for a Load Exclusive instruction which begins the sequence.  */
> +  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
> +    return 0;

Are you sure these masks and patterns are accurate? It looks to me as
though this excludes many of the load-exclusive instructions and
includes part of the unallocated encoding space. There are several
different encodings to match here, covering ld[a]xr{b,h,} and
ld[a]xp.  The masks and patterns will be something like:

0xbfff7c00 0x085f7c00
0xbfff7c00 0x885f7c00
0xbfff0000 0x887f0000

> +      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))

This also looks wrong.

> +  /* Test that we can step over ldxr/stxr. This sequence should step from
> +     ldxr to the following __asm __volatile.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b"                              \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  /* This sequence should take the conditional branch and step from ldxr
> +     to the return dword line.  */
> +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> +                    "       cmp     %0,#1\n"                             \
> +                    "       b.eq    out\n"                               \
> +                    "       add     %0,%0,1\n"                           \
> +                    "       stxr    %w1,%0,%2\n"                         \
> +                    "       cbnz    %w1,1b\n"                            \
> +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> +                    : : "memory");
> +
> +  dword = -1;
> +__asm __volatile ("out:\n");
> +  return dword;
> +}

How about testing at least one instruction from each group of load
store exclusives?

Cheers
/Marcus
  
Kyle McMartin March 27, 2014, 3:40 p.m. UTC | #3
On Thu, Mar 27, 2014 at 02:07:35PM +0000, Marcus Shawcroft wrote:
> Are you sure these masks and patterns are accurate? It looks to me as
> though this excludes many of the load-exclusive instructions and
> includes part of the unallocated encoding space. There are several
> different encodings to match here, covering ld[a]xr{b,h,} and
> ld[a]xp.  The masks and patterns will be something like:
> 
> 0xbfff7c00 0x085f7c00
> 0xbfff7c00 0x885f7c00
> 0xbfff0000 0x887f0000
> 
> > +      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
> 
> This also looks wrong.
> 

Eh... I tested all 24 possible ldxr/stxr opcodes...
https://github.com/jkkm/aarch64-ldxr-stxr-match/blob/master/example.txt
Maybe I'm missing something, but I think it's alright.
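
For anyone checking along: decode_masked_match in aarch64-tdep.c simply
tests (insn & mask) == pattern, so the pairs above can be exercised in
a small standalone program. The opcodes below are hand-assembled from
the A64 encoding tables, so treat this as an illustrative sketch rather
than a reference:

#include <stdint.h>
#include <stdio.h>

/* Same semantics as decode_masked_match in gdb/aarch64-tdep.c.  */
static int
masked_match (uint32_t insn, uint32_t mask, uint32_t pattern)
{
  return (insn & mask) == pattern;
}

int
main (void)
{
  /* Hand-assembled A64 opcodes; illustrative only.  */
  uint32_t ldxr  = 0xc85f7c20;  /* ldxr  x0, [x1] */
  uint32_t ldaxr = 0xc85ffc20;  /* ldaxr x0, [x1] */
  uint32_t stxr  = 0xc8027c20;  /* stxr  w2, x0, [x1] */

  /* Load-exclusive check from the patch: mask 0x3fc00000,
     pattern 0x08400000.  */
  printf ("ldxr:  %d\n", masked_match (ldxr,  0x3fc00000, 0x08400000));
  printf ("ldaxr: %d\n", masked_match (ldaxr, 0x3fc00000, 0x08400000));

  /* Store-exclusive check from the patch: mask 0x3fc00000,
     pattern 0x08000000.  */
  printf ("stxr:  %d\n", masked_match (stxr,  0x3fc00000, 0x08000000));

  return 0;  /* All three lines print 1.  */
}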

> > +  /* Test that we can step over ldxr/stxr. This sequence should step from
> > +     ldxr to the following __asm __volatile.  */
> > +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> > +                    "       cmp     %0,#1\n"                             \
> > +                    "       b.eq    out\n"                               \
> > +                    "       add     %0,%0,1\n"                           \
> > +                    "       stxr    %w1,%0,%2\n"                         \
> > +                    "       cbnz    %w1,1b"                              \
> > +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> > +                    : : "memory");
> > +
> > +  /* This sequence should take the conditional branch and step from ldxr
> > +     to the return dword line.  */
> > +  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
> > +                    "       cmp     %0,#1\n"                             \
> > +                    "       b.eq    out\n"                               \
> > +                    "       add     %0,%0,1\n"                           \
> > +                    "       stxr    %w1,%0,%2\n"                         \
> > +                    "       cbnz    %w1,1b\n"                            \
> > +                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
> > +                    : : "memory");
> > +
> > +  dword = -1;
> > +__asm __volatile ("out:\n");
> > +  return dword;
> > +}
> 
> How about testing at least one instruction from each group of load
> store exclusives?
> 

I'm just following what PPC64 did... I think the only thing they really
want to check is that GDB correctly steps over the sequence (the original
case didn't bother testing the conditional-branch path). I could add
further cases, but it seems a bit pointless... though if you're going to
block committing on that basis, I guess I can cook it up.
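
For what it's worth, an extra case along those lines could reuse the
same loop with the byte-sized acquire/release exclusives; a minimal,
untested sketch (the names here are made up for illustration, not taken
from the posted patch):

/* Same retry-loop shape, but using ldaxrb/stlxrb so the byte-sized
   acquire/release exclusive group gets coverage too.  Untested.  */
static unsigned char byte_val;

static void
byte_exclusive_case (void)
{
  unsigned int tmp, cond;

  __asm __volatile ("1:     ldaxrb  %w0,%2\n"
                    "       add     %w0,%w0,#1\n"
                    "       stlxrb  %w1,%w0,%2\n"
                    "       cbnz    %w1,1b"
                    : "=&r" (tmp), "=&r" (cond), "+Q" (byte_val)
                    : : "memory");
}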

regards, Kyle

> Cheers
> /Marcus
  
Kyle McMartin March 27, 2014, 3:45 p.m. UTC | #4
On Thu, Mar 27, 2014 at 06:07:14AM -0700, Joel Brobecker wrote:
> A few comments on the testcase. I have no further comments on the code
> itself.
> 
> > +#include <stdio.h>
> 
> Do you really need stdio.h here? You do not seem to be making
> any function calls in your function, so I do not see why it would
> be needed. On the other hand, having a dependency on stdio.h means
> that the testcase will not compile on many targets (e.g. bare metal).
> 

Not that I'm aware of; I re-added it when I added some printfs for
debugging while writing the testcase, and forgot to remove it again.
(The PPC64 one has it as well, likely for the same reason.)

> > +
> > +int main()
> 
> Can you use "(void)" instead of "()"?
> 

Absolutely.

> > +if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable $compile_flags] != "" } {
> > +    unsupported "Testcase compile failed."
> > +    return -1
> > +}
> > +gdb_exit
> > +gdb_start
> > +gdb_reinitialize_dir $srcdir/$subdir
> > +gdb_load ${binfile}
> 
> Can you use standard_testfile and prepare_for_testing? See our testcase
> cookbook page at:
> https://sourceware.org/gdb/wiki/GDBTestcaseCookbook
> 
> It should replace all of the above after the "if istarget"...
> 

OK, testing this out now.

> > +if ![runto_main] then {
> > +    perror "Couldn't run to breakpoint"
> > +    continue
> 
> Also according to the cookbook, use:
> 
>     untested "could not run to main"
>     return -1
> 
> IIRC, with Tcl, there isn't a huge distinction in this case between
> continue and return -1, but we might as well follow the cookbook.
> 
> > +gdb_test continue "Continuing.*Breakpoint $decimal.*" \
> > +  "Continue until breakpoint"
> > +
> > +gdb_test next ".*__asm __volatile.*" \
> > +  "Step through the ldxr/stxr sequence"
> > +
> > +gdb_test next ".*return dword.*" \
> > +  "Stepped through sequence through conditional branch"
> 
> Can you put the "continue"/"next" inside double quotes?
> It looks like it's all the same to Tcl, but I think it'll make
> things more consistent and also allow editors to syntax-highlight
> those as strings...
> 

Will do.

regards, Kyle

> Thanks,
> -- 
> Joel
  
Marcus Shawcroft March 27, 2014, 4:12 p.m. UTC | #5
On 27 March 2014 15:40, Kyle McMartin <kmcmarti@redhat.com> wrote:
> On Thu, Mar 27, 2014 at 02:07:35PM +0000, Marcus Shawcroft wrote:
>> Are you sure these masks and patterns are accurate? It looks to me as
>> though this excludes many of the load-exclusive instructions and
>> includes part of the unallocated encoding space. There are several
>> different encodings to match here, covering ld[a]xr{b,h,} and
>> ld[a]xp.  The masks and patterns will be something like:
>>
>> 0xbfff7c00 0x085f7c00
>> 0xbfff7c00 0x885f7c00
>> 0xbfff0000 0x887f0000
>>
>> > +      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
>>
>> This also looks wrong.
>>
>
> Eh... I tested all 24 possible ldxr/stxr opcodes...

Kyle
You are right, sorry, I can't read the encoding tables.  This mask and
pattern is indeed sensible.

Cheers
/Marcus
  
Kyle McMartin March 27, 2014, 4:37 p.m. UTC | #6
On Thu, Mar 27, 2014 at 04:12:55PM +0000, Marcus Shawcroft wrote:
> >> This also looks wrong.
> >>
> >
> > Eh... I tested all 24 possible ldxr/stxr opcodes...
> 
> Kyle
> You are right, sorry, I can't read the encoding tables.  This mask and
> pattern is indeed sensible.
> 

No worries at all, I'm glad you made me double check. :)

--Kyle
  

Patch

--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2509,6 +2509,83 @@  value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
 }
 
 
+/* Implement the "software_single_step" gdbarch method, needed to
+   single step through atomic sequences on AArch64.  */
+
+static int
+aarch64_software_single_step (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  const int insn_size = 4;
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = { -1, -1 };
+  CORE_ADDR loc = pc;
+  CORE_ADDR closing_insn = 0;
+  uint32_t insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+  int index;
+  int insn_count;
+  int bc_insn_count = 0; /* Conditional branch instruction count.  */
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+
+  /* Look for a Load Exclusive instruction which begins the sequence.  */
+  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
+    return 0;
+
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      int32_t offset;
+      unsigned cond;
+
+      loc += insn_size;
+      insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+
+      /* Check if the instruction is a conditional branch.  */
+      if (decode_bcond (loc, insn, &cond, &offset))
+	{
+
+	  if (bc_insn_count >= 1)
+	    return 0;
+
+	  /* It is, so we'll try to set a breakpoint at the destination.  */
+	  breaks[1] = loc + offset;
+
+	  bc_insn_count++;
+	  last_breakpoint++;
+	}
+
+      /* Look for the Store Exclusive which closes the atomic sequence.  */
+      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
+	{
+          closing_insn = loc;
+	  break;
+	}
+    }
+
+  /* We didn't find a closing Store Exclusive instruction, fall back.  */
+  if (!closing_insn)
+    return 0;
+
+  /* Insert breakpoint after the end of the atomic sequence.  */
+  breaks[0] = loc + insn_size;
+
+  /* Check for duplicated breakpoints, and also check that the second
+     breakpoint is not within the atomic sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] <= closing_insn)))
+    last_breakpoint = 0;
+
+  /* Insert the breakpoint at the end of the sequence, and one at the
+     destination of the conditional branch, if it exists.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    insert_single_step_breakpoint (gdbarch, aspace, breaks[index]);
+
+  return 1;
+}
+
 /* Initialize the current architecture based on INFO.  If possible,
    re-use an architecture from ARCHES, which is a list of
    architectures already created during this debugging session.
@@ -2624,6 +2701,7 @@  aarch64_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
   set_gdbarch_breakpoint_from_pc (gdbarch, aarch64_breakpoint_from_pc);
   set_gdbarch_cannot_step_breakpoint (gdbarch, 1);
   set_gdbarch_have_nonsteppable_watchpoint (gdbarch, 1);
+  set_gdbarch_software_single_step (gdbarch, aarch64_software_single_step);
 
   /* Information about registers, etc.  */
   set_gdbarch_sp_regnum (gdbarch, AARCH64_SP_REGNUM);
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.c
@@ -0,0 +1,50 @@ 
+/* This file is part of GDB, the GNU debugger.
+
+   Copyright 2008-2014 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <stdio.h>
+
+int main()
+{
+  unsigned long tmp, cond;
+  unsigned long dword = 0;
+
+  /* Test that we can step over ldxr/stxr. This sequence should step from
+     ldxr to the following __asm __volatile.  */
+  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
+                    "       cmp     %0,#1\n"                             \
+                    "       b.eq    out\n"                               \
+                    "       add     %0,%0,1\n"                           \
+                    "       stxr    %w1,%0,%2\n"                         \
+                    "       cbnz    %w1,1b"                              \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
+                    : : "memory");
+
+  /* This sequence should take the conditional branch and step from ldxr
+     to the return dword line.  */
+  __asm __volatile ("1:     ldxr    %0,%2\n"                             \
+                    "       cmp     %0,#1\n"                             \
+                    "       b.eq    out\n"                               \
+                    "       add     %0,%0,1\n"                           \
+                    "       stxr    %w1,%0,%2\n"                         \
+                    "       cbnz    %w1,1b\n"                            \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword)            \
+                    : : "memory");
+
+  dword = -1;
+__asm __volatile ("out:\n");
+  return dword;
+}
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.exp
@@ -0,0 +1,58 @@ 
+# Copyright 2008-2014 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+# This file is part of the gdb testsuite.
+
+# Test single stepping through atomic sequences beginning with
+# a ldxr instruction and ending with a stxr instruction.
+
+if {![istarget "aarch64*"]} {
+    verbose "Skipping testing of aarch64 single stepping over atomic sequences."
+    return
+}
+
+set testfile "aarch64-atomic-inst"
+set srcfile ${testfile}.c
+set binfile ${objdir}/${subdir}/${testfile}
+set compile_flags {debug quiet}
+
+if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable $compile_flags] != "" } {
+    unsupported "Testcase compile failed."
+    return -1
+}
+
+gdb_exit
+gdb_start
+gdb_reinitialize_dir $srcdir/$subdir
+gdb_load ${binfile}
+
+if ![runto_main] then {
+    perror "Couldn't run to breakpoint"
+    continue
+}
+
+set bp1 [gdb_get_line_number "ldxr"]
+gdb_breakpoint "$bp1" "Breakpoint $decimal at $hex" \
+  "Set the breakpoint at the start of the sequence"
+
+gdb_test continue "Continuing.*Breakpoint $decimal.*" \
+  "Continue until breakpoint"
+
+gdb_test next ".*__asm __volatile.*" \
+  "Step through the ldxr/stxr sequence"
+
+gdb_test next ".*return dword.*" \
+  "Stepped through sequence through conditional branch"