From patchwork Thu Mar 27 01:51:26 2014
X-Patchwork-Submitter: Kyle McMartin
X-Patchwork-Id: 306
Mailing-List: contact gdb-patches-help@sourceware.org; run by ezmlm
Date: Wed, 26 Mar 2014 21:51:26 -0400
From: Kyle McMartin
To: gdb-patches@sourceware.org
Subject: [PATCHv2] aarch64: detect atomic sequences like other ll/sc architectures
Message-ID: <20140327015125.GE3075@redacted.bos.redhat.com>

Add support for single-stepping over atomic sequences, as other
load-locked/store-conditional architectures (alpha, powerpc, arm, etc.)
do.  Verified that decode_masked_match and decode_bcond work against the
atomic sequences used in the Linux kernel's atomic.h, and also against
gcc's libatomic.  Thanks to Richard Henderson for feedback on my initial
attempt at this patch, and to gdb-patches for the review comments, which
I hope I've addressed.
2014-03-26  Kyle McMartin

gdb:
	* aarch64-tdep.c (aarch64_software_single_step): New function.
	(aarch64_gdbarch_init): Handle single stepping of atomic sequences
	with aarch64_software_single_step.

gdb/testsuite:
	* gdb.arch/aarch64-atomic-inst.c: New file.
	* gdb.arch/aarch64-atomic-inst.exp: New file.

---
--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2509,6 +2509,83 @@ value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
 }
 
+/* Implement the "software_single_step" gdbarch method, needed to
+   single step through atomic sequences on AArch64.  */
+
+static int
+aarch64_software_single_step (struct frame_info *frame)
+{
+  struct gdbarch *gdbarch = get_frame_arch (frame);
+  struct address_space *aspace = get_frame_address_space (frame);
+  enum bfd_endian byte_order = gdbarch_byte_order (gdbarch);
+  const int insn_size = 4;
+  const int atomic_sequence_length = 16; /* Instruction sequence length.  */
+  CORE_ADDR pc = get_frame_pc (frame);
+  CORE_ADDR breaks[2] = { -1, -1 };
+  CORE_ADDR loc = pc;
+  CORE_ADDR closing_insn = 0;
+  uint32_t insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+  int index;
+  int insn_count;
+  int bc_insn_count = 0; /* Conditional branch instruction count.  */
+  int last_breakpoint = 0; /* Defaults to 0 (no breakpoints placed).  */
+
+  /* Look for a Load Exclusive instruction which begins the sequence.  */
+  if (!decode_masked_match (insn, 0x3fc00000, 0x08400000))
+    return 0;
+
+  for (insn_count = 0; insn_count < atomic_sequence_length; ++insn_count)
+    {
+      int32_t offset;
+      unsigned cond;
+
+      loc += insn_size;
+      insn = read_memory_unsigned_integer (loc, insn_size, byte_order);
+
+      /* Check if the instruction is a conditional branch.  */
+      if (decode_bcond (loc, insn, &cond, &offset))
+	{
+
+	  if (bc_insn_count >= 1)
+	    return 0;
+
+	  /* It is, so we'll try to set a breakpoint at the destination.  */
+	  breaks[1] = loc + offset;
+
+	  bc_insn_count++;
+	  last_breakpoint++;
+	}
+
+      /* Look for the Store Exclusive which closes the atomic sequence.  */
+      if (decode_masked_match (insn, 0x3fc00000, 0x08000000))
+	{
+	  closing_insn = loc;
+	  break;
+	}
+    }
+
+  /* We didn't find a closing Store Exclusive instruction, fall back.  */
+  if (!closing_insn)
+    return 0;
+
+  /* Insert breakpoint after the end of the atomic sequence.  */
+  breaks[0] = loc + insn_size;
+
+  /* Check for duplicated breakpoints, and also check that the second
+     breakpoint is not within the atomic sequence.  */
+  if (last_breakpoint
+      && (breaks[1] == breaks[0]
+	  || (breaks[1] >= pc && breaks[1] <= closing_insn)))
+    last_breakpoint = 0;
+
+  /* Insert the breakpoint at the end of the sequence, and one at the
+     destination of the conditional branch, if it exists.  */
+  for (index = 0; index <= last_breakpoint; index++)
+    insert_single_step_breakpoint (gdbarch, aspace, breaks[index]);
+
+  return 1;
+}
+
 /* Initialize the current architecture based on INFO.  If possible,
    re-use an architecture from ARCHES, which is a list of
    architectures already created during this debugging session.
@@ -2624,6 +2701,7 @@ aarch64_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
   set_gdbarch_breakpoint_from_pc (gdbarch, aarch64_breakpoint_from_pc);
   set_gdbarch_cannot_step_breakpoint (gdbarch, 1);
   set_gdbarch_have_nonsteppable_watchpoint (gdbarch, 1);
+  set_gdbarch_software_single_step (gdbarch, aarch64_software_single_step);
 
   /* Information about registers, etc.  */
   set_gdbarch_sp_regnum (gdbarch, AARCH64_SP_REGNUM);
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.c
@@ -0,0 +1,50 @@
+/* This file is part of GDB, the GNU debugger.
+
+   Copyright 2008-2014 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <stdio.h>
+
+int main()
+{
+  unsigned long tmp, cond;
+  unsigned long dword = 0;
+
+  /* Test that we can step over ldxr/stxr.  This sequence should step from
+     ldxr to the following __asm __volatile.  */
+  __asm __volatile ("1: ldxr %0,%2\n"                       \
+                    "   cmp %0,#1\n"                        \
+                    "   b.eq out\n"                         \
+                    "   add %0,%0,1\n"                      \
+                    "   stxr %w1,%0,%2\n"                   \
+                    "   cbnz %w1,1b"                        \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
+                    : : "memory");
+
+  /* This sequence should take the conditional branch and step from ldxr
+     to the return dword line.  */
+  __asm __volatile ("1: ldxr %0,%2\n"                       \
+                    "   cmp %0,#1\n"                        \
+                    "   b.eq out\n"                         \
+                    "   add %0,%0,1\n"                      \
+                    "   stxr %w1,%0,%2\n"                   \
+                    "   cbnz %w1,1b\n"                      \
+                    : "=&r" (tmp), "=&r" (cond), "+Q" (dword) \
+                    : : "memory");
+
+  dword = -1;
+  __asm __volatile ("out:\n");
+  return dword;
+}
--- /dev/null
+++ b/gdb/testsuite/gdb.arch/aarch64-atomic-inst.exp
@@ -0,0 +1,57 @@
+# Copyright 2008-2014 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+#
+# This file is part of the gdb testsuite.
+
+# Test single stepping through atomic sequences beginning with
+# a ldxr instruction and ending with a stxr instruction.
+
+if {![istarget "aarch64*"]} {
+    verbose "Skipping testing of aarch64 single stepping over atomic sequences."
+    return
+}
+
+set testfile "aarch64-atomic-inst"
+set srcfile ${testfile}.c
+set binfile ${objdir}/${subdir}/${testfile}
+set compile_flags {debug quiet}
+
+if { [gdb_compile "${srcdir}/${subdir}/${srcfile}" "${binfile}" executable $compile_flags] != "" } {
+    unsupported "Testcase compile failed."
+    return -1
+}
+
+gdb_exit
+gdb_start
+gdb_reinitialize_dir $srcdir/$subdir
+gdb_load ${binfile}
+
+if ![runto_main] then {
+    perror "Couldn't run to breakpoint"
+    continue
+}
+
+set bp1 [gdb_get_line_number "ldxr"]
+gdb_breakpoint "$bp1" "Breakpoint $decimal at $hex" \
+    "Set the breakpoint at the start of the sequence"
+
+gdb_test continue "Continuing.*Breakpoint $decimal.*" \
+    "Continue until breakpoint"
+
+gdb_test next ".*__asm __volatile.*" \
+    "Step through the ldxr/stxr sequence"
+
+gdb_test next ".*return dword.*" \
+    "Step through the sequence via the conditional branch"