From patchwork Sun Jan 27 22:51:58 2019
X-Patchwork-Submitter: Collin May
X-Patchwork-Id: 31225
From: Collin May
To: gdb-patches@sourceware.org
Cc: Collin May
Subject: [PATCH, RFC] AArch64: Implement software single step
Date: Sun, 27 Jan 2019 14:51:58 -0800
Message-Id: <20190127225157.16422-1-collin@collinswebsite.com>

This moves the functionality that was previously in the
aarch64_software_single_step function to a new
aarch64_deal_with_atomic_sequence function.  If an atomic sequence is
not detected, the new aarch64_software_single_step attempts to predict
the next location of the program counter by detecting branch
instructions and predicting their outcomes, much as arm_get_next_pcs
does.

Although AArch64 platforms typically support hardware single step, some
kernels do not.  This functionality is useful when interacting with
remote targets written to run under such kernels, and it avoids sending
them 's' operations in vCont when they do not advertise support for the
's' operation.

I've noticed that the arm_software_single_step functionality is largely
delegated to an "arm_get_next_pcs" system that seems to be shared with
gdbserver.  Since, as far as I can tell, gdbserver on AArch64 is only
intended to run under kernels that support hardware single step, I don't
think this code needs to be shared with gdbserver.

Finally, one might notice that I haven't written tests for this
functionality.  I'm not familiar with gdb's testsuite and would
appreciate feedback on how to go about writing them.  I have manually
tested that this works correctly on a platform that does not support
hardware single step.
---
 gdb/aarch64-tdep.c | 133 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 130 insertions(+), 3 deletions(-)

diff --git a/gdb/aarch64-tdep.c b/gdb/aarch64-tdep.c
index bc928e14e9..1b50ec5fba 100644
--- a/gdb/aarch64-tdep.c
+++ b/gdb/aarch64-tdep.c
@@ -2489,11 +2489,12 @@ value_of_aarch64_user_reg (struct frame_info *frame, const void *baton)
 }
 
-/* Implement the "software_single_step" gdbarch method, needed to
-   single step through atomic sequences on AArch64.  */
+/* Check for an atomic sequence of instructions.  If such a sequence
+   is found, attempt to step through it; the address of the end of
+   the sequence is added to the next_pcs list.  */
 
 static std::vector<CORE_ADDR>
-aarch64_software_single_step (struct regcache *regcache)
+aarch64_deal_with_atomic_sequence (struct regcache *regcache)
 {
   struct gdbarch *gdbarch = regcache->arch ();
   enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
@@ -2573,6 +2574,132 @@ aarch64_software_single_step (struct regcache *regcache)
   return next_pcs;
 }
 
+/* Return non-zero if the 4-bit condition code COND holds for the
+   PSTATE flags held in CPSR.  */
+
+static int
+condition_true (unsigned long cond, unsigned long cpsr)
+{
+  int result = 0;
+
+  int pstate_n = bit (cpsr, 31);
+  int pstate_z = bit (cpsr, 30);
+  int pstate_c = bit (cpsr, 29);
+  int pstate_v = bit (cpsr, 28);
+
+  /* cond<3:1> selects the base condition; cond<0> inverts it.  */
+  switch ((cond >> 1) & 7)
+    {
+    case 0: /* EQ / NE */
+      result = pstate_z;
+      break;
+    case 1: /* CS / CC */
+      result = pstate_c;
+      break;
+    case 2: /* MI / PL */
+      result = pstate_n;
+      break;
+    case 3: /* VS / VC */
+      result = pstate_v;
+      break;
+    case 4: /* HI / LS */
+      result = (pstate_c == 1) && (pstate_z == 0);
+      break;
+    case 5: /* GE / LT */
+      result = (pstate_n == pstate_v);
+      break;
+    case 6: /* GT / LE */
+      result = (pstate_n == pstate_v) && (pstate_z == 0);
+      break;
+    case 7: /* AL */
+      result = 1;
+      break;
+    }
+
+  /* cond == 0xf ("NV") also behaves as "always".  */
+  if ((cond & 1) == 1 && cond != 0xf)
+    result = !result;
+
+  return result;
+}
+
+/* Implement the "software_single_step" gdbarch method.  */
+
+static std::vector<CORE_ADDR>
+aarch64_software_single_step (struct regcache *regcache)
+{
+  struct gdbarch *gdbarch = regcache->arch ();
+  enum bfd_endian byte_order_for_code = gdbarch_byte_order_for_code (gdbarch);
+  const int insn_size = 4;
+
+  CORE_ADDR pc = regcache_read_pc (regcache);
+  unsigned long status = regcache_raw_get_unsigned (regcache,
+                                                    AARCH64_CPSR_REGNUM);
+  unsigned long pc_val = (unsigned long) pc;
+  CORE_ADDR branch_addr = (CORE_ADDR) (pc_val + insn_size); /* Default case.  */
+
+  uint32_t insn = read_memory_unsigned_integer (pc, insn_size,
+                                                byte_order_for_code);
+
+  std::vector<CORE_ADDR> next_pcs;
+
+  next_pcs = aarch64_deal_with_atomic_sequence (regcache);
+  if (next_pcs.empty ())
+    {
+      aarch64_inst inst;
+      if (aarch64_decode_insn (insn, &inst, 1, NULL) != 0)
+        return {};
+
+      /* According to ISA_v82A_A64_xml_00bet3.1, in AArch64 mode the
+         only things that can touch the PC register are:
+         - the instructions decoded below
+         - AArch64.TakeReset
+         - AArch64.TakeException
+         - AArch64.ExceptionReturn (eret)
+         - ExitDebugState (drps)  */
+
+      if (inst.opcode->iclass == condbranch)
+        {
+          /* b.cond */
+          if (condition_true (inst.cond->value, status))
+            branch_addr = pc + inst.operands[0].imm.value;
+        }
+      else if (inst.opcode->iclass == branch_imm)
+        {
+          /* b, bl */
+          branch_addr = pc + inst.operands[0].imm.value;
+        }
+      else if (inst.opcode->iclass == branch_reg)
+        {
+          /* br, blr, ret */
+          branch_addr = regcache_raw_get_unsigned (regcache,
+                                                   inst.operands[0].reg.regno);
+        }
+      else if (inst.opcode->iclass == compbranch)
+        {
+          /* cbz, cbnz */
+          ULONGEST reg = regcache_raw_get_unsigned (regcache,
+                                                    inst.operands[0].reg.regno);
+          int op = bit (insn, 24);  /* cbz vs cbnz.  */
+
+          if (inst.operands[0].qualifier == AARCH64_OPND_QLF_W)  /* sf */
+            reg &= 0xffffffff;
+
+          if ((reg == 0) == (op == 0))
+            branch_addr = pc + inst.operands[1].imm.value;
+        }
+      else if (inst.opcode->iclass == testbranch)
+        {
+          /* tbz, tbnz */
+          ULONGEST reg = regcache_raw_get_unsigned (regcache,
+                                                    inst.operands[0].reg.regno);
+          int bit_val = bit (insn, 24);  /* tbz vs tbnz.  */
+
+          if (bit (reg, inst.operands[1].imm.value) == bit_val)
+            branch_addr = pc + inst.operands[2].imm.value;
+        }
+
+      next_pcs.push_back (branch_addr);
+    }
+
+  return next_pcs;
+}
+
 struct aarch64_displaced_step_closure : public displaced_step_closure
 {
   /* It is true when condition instruction, such as B.CON, TBZ, etc,