From patchwork Fri Feb  8 15:21:19 2019
X-Patchwork-Submitter: Jan Beulich
X-Patchwork-Id: 31358
Message-Id: <5C5D9E6F020000780021521F@prv1-mh.provo.novell.com>
Date: Fri, 08 Feb 2019 08:21:19 -0700
From: "Jan Beulich"
To: "GDB"
Subject: [PATCH] x86-64: fix unused register determination for displaced stepping

For one, just like %rdx is an implied source operand of DIV/IDIV, %rcx
(as %cl, the shift count) is one for shifts and rotates.  Hence this
register, too, is better excluded altogether.

And then VEX- and XOP-encoded GPR insns often have a 3rd operand,
encoded in the VEX/XOP prefix's vvvv field, which needs to be accounted
for as well.

Then again, %rbp was mistakenly recorded as used in the specific case
of %rip-relative addressing we care about here: in 64-bit mode,
ModRM.mod == 0 with ModRM.rm == 5 encodes a %rip-relative operand, not
a %rbp base.  I'd like to note though that there's a certain risk
associated with using %rbp as the replacement base register: possible
addressing-related faults would then no longer surface as #GP(0) but as
#SS(0), since %rbp-based accesses default to the stack segment.  It
doesn't look to have been the intention to avoid use of %rbp here,
though.

As a side note, amd64_get_unused_input_int_reg() does more than its
limited purpose requires: it only ever gets called for %rip-relative
memory operands, so
- there's always a ModR/M byte,
- ModR/M.mod is always going to be 0,
- ModR/M.rm is always going to be 5.
It might be worthwhile to remove the dead code (perhaps replacing it
with assertions), at which point the comment being changed here could
be adjusted in a different way, and it would become recognizable again
that we indeed can't run out of available (unused) registers.
---
This takes "x86-64: fix displaced stepping for VEX, XOP, and EVEX
encoded insns" as a prerequisite.  As with that earlier fix, time
constraints are the reason this isn't accompanied by a testsuite
extension.

gdb/
2019-02-08  Jan Beulich

	* amd64-tdep.c (struct amd64_insn): Add vex_gpr field.
	(amd64_get_unused_input_int_reg): Also exclude RCX.  Don't
	record RBP as used for %rip-relative operands.  Account for the
	VEX/XOP-encoded register operand.
	(amd64_get_insn_details): Initialize vex_gpr.  Record the
	VEX/XOP-encoded register operand of GPR insns.
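One more note, illustrative only (the helper name and the sample
encoding are mine, not part of the patch): the vvvv field lives
bit-inverted in bits 6:3 of the last VEX/XOP prefix byte, which is what
the (~insn[2] >> 3) & 0xf expression in the diff below extracts.  A
minimal standalone sketch:

#include <stdio.h>

/* Extract the register number from the vvvv field of a 3-byte VEX
   prefix; INSN points at the leading 0xc4 byte.  The field sits
   bit-inverted in bits 6:3 of the third prefix byte (W|vvvv|L|pp).
   Illustrative helper, not gdb code.  */
static int
vex3_vvvv (const unsigned char *insn)
{
  return (~insn[2] >> 3) & 0xf;
}

int
main (void)
{
  /* "andn %rax, %rbx, %rcx" encodes as c4 e2 e0 f2 c8: prefix byte 2
     is 0xe0, whose vvvv field holds 0b1100; inverted that yields
     register 3, i.e. %rbx.  */
  static const unsigned char andn[] = { 0xc4, 0xe2, 0xe0, 0xf2, 0xc8 };
  printf ("vvvv = %d\n", vex3_vvvv (andn));  /* prints "vvvv = 3" */
  return 0;
}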
--- a/gdb/amd64-tdep.c
+++ b/gdb/amd64-tdep.c
@@ -1050,6 +1050,8 @@ struct amd64_insn
   int opcode_offset;
   /* The offset to the modrm byte or -1 if not present.  */
   int modrm_offset;
+  /* The VEX.vvvv encoded GPR or -1 if not present.  */
+  int vex_gpr;
 
   /* The raw instruction.  */
   gdb_byte *raw_insn;
@@ -1209,7 +1211,7 @@ amd64_get_unused_input_int_reg (const st
   /* 1 bit for each reg */
   int used_regs_mask = 0;
 
-  /* There can be at most 3 int regs used as inputs in an insn, and we have
+  /* There can be at most 4 int regs used as inputs in an insn, and we have
      7 to choose from (RAX ... RDI, sans RSP).
      This allows us to take a conservative approach and keep things simple.
      E.g. By avoiding RAX, we don't have to specifically watch for opcodes
@@ -1217,6 +1219,8 @@ amd64_get_unused_input_int_reg (const st
 
   /* Avoid RAX.  */
   used_regs_mask |= 1 << EAX_REG_NUM;
+  /* Similarily avoid RCX, implicit operand in shifts/rotates.  */
+  used_regs_mask |= 1 << ECX_REG_NUM;
   /* Similarily avoid RDX, implicit operand in divides.  */
   used_regs_mask |= 1 << EDX_REG_NUM;
   /* Avoid RSP.  */
@@ -1246,12 +1250,15 @@ amd64_get_unused_input_int_reg (const st
 	  used_regs_mask |= 1 << base;
 	  used_regs_mask |= 1 << idx;
 	}
-      else
+      else if (mod != 0 || rm != 5)
 	{
 	  used_regs_mask |= 1 << rm;
 	}
     }
 
+  if (details->vex_gpr >= 0 && details->vex_gpr < 8)
+    used_regs_mask |= 1 << details->vex_gpr;
+
   gdb_assert (used_regs_mask < 256);
   gdb_assert (used_regs_mask != 255);
 
@@ -1284,6 +1291,7 @@ amd64_get_insn_details (gdb_byte *insn,
   details->enc_prefix_offset = -1;
   details->opcode_offset = -1;
   details->modrm_offset = -1;
+  details->vex_gpr = -1;
 
   /* Skip legacy instruction prefixes.  */
   insn = amd64_skip_prefixes (insn);
@@ -1305,11 +1313,17 @@ amd64_get_insn_details (gdb_byte *insn,
     {
       details->enc_prefix_offset = insn - start;
       need_modrm = (insn[1] & 0x1f) == 1 ? -1 : 1;
+      /* Record the 3rd (register) operand for VEX-encoded GPR insns.  */
+      if ((insn[1] & 0x1f) > 1 && insn[3] >= 0xf0)
+	details->vex_gpr = (~insn[2] >> 3) & 0xf;
       insn += 3;
     }
   else if (xop_prefix_p (insn))
     {
       details->enc_prefix_offset = insn - start;
+      /* Record the 3rd (register) operand for XOP-encoded GPR insns.  */
+      if ((insn[1] & 0x1f) > 8 && insn[3] < 0x20)
+	details->vex_gpr = (~insn[2] >> 3) & 0xf;
       insn += 3;
       need_modrm = 1;
     }
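For completeness, here's a rough standalone approximation of why the
adjusted mask logic always leaves a scratch register (again
illustrative only: the function name is made up, and the register
numbering RAX=0 ... RDI=7 merely follows amd64-tdep.c's convention;
this mirrors, rather than copies, amd64_get_unused_input_int_reg):

#include <assert.h>
#include <stdio.h>

enum { RAX, RCX, RDX, RBX, RSP, RBP, RSI, RDI };

/* Approximates the selection approach: RAX, RCX, RDX and RSP are
   excluded up front; for the %rip-relative operands this is used
   with, at most the VEX/XOP vvvv register gets marked in USED_MASK
   on top of that, so a free register always remains.  */
static int
pick_unused_reg (int used_mask)
{
  used_mask |= (1 << RAX) | (1 << RCX) | (1 << RDX) | (1 << RSP);
  assert (used_mask != 0xff);
  for (int reg = 0; reg < 8; reg++)
    if (!(used_mask & (1 << reg)))
      return reg;
  return -1;  /* not reachable, see the assertion above */
}

int
main (void)
{
  /* E.g. a VEX-encoded insn whose vvvv operand is %rsi.  */
  printf ("scratch = %d\n", pick_unused_reg (1 << RSI));  /* 3, %rbx */
  return 0;
}

Note how, with ModRM.mod == 0 / ModRM.rm == 5 no longer marking %rbp,
%rbp becomes a selectable candidate again, which is what the
#SS(0)-vs-#GP(0) remark in the description refers to.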