From patchwork Tue May 21 20:27:59 2024
X-Patchwork-Submitter: Guinevere Larsen
X-Patchwork-Id: 90637
From: Guinevere Larsen
To: gdb-patches@sourceware.org
Cc: Guinevere Larsen
Subject: [PATCH 2/3] gdb/record: add support to vmovd and vmovq instructions
Date: Tue, 21 May 2024 17:27:59 -0300
Message-ID: <20240521202800.2865871-3-blarsen@redhat.com>
In-Reply-To: <20240521202800.2865871-1-blarsen@redhat.com>
References:
 <20240521202800.2865871-1-blarsen@redhat.com>

This commit adds support for the x86_64 AVX instructions vmovd and
vmovq.  The programmer's manuals from Intel and AMD describe these two
instructions as being almost the same, but my local testing, using gcc
13.2 on Fedora 39, showed several differences and inconsistencies.

The instructions are supposed to always use the 3-byte VEX prefix, but
I could only find 2-byte versions.  They also aren't differentiated by
the VEX.W bit, but by their opcodes and by VEX.pp.

This patch also adds a test with many different uses of both vmovd and
vmovq.
---

A brief standalone sketch of the 2-byte VEX prefix layout that the new
code relies on is appended after the patch, for reference.

 gdb/i386-tdep.c                              |  69 ++++++-
 gdb/testsuite/gdb.reverse/i386-avx-reverse.c |  90 +++++++++
 .../gdb.reverse/i386-avx-reverse.exp         | 171 ++++++++++++++++++
 3 files changed, 329 insertions(+), 1 deletion(-)
 create mode 100644 gdb/testsuite/gdb.reverse/i386-avx-reverse.c
 create mode 100644 gdb/testsuite/gdb.reverse/i386-avx-reverse.exp

diff --git a/gdb/i386-tdep.c b/gdb/i386-tdep.c
index 93a0926c4bc..d2848970ec4 100644
--- a/gdb/i386-tdep.c
+++ b/gdb/i386-tdep.c
@@ -4985,11 +4985,78 @@ static int i386_record_floats (struct gdbarch *gdbarch,
    with VEX prefix.  */
 
 static bool
-i386_record_vex (struct i386_record_s *ir, uint8_t rex_w, uint8_t rex_r,
+i386_record_vex (struct i386_record_s *ir, uint8_t vex_w, uint8_t vex_r,
 		 int opcode, struct gdbarch *gdbarch)
 {
   switch (opcode)
     {
+    case 0x6e:	/* VMOVD XMM, reg/mem  */
+      /* This is moving from a regular register or memory region into an
+         XMM register.  */
+      i386_record_modrm (ir);
+      /* ModR/M only has the 3 least significant bits of the destination
+         register; the last one is indicated by VEX.R (stored inverted).  */
+      record_full_arch_list_add_reg (ir->regcache,
+                                     ir->regmap[X86_RECORD_XMM0_REGNUM]
+                                     + ir->reg + vex_r * 8);
+      break;
+    case 0x7e:	/* VMOV(D/Q)  */
+      i386_record_modrm (ir);
+      /* Both the Intel and AMD manuals seem to be wrong about this.
+         According to them, the only difference between vmovq and vmovd
+         should be the VEX.W bit, but in empirical testing they share this
+         opcode, and the way to differentiate them here is by looking at
+         VEX.pp.  */
+      if (ir->pp == 2)
+        {
+          /* This is vmovq moving from a regular register or memory
+             into an XMM register.  As above, VEX.R is the final bit for
+             the destination register.  */
+          record_full_arch_list_add_reg (ir->regcache,
+                                         ir->regmap[X86_RECORD_XMM0_REGNUM]
+                                         + ir->reg + vex_r * 8);
+        }
+      else if (ir->pp == 1)
+        {
+          /* This is the vmovd version that stores into a regular register
+             or memory region.  */
+          /* If ModRM.mod is 11 we are saving into a register.  */
+          if (ir->mod == 3)
+            record_full_arch_list_add_reg (ir->regcache, ir->regmap[ir->rm]);
+          else
+            {
+              /* Calculate the size of memory that will be modified and
+                 store it in the form of 1 << ir->ot, since that is how
+                 the function uses it.  In theory, VEX.W is supposed to
+                 indicate the size of the memory.  In practice, I have
+                 only ever seen it set to 0, and for 16 bytes the 0xD6
+                 opcode is used.  */
+              if (vex_w)
+                ir->ot = 4;
+              else
+                ir->ot = 3;
+
+              i386_record_lea_modrm (ir);
+            }
+        }
+      else
+        error (_("Unrecognized VEX.pp value %d at address %s."),
+               ir->pp, paddress (gdbarch, ir->orig_addr));
+      break;
+    case 0xd6:	/* VMOVQ reg/mem XMM  */
+      i386_record_modrm (ir);
+      /* This is the vmovq version that stores into a regular register
+         or memory region.  */
+      /* If ModRM.mod is 11 we are saving into a register.  */
+      if (ir->mod == 3)
+        record_full_arch_list_add_reg (ir->regcache, ir->regmap[ir->rm]);
+      else
+        {
+          /* We know that this operation is always 64 bits.  */
+          ir->ot = 4;
+          i386_record_lea_modrm (ir);
+        }
+      break;
+
     default:
       gdb_printf (gdb_stderr,
 		  _("Process record does not support VEX instruction 0x%02x "
diff --git a/gdb/testsuite/gdb.reverse/i386-avx-reverse.c b/gdb/testsuite/gdb.reverse/i386-avx-reverse.c
new file mode 100644
index 00000000000..216b593736b
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/i386-avx-reverse.c
@@ -0,0 +1,90 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2023 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+/* Architecture tests for Intel i386 platform.  */
+
+#include <stdlib.h>
+
+char global_buf0[] = {0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17,
+		      0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f};
+char global_buf1[] = {0, 0, 0, 0, 0, 0, 0, 0,
+		      0, 0, 0, 0, 0, 0, 0, 0};
+char *dyn_buf0;
+char *dyn_buf1;
+
+void
+vmov_test ()
+{
+  char buf0[] = {0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37,
+		 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f};
+  char buf1[] = {0, 0, 0, 0, 0, 0, 0, 0,
+		 0, 0, 0, 0, 0, 0, 0, 0};
+
+  /* Operations on registers.  */
+  asm volatile ("mov $0, %rcx");
+  asm volatile ("mov $0xbeef, %rax");
+  asm volatile ("vmovd %rax, %xmm0");
+  asm volatile ("vmovd %xmm0, %rcx");
+
+  /* Operations based on local buffers.  */
+  asm volatile ("vmovd %0, %%xmm0": : "m"(buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(buf1));
+
+  /* Operations based on global buffers.  */
+  asm volatile ("vmovd %0, %%xmm0": : "m"(global_buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(global_buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(global_buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(global_buf1));
+
+  /* Operations based on dynamic buffers.  */
+  asm volatile ("vmovd %0, %%xmm0": : "m"(*dyn_buf0));
+  asm volatile ("vmovd %%xmm0, %0": "=m"(*dyn_buf1));
+  asm volatile ("vmovq %0, %%xmm0": : "m"(*dyn_buf0));
+  asm volatile ("vmovq %%xmm0, %0": "=m"(*dyn_buf1));
+
+  /* Reset all relevant buffers.  */
+  asm volatile ("vmovq %%xmm15, %0": "=m" (buf1));
+  asm volatile ("vmovq %%xmm15, %0": "=m" (global_buf1));
+  asm volatile ("vmovq %%xmm15, %0": "=m" (*dyn_buf1));
+
+  /* Quick test for a different xmm register.  */
+  asm volatile ("vmovd %0, %%xmm15": "=m" (buf0));
+  asm volatile ("vmovd %0, %%xmm15": "=m" (buf1));
+  asm volatile ("vmovq %0, %%xmm15": "=m" (buf0));
+  asm volatile ("vmovq %0, %%xmm15": "=m" (buf1));
+} /* end vmov_test */
+
+int
+main ()
+{
+  dyn_buf0 = (char *) malloc (sizeof (char) * 16);
+  dyn_buf1 = (char *) malloc (sizeof (char) * 16);
+  for (int i = 0; i < 16; i++)
+    {
+      dyn_buf0[i] = 0x20 + i;
+      dyn_buf1[i] = 0;
+    }
+  /* Zero the relevant xmm registers, so we know what to look for.  */
+  asm volatile ("vmovq %0, %%xmm0": : "m" (global_buf1));
+  asm volatile ("vmovq %0, %%xmm15": : "m" (global_buf1));
+
+  /* Start recording.  */
+  vmov_test ();
+  return 0;	/* end of main */
+}
diff --git a/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp b/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp
new file mode 100644
index 00000000000..42ddc3a6526
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/i386-avx-reverse.exp
@@ -0,0 +1,171 @@
+# Copyright 2009-2023 Free Software Foundation, Inc.
+
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 3 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+# This file is part of the gdb testsuite.
+
+#
+# This test tests some x86_64 AVX instructions for reverse execution.
+#
+
+require supports_reverse
+
+if {![istarget "*86*-*linux*"]} {
+    verbose "Skipping i386 reverse tests."
+    return
+}
+
+standard_testfile
+
+# Some targets have leading underscores on assembly symbols.
+set additional_flags [gdb_target_symbol_prefix_flags]
+
+if {[prepare_for_testing "failed to prepare" $testfile $srcfile \
+	 [list debug $additional_flags]]} {
+    return -1
+}
+
+# Shorthand to test reversing through one instruction and
+# testing if a register has the expected value.
+# Prefix, if included, should end with a colon and space.
+
+proc test_one_register {insn register value {prefix ""}} {
+    gdb_test "reverse-step" "$insn.*" \
+	"${prefix}reverse-step from $insn to test register $register"
+
+    gdb_test "info register $register" \
+	"$register.*uint128 = $value.*" \
+	"${prefix}verify $register before $insn"
+}
+
+# Shorthand to test reversing through one instruction and
+# testing if a variable has the expected value.
+# Prefix, if used, should end with a colon and space.
+
+proc test_one_memory {insn mem value {dynamic false} {prefix ""}} {
+    gdb_test "reverse-step" "$insn.*" \
+	"${prefix}reverse-step from $insn to test memory $mem"
+
+    # For the dynamic buffer, we have to cast and dereference the pointer.
+    set cast ""
+    if {$dynamic == true} {
+	set cast {(char [16]) *}
+    }
+
+    gdb_test "p/x $cast$mem" \
+	".*$value.*" \
+	"${prefix}verify $mem before $insn"
+}
+
+# Record the execution for the whole function, and stop at its end
+# to check if we can correctly reconstruct the state.
+# In the source code, the function must be called FUNCTION_test, and
+# at its end, it must have a comment of the form:
+# /* end FUNCTION_test */
+# Returns true if the full function could be recorded, false otherwise.
+proc record_full_function {function} {
+    set end [gdb_get_line_number "end ${function}_test "]
+    gdb_breakpoint $end temporary
+
+    if {[supports_process_record]} {
+	# Activate process record/replay.
+	gdb_test_no_output "record" "${function}: turn on process record"
+    }
+
+    gdb_test_multiple "continue" "continue to end of ${function}_test" {
+	-re " end ${function}_test .*\r\n$::gdb_prompt $" {
+	    pass $gdb_test_name
+	}
+	-re " Illegal instruction.*\r\n$::gdb_prompt $" {
+	    fail $gdb_test_name
+	    return false
+	}
+	-re "Process record does not support VEX instruction.*" {
+	    fail $gdb_test_name
+	    return false
+	}
+    }
+    return true
+}
+
+set end_of_main [gdb_get_line_number " end of main "]
+set rec_start [gdb_get_line_number " Start recording"]
+
+runto_main
+
+gdb_breakpoint $rec_start
+gdb_continue_to_breakpoint "vmov_test" ".*vmov_test.*"
+
+global hex
+global decimal
+
+# Record all the execution for the vmov tests first.
+
+if {[record_full_function "vmov"] == true} {
+    # Now execute backwards, checking all instructions.
+    # First we test the instructions that only modify registers.
+
+    test_one_register "vmovq" "xmm15" "0x3736353433323130" "reg_reset: "
+    test_one_register "vmovq" "xmm15" "0x0"
+    test_one_register "vmovd" "xmm15" "0x33323130" "reg_reset: "
+    test_one_register "vmovd" "xmm15" "0x0"
+
+    with_test_prefix buffer_reset {
+	test_one_memory "vmovq" "dyn_buf1" \
+	    "0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, 0x0" true
+	test_one_memory "vmovq" "global_buf1" \
+	    "0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, 0x0"
+	test_one_memory "vmovq" "buf1" \
+	    "0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x0"
+    }
+
+    with_test_prefix dynamic_buffers {
+	test_one_memory "vmovq" "dyn_buf1" "0x20, 0x21, 0x22, 0x23, 0x0" true
+	test_one_register "vmovq" "xmm0" "0x23222120"
+	test_one_memory "vmovd" "dyn_buf1" "0x0 .repeats 16 times" true
+	test_one_register "vmovd" "xmm0" "0x1716151413121110"
+    }
+
+    with_test_prefix global_buffers {
+	test_one_memory "vmovq" "global_buf1" "0x10, 0x11, 0x12, 0x13, 0x0"
+	test_one_register "vmovq" "xmm0" "0x13121110"
+	test_one_memory "vmovd" "global_buf1" "0x0 .repeats 16 times"
+	test_one_register "vmovd" "xmm0" "0x3736353433323130"
+    }
+
+    with_test_prefix local_buffers {
+	test_one_memory "vmovq" "buf1" "0x30, 0x31, 0x32, 0x33, 0x0"
+	test_one_register "vmovq" "xmm0" "0x33323130"
+	test_one_memory "vmovd" "buf1" "0x0 .repeats 16 times"
+	test_one_register "vmovd" "xmm0" "0xbeef"
+    }
+
+    # Regular registers don't have uint128 members, so do it manually.
+    with_test_prefix registers {
+	gdb_test "reverse-step" "vmovd %xmm0, %rcx.*" \
+	    "reverse step to check rcx recording"
+	gdb_test "print/x \$rcx" "= 0x0" "rcx was recorded"
+
+	test_one_register "vmovd" "xmm0" "0x0"
+    }
+} else {
+    untested "could not record vmov_test"
+}
+
+# Move to the end of vmov_test to set up the next test.
+# Stop recording in case of recording errors.
+gdb_test "record stop" "Process record is stopped.*" \
+    "delete history for vmov_test"
+gdb_test "finish" "Run till exit from.*vmov_test.*" "leaving vmov_test"
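
For readers unfamiliar with the VEX encoding, the following is a minimal
standalone sketch (not part of the patch and not GDB code) of how the
2-byte VEX prefix carries the R and pp fields that the new
i386_record_vex cases rely on.  The classify function and the
hand-assembled instruction bytes are illustrative assumptions of mine,
not taken from the patch; the dispatch simply mirrors the opcode/VEX.pp
criteria described in the commit message (0x6e, 0x7e with pp == 1 or 2,
and 0xd6).

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Decode a 2-byte-VEX instruction (0xC5 prefix) and report which of the
   vmovd/vmovq cases it would fall into, keying on the opcode byte and
   VEX.pp rather than VEX.W.  */

static void
classify (const uint8_t *insn, size_t len)
{
  if (len < 4 || insn[0] != 0xc5)
    {
      printf ("not a 2-byte VEX instruction\n");
      return;
    }

  /* 2-byte VEX payload byte: bit 7 = R (stored inverted), bits 6:3 =
     vvvv (inverted), bit 2 = L, bits 1:0 = pp (0 = none, 1 = 0x66,
     2 = 0xf3, 3 = 0xf2).  */
  uint8_t vex_r = (insn[1] & 0x80) ? 0 : 1;
  uint8_t pp = insn[1] & 0x3;
  uint8_t opcode = insn[2];
  uint8_t mod = insn[3] >> 6;
  uint8_t reg = (insn[3] >> 3) & 0x7;

  switch (opcode)
    {
    case 0x6e:		/* vmovd xmm, reg/mem */
      printf ("vmovd load into xmm%d\n", reg + vex_r * 8);
      break;
    case 0x7e:
      if (pp == 2)	/* 0xf3 prefix: vmovq xmm, mem */
	printf ("vmovq load into xmm%d\n", reg + vex_r * 8);
      else if (pp == 1)	/* 0x66 prefix: vmovd reg/mem, xmm */
	printf ("vmovd store to a %s\n",
		mod == 3 ? "register" : "memory location");
      break;
    case 0xd6:		/* vmovq reg/mem, xmm */
      printf ("vmovq store to a %s\n",
	      mod == 3 ? "register" : "memory location");
      break;
    default:
      printf ("unhandled VEX opcode 0x%02x\n", opcode);
    }
}

int
main (void)
{
  /* Hand-assembled examples: 0xc5 prefix, VEX byte, opcode, ModRM.  */
  const uint8_t vmovd_load[]  = { 0xc5, 0xf9, 0x6e, 0x00 };  /* vmovd (%rax),%xmm0 */
  const uint8_t vmovq_load[]  = { 0xc5, 0xfa, 0x7e, 0x00 };  /* vmovq (%rax),%xmm0 */
  const uint8_t vmovd_store[] = { 0xc5, 0xf9, 0x7e, 0x00 };  /* vmovd %xmm0,(%rax) */
  const uint8_t vmovq_store[] = { 0xc5, 0xf9, 0xd6, 0x00 };  /* vmovq %xmm0,(%rax) */

  classify (vmovd_load, sizeof vmovd_load);
  classify (vmovq_load, sizeof vmovq_load);
  classify (vmovd_store, sizeof vmovd_store);
  classify (vmovq_store, sizeof vmovq_store);
  return 0;
}

Compiling and running the sketch prints which vmovd/vmovq variant each
byte sequence is, matching the cases the patch adds to i386_record_vex.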