From patchwork Mon Jan 16 19:10:54 2017
X-Patchwork-Submitter: Luis Machado
X-Patchwork-Id: 18919
From: Luis Machado
To: <gdb-patches@sourceware.org>
Subject: [PATCH/RFC] Refactor gdb.reverse/insn-reverse.c
Date: Mon, 16 Jan 2017 13:10:54 -0600
Message-ID: <1484593854-1791-1-git-send-email-lgustavo@codesourcery.com>

This patch prepares things for an upcoming testcase for record/replay
support on x86.

As is, gdb.reverse/insn-reverse.c is divided into sections guarded by a
few #if blocks, and right now it only handles arm/aarch64.
If we move forward with requiring more tests for record/replay on
different architectures, I think this has the potential to become
cluttered with a lot of differing arch-specific code in the same file.

I briefly discussed this with Yao on IRC, and here is what I came up
with.  I've broken up the main file into separate files holding the
arch-specific bits (insn-support-<arch>.c).  The main file holds the
generic pieces that take care of calling the tests, and the
arch-specific C files are included at the top of the generic C file.

I've also added a generic initialize function, since on x86 we need to
run pre-test checks to make sure the rdrand/rdseed instructions are
supported; otherwise we will run into a SIGILL.  The arch-specific files
implement their own initialize function with whatever makes sense.
Right now the aarch64 and arm files have an empty initialize function.

Does this look reasonable?

gdb/testsuite/ChangeLog:

2017-01-16  Luis Machado  <lgustavo@codesourcery.com>

	* gdb.reverse/insn-reverse.c: Move arm and aarch64 code to
	their own files.
	(initialize): New function conditionally defined.
	(testcases): Move within conditional block.
	(main): Call initialize.
	* gdb.reverse/insn-support-aarch64.c: New file, based on aarch64
	bits of gdb.reverse/insn-reverse.c.
	* gdb.reverse/insn-support-arm.c: New file, based on arm bits of
	gdb.reverse/insn-reverse.c.
---
 gdb/testsuite/gdb.reverse/insn-reverse.c         | 144 +++--------------------
 gdb/testsuite/gdb.reverse/insn-support-aarch64.c | 105 +++++++++++++++++
 gdb/testsuite/gdb.reverse/insn-support-arm.c     |  70 +++++++++++
 3 files changed, 194 insertions(+), 125 deletions(-)
 create mode 100644 gdb/testsuite/gdb.reverse/insn-support-aarch64.c
 create mode 100644 gdb/testsuite/gdb.reverse/insn-support-arm.c

diff --git a/gdb/testsuite/gdb.reverse/insn-reverse.c b/gdb/testsuite/gdb.reverse/insn-reverse.c
index f110fcf..d2d898f 100644
--- a/gdb/testsuite/gdb.reverse/insn-reverse.c
+++ b/gdb/testsuite/gdb.reverse/insn-reverse.c
@@ -15,141 +15,32 @@
    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
 
-#if (defined __aarch64__)
-#include <arm_neon.h>
-#endif
-
-#if (defined __aarch64__)
-static void
-load (void)
-{
-  int buf[8];
-
-  asm ("ld1 { v1.8b }, [%[buf]]\n"
-       "ld1 { v2.8b, v3.8b }, [%[buf]]\n"
-       "ld1 { v3.8b, v4.8b, v5.8b }, [%[buf]]\n"
-       :
-       : [buf] "r" (buf)
-       : /* No clobbers */);
-}
-
-static void
-move (void)
-{
-  float32x2_t b1_ = vdup_n_f32(123.0f);
-  float32_t a1_ = 0;
-  float64x1_t b2_ = vdup_n_f64(456.0f);
-  float64_t a2_ = 0;
-
-  asm ("ins %0.s[0], %w1\n"
-       : "=w"(b1_)
-       : "r"(a1_), "0"(b1_)
-       : /* No clobbers */);
-
-  asm ("ins %0.d[1], %x1\n"
-       : "=w"(b2_)
-       : "r"(a2_), "0"(b2_)
-       : /* No clobbers */);
-}
-
-static void
-adv_simd_mod_imm (void)
-{
-  float32x2_t a1 = {2.0, 4.0};
-
-  asm ("bic %0.2s, #1\n"
-       "bic %0.2s, #1, lsl #8\n"
-       : "=w"(a1)
-       : "0"(a1)
-       : /* No clobbers */);
-}
-
-static void
-adv_simd_scalar_index (void)
-{
-  float64x2_t b_ = {0.0, 0.0};
-  float64_t a_ = 1.0;
-  float64_t result;
+typedef void (*testcase_ftype) (void);
 
-  asm ("fmla %d0,%d1,%2.d[1]"
-       : "=w"(result)
-       : "w"(a_), "w"(b_)
-       : /* No clobbers */);
-}
+/* The arch-specific files need to implement both the initialize function
+   and define the testcases array.  */
 
-static void
-adv_simd_smlal (void)
-{
-  asm ("smlal v13.2d, v8.2s, v0.2s");
-}
-
-static void
-adv_simd_vect_shift (void)
-{
-  asm ("fcvtzs s0, s0, #1");
-}
+#if (defined __aarch64__)
+#include "insn-support-aarch64.c"
 #elif (defined __arm__)
-static void
-ext_reg_load (void)
-{
-  char in[8];
-
-  asm ("vldr d0, [%0]" : : "r" (in));
-  asm ("vldr s3, [%0]" : : "r" (in));
-
-  asm ("vldm %0, {d3-d4}" : : "r" (in));
-  asm ("vldm %0, {s9-s11}" : : "r" (in));
-}
-
-static void
-ext_reg_mov (void)
-{
-  int i, j;
-  double d;
+#include "insn-support-arm.c"
+#else
+/* We get here if the current architecture being tested doesn't have any
+   record/replay instruction decoding tests implemented.  */
+static testcase_ftype testcases[] = {};
 
-  i = 1;
-  j = 2;
-
-  asm ("vmov s4, s5, %0, %1" : "=r" (i), "=r" (j): );
-  asm ("vmov s7, s8, %0, %1" : "=r" (i), "=r" (j): );
-  asm ("vmov %0, %1, s10, s11" : : "r" (i), "r" (j));
-  asm ("vmov %0, %1, s1, s2" : : "r" (i), "r" (j));
-
-  asm ("vmov %P2, %0, %1" : "=r" (i), "=r" (j): "w" (d));
-  asm ("vmov %1, %2, %P0" : "=w" (d) : "r" (i), "r" (j));
-}
+/* Dummy implementation in case this target doesn't have any record/replay
+   instruction decoding tests implemented.  */
 static void
-ext_reg_push_pop (void)
+initialize (void)
 {
-  double d;
-
-  asm ("vpush {%P0}" : : "w" (d));
-  asm ("vpop {%P0}" : : "w" (d));
 }
 #endif
 
-typedef void (*testcase_ftype) (void);
-
-/* Functions testing instruction decodings.  GDB will read n_testcases
-   to know how many functions to test.  */
-
-static testcase_ftype testcases[] =
-{
-#if (defined __aarch64__)
-  load,
-  move,
-  adv_simd_mod_imm,
-  adv_simd_scalar_index,
-  adv_simd_smlal,
-  adv_simd_vect_shift,
-#elif (defined __arm__)
-  ext_reg_load,
-  ext_reg_mov,
-  ext_reg_push_pop,
-#endif
-};
-
+/* GDB will read n_testcases to know how many functions to test.  The
+   functions are implemented in arch-specific files and the testcases
+   array is defined together with them.  */
 static int n_testcases = (sizeof (testcases) / sizeof (testcase_ftype));
 
 int
@@ -157,6 +48,9 @@ main ()
 {
   int i = 0;
 
+  /* Initialize any required arch-specific bits.  */
+  initialize ();
+
   for (i = 0; i < n_testcases; i++)
     testcases[i] ();
 
diff --git a/gdb/testsuite/gdb.reverse/insn-support-aarch64.c b/gdb/testsuite/gdb.reverse/insn-support-aarch64.c
new file mode 100644
index 0000000..4cb83c4
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/insn-support-aarch64.c
@@ -0,0 +1,105 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2015-2017 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+#include <arm_neon.h>
+
+static void
+load (void)
+{
+  int buf[8];
+
+  asm ("ld1 { v1.8b }, [%[buf]]\n"
+       "ld1 { v2.8b, v3.8b }, [%[buf]]\n"
+       "ld1 { v3.8b, v4.8b, v5.8b }, [%[buf]]\n"
+       :
+       : [buf] "r" (buf)
+       : /* No clobbers */);
+}
+
+static void
+move (void)
+{
+  float32x2_t b1_ = vdup_n_f32(123.0f);
+  float32_t a1_ = 0;
+  float64x1_t b2_ = vdup_n_f64(456.0f);
+  float64_t a2_ = 0;
+
+  asm ("ins %0.s[0], %w1\n"
+       : "=w"(b1_)
+       : "r"(a1_), "0"(b1_)
+       : /* No clobbers */);
+
+  asm ("ins %0.d[1], %x1\n"
+       : "=w"(b2_)
+       : "r"(a2_), "0"(b2_)
+       : /* No clobbers */);
+}
+
+static void
+adv_simd_mod_imm (void)
+{
+  float32x2_t a1 = {2.0, 4.0};
+
+  asm ("bic %0.2s, #1\n"
+       "bic %0.2s, #1, lsl #8\n"
+       : "=w"(a1)
+       : "0"(a1)
+       : /* No clobbers */);
+}
+
+static void
+adv_simd_scalar_index (void)
+{
+  float64x2_t b_ = {0.0, 0.0};
+  float64_t a_ = 1.0;
+  float64_t result;
+
+  asm ("fmla %d0,%d1,%2.d[1]"
+       : "=w"(result)
+       : "w"(a_), "w"(b_)
+       : /* No clobbers */);
+}
+
+static void
+adv_simd_smlal (void)
+{
+  asm ("smlal v13.2d, v8.2s, v0.2s");
+}
+
+static void
+adv_simd_vect_shift (void)
+{
+  asm ("fcvtzs s0, s0, #1");
+}
+
+/* Initialize arch-specific bits.  */
+
+static void
+initialize (void)
+{
+  /* AArch64 doesn't currently use this function.  */
+}
+
+/* Functions testing instruction decodings.  GDB will test all of these.  */
+static testcase_ftype testcases[] =
+{
+  load,
+  move,
+  adv_simd_mod_imm,
+  adv_simd_scalar_index,
+  adv_simd_smlal,
+  adv_simd_vect_shift,
+};
diff --git a/gdb/testsuite/gdb.reverse/insn-support-arm.c b/gdb/testsuite/gdb.reverse/insn-support-arm.c
new file mode 100644
index 0000000..cd721f6
--- /dev/null
+++ b/gdb/testsuite/gdb.reverse/insn-support-arm.c
@@ -0,0 +1,70 @@
+/* This testcase is part of GDB, the GNU debugger.
+
+   Copyright 2015-2017 Free Software Foundation, Inc.
+
+   This program is free software; you can redistribute it and/or modify
+   it under the terms of the GNU General Public License as published by
+   the Free Software Foundation; either version 3 of the License, or
+   (at your option) any later version.
+
+   This program is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+   GNU General Public License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+
+static void
+ext_reg_load (void)
+{
+  char in[8];
+
+  asm ("vldr d0, [%0]" : : "r" (in));
+  asm ("vldr s3, [%0]" : : "r" (in));
+
+  asm ("vldm %0, {d3-d4}" : : "r" (in));
+  asm ("vldm %0, {s9-s11}" : : "r" (in));
+}
+
+static void
+ext_reg_mov (void)
+{
+  int i, j;
+  double d;
+
+  i = 1;
+  j = 2;
+
+  asm ("vmov s4, s5, %0, %1" : "=r" (i), "=r" (j): );
+  asm ("vmov s7, s8, %0, %1" : "=r" (i), "=r" (j): );
+  asm ("vmov %0, %1, s10, s11" : : "r" (i), "r" (j));
+  asm ("vmov %0, %1, s1, s2" : : "r" (i), "r" (j));
+
+  asm ("vmov %P2, %0, %1" : "=r" (i), "=r" (j): "w" (d));
+  asm ("vmov %1, %2, %P0" : "=w" (d) : "r" (i), "r" (j));
+}
+
+static void
+ext_reg_push_pop (void)
+{
+  double d;
+
+  asm ("vpush {%P0}" : : "w" (d));
+  asm ("vpop {%P0}" : : "w" (d));
+}
+
+/* Initialize arch-specific bits.  */
+
+static void
+initialize (void)
+{
+  /* ARM doesn't currently use this function.  */
+}
+
+/* Functions testing instruction decodings.  GDB will test all of these.  */
+static testcase_ftype testcases[] =
+{
+  ext_reg_load,
+  ext_reg_mov,
+  ext_reg_push_pop,
+};