From patchwork Fri Feb 24 05:51:26 2023
X-Patchwork-Submitter: Christoph Müllner
X-Patchwork-Id: 65571
From: Christoph Muellner
To: gcc-patches@gcc.gnu.org, Kito Cheng, Jim Wilson, Palmer Dabbelt,
    Andrew Waterman, Philipp Tomsich, Jeff Law, Cooper Qu, Lifang Xia,
    Yunhai Shang, Zhiwei Liu
Cc: moiz.hussain, Christoph Müllner
Subject: [PATCH v3 10/11] riscv: thead: Add support for the XTheadMemIdx ISA extension
Date: Fri, 24 Feb 2023 06:51:26 +0100
Message-Id: <20230224055127.2500953-11-christoph.muellner@vrull.eu>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230224055127.2500953-1-christoph.muellner@vrull.eu>
References: <20230224055127.2500953-1-christoph.muellner@vrull.eu>

From: "moiz.hussain"

The XTheadMemIdx ISA extension provides additional addressing modes for
load and store instructions:
* increment after
* increment before
* register indexed

(A short, illustrative C sketch of these access patterns follows after
the patch.)

gcc/ChangeLog:

	* config/riscv/constraints.md (Qmb): New constraint.
	(Qma): Likewise.
	(Qmr): Likewise.
	(Qmu): Likewise.
	* config/riscv/riscv-opts.h (HAVE_POST_MODIFY_DISP): New macro.
	(HAVE_PRE_MODIFY_DISP): Likewise.
	* config/riscv/riscv-protos.h (riscv_classify_address_index): New
	prototype.
	(riscv_classify_address_modify): Likewise.
	(riscv_output_move_index): Likewise.
	(riscv_output_move_modify): Likewise.
	(riscv_legitimize_address_index_p): Likewise.
	(riscv_legitimize_address_modify_p): Likewise.
	* config/riscv/riscv.cc (enum riscv_address_type): Add new
	addressing modes.
	(struct riscv_address_info): New field 'shift'.
	(riscv_classify_address): Add support for XTheadMemIdx.
	(riscv_classify_address_index): New function.
	(riscv_classify_address_modify): New function.
	(AM_IMM): New helper macro.
	(AM_OFFSET): New helper macro.
	(riscv_legitimize_address_modify_p): New function.
	(riscv_output_move_modify): New function.
	(riscv_legitimize_address_index_p): New function.
	(riscv_output_move_index): New function.
	(riscv_legitimize_address): Add support for XTheadMemIdx.
	(riscv_rtx_costs): Adjust for XTheadMemIdx.
	(riscv_output_move): Generalize to support XTheadMemIdx.
	(riscv_print_operand_address): Add support for XTheadMemIdx.
	* config/riscv/riscv.h (INDEX_REG_CLASS): Adjust for XTheadMemIdx.
	(REGNO_OK_FOR_INDEX_P): Adjust for XTheadMemIdx.
	* config/riscv/riscv.md (*zero_extendhi2): Adjust pattern for
	XTheadMemIdx.
	(*zero_extendhi2_internal): Likewise.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/xtheadmemidx-ldi-sdi.c: New test.
	* gcc.target/riscv/xtheadmemidx-ldr-str-32.c: New test.
	* gcc.target/riscv/xtheadmemidx-ldr-str-64.c: New test.
	* gcc.target/riscv/xtheadmemidx-macros.h: New test.

Signed-off-by: M.
Moiz Hussain Signed-off-by: Christoph Müllner --- gcc/config/riscv/constraints.md | 28 ++ gcc/config/riscv/riscv-opts.h | 3 + gcc/config/riscv/riscv-protos.h | 18 + gcc/config/riscv/riscv.cc | 438 ++++++++++++++++-- gcc/config/riscv/riscv.h | 8 +- gcc/config/riscv/riscv.md | 78 +++- .../gcc.target/riscv/xtheadmemidx-ldi-sdi.c | 72 +++ .../riscv/xtheadmemidx-ldr-str-32.c | 23 + .../riscv/xtheadmemidx-ldr-str-64.c | 53 +++ .../gcc.target/riscv/xtheadmemidx-macros.h | 110 +++++ 10 files changed, 772 insertions(+), 59 deletions(-) create mode 100644 gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldi-sdi.c create mode 100644 gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-32.c create mode 100644 gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-64.c create mode 100644 gcc/testsuite/gcc.target/riscv/xtheadmemidx-macros.h diff --git a/gcc/config/riscv/constraints.md b/gcc/config/riscv/constraints.md index e49019d8fa9..a007cf0b4f5 100644 --- a/gcc/config/riscv/constraints.md +++ b/gcc/config/riscv/constraints.md @@ -174,3 +174,31 @@ (define_register_constraint "th_f_fmv" "TARGET_XTHEADFMV ? FP_REGS : NO_REGS" (define_register_constraint "th_r_fmv" "TARGET_XTHEADFMV ? GR_REGS : NO_REGS" "An integer register for XTheadFmv.") + +(define_memory_constraint "Qmb" + "@internal + An address valid for LDIB/LDIA and STIB/STIA instructions." + (and (match_code "mem") + (match_test "riscv_legitimize_address_modify_p ( + XEXP (op, 0), GET_MODE (op), false)"))) + +(define_memory_constraint "Qma" + "@internal + An address valid for LDIA and STIA instructions." + (and (match_code "mem") + (match_test "riscv_legitimize_address_modify_p ( + XEXP (op, 0), GET_MODE (op), true)"))) + +(define_memory_constraint "Qmr" + "@internal + An address valid for LDR and STR instructions." + (and (match_code "mem") + (match_test "riscv_legitimize_address_index_p ( + XEXP (op, 0), GET_MODE (op), false)"))) + +(define_memory_constraint "Qmu" + "@internal + An address valid for LDUR and STUR instructions." + (and (match_code "mem") + (match_test "riscv_legitimize_address_index_p ( + XEXP (op, 0), GET_MODE (op), true)"))) diff --git a/gcc/config/riscv/riscv-opts.h b/gcc/config/riscv/riscv-opts.h index cf0cd669be4..5cd3f7673f0 100644 --- a/gcc/config/riscv/riscv-opts.h +++ b/gcc/config/riscv/riscv-opts.h @@ -215,4 +215,7 @@ enum stack_protector_guard { #define TARGET_XTHEADMEMPAIR ((riscv_xthead_subext & MASK_XTHEADMEMPAIR) != 0) #define TARGET_XTHEADSYNC ((riscv_xthead_subext & MASK_XTHEADSYNC) != 0) +#define HAVE_POST_MODIFY_DISP TARGET_XTHEADMEMIDX +#define HAVE_PRE_MODIFY_DISP TARGET_XTHEADMEMIDX + #endif /* ! 
GCC_RISCV_OPTS_H */ diff --git a/gcc/config/riscv/riscv-protos.h b/gcc/config/riscv/riscv-protos.h index 1b7ba02726d..019a0e08285 100644 --- a/gcc/config/riscv/riscv-protos.h +++ b/gcc/config/riscv/riscv-protos.h @@ -65,6 +65,24 @@ extern void riscv_expand_int_scc (rtx, enum rtx_code, rtx, rtx); extern void riscv_expand_float_scc (rtx, enum rtx_code, rtx, rtx); extern void riscv_expand_conditional_branch (rtx, enum rtx_code, rtx, rtx); #endif + +extern bool +riscv_classify_address_index (struct riscv_address_info *info, rtx x, + machine_mode mode, bool strict_p); +extern bool +riscv_classify_address_modify (struct riscv_address_info *info, rtx x, + machine_mode mode, bool strict_p); + +extern const char * +riscv_output_move_index (rtx x, machine_mode mode, bool ldr); +extern const char * +riscv_output_move_modify (rtx x, machine_mode mode, bool ldi); + +extern bool +riscv_legitimize_address_index_p (rtx x, machine_mode mode, bool uindex); +extern bool +riscv_legitimize_address_modify_p (rtx x, machine_mode mode, bool post); + extern bool riscv_expand_conditional_move (rtx, rtx, rtx, rtx); extern rtx riscv_legitimize_call_address (rtx); extern void riscv_set_return_address (rtx, rtx); diff --git a/gcc/config/riscv/riscv.cc b/gcc/config/riscv/riscv.cc index 33854393bd2..2980dbd69f9 100644 --- a/gcc/config/riscv/riscv.cc +++ b/gcc/config/riscv/riscv.cc @@ -83,6 +83,19 @@ along with GCC; see the file COPYING3. If not see /* Classifies an address. + ADDRESS_REG_REG + A base register indexed by (optionally scaled) register. + + ADDRESS_REG_UREG + A base register indexed by (optionally scaled) zero-extended register. + + ADDRESS_REG_WB + A base register indexed by immediate offset with writeback. + + ADDRESS_REG + A natural register + offset address. The register satisfies + riscv_valid_base_register_p and the offset is a const_arith_operand. + ADDRESS_REG A natural register + offset address. The register satisfies riscv_valid_base_register_p and the offset is a const_arith_operand. @@ -97,6 +110,9 @@ along with GCC; see the file COPYING3. If not see ADDRESS_SYMBOLIC: A constant symbolic address. */ enum riscv_address_type { + ADDRESS_REG_REG, + ADDRESS_REG_UREG, + ADDRESS_REG_WB, ADDRESS_REG, ADDRESS_LO_SUM, ADDRESS_CONST_INT, @@ -201,6 +217,7 @@ struct riscv_address_info { rtx reg; rtx offset; enum riscv_symbol_type symbol_type; + int shift; }; /* One stage in a constant building sequence. These sequences have @@ -1025,12 +1042,31 @@ riscv_classify_address (struct riscv_address_info *info, rtx x, if (riscv_v_ext_vector_mode_p (mode)) return false; + if (riscv_valid_base_register_p (XEXP (x, 0), mode, strict_p) + && riscv_classify_address_index (info, XEXP (x, 1), mode, strict_p)) + { + info->reg = XEXP (x, 0); + return true; + } + else if (riscv_valid_base_register_p (XEXP (x, 1), mode, strict_p) + && riscv_classify_address_index (info, XEXP (x, 0), + mode, strict_p)) + { + info->reg = XEXP (x, 1); + return true; + } + info->type = ADDRESS_REG; info->reg = XEXP (x, 0); info->offset = XEXP (x, 1); return (riscv_valid_base_register_p (info->reg, mode, strict_p) && riscv_valid_offset_p (info->offset, mode)); + case POST_MODIFY: + case PRE_MODIFY: + + return riscv_classify_address_modify (info, x, mode, strict_p); + case LO_SUM: /* RVV load/store disallow LO_SUM. */ if (riscv_v_ext_vector_mode_p (mode)) @@ -1269,6 +1305,263 @@ riscv_emit_move (rtx dest, rtx src) : emit_move_insn_1 (dest, src)); } +/* Return true if address offset is a valid index. If it is, fill in INFO + appropriately. 
STRICT_P is true if REG_OK_STRICT is in effect. */ + +bool +riscv_classify_address_index (struct riscv_address_info *info, rtx x, + machine_mode mode, bool strict_p) +{ + enum riscv_address_type type = ADDRESS_REG_REG;; + rtx index; + int shift = 0; + + if (!TARGET_XTHEADMEMIDX) + return false; + + if (!TARGET_64BIT && mode == DImode) + return false; + + if (SCALAR_FLOAT_MODE_P (mode)) + { + if (!TARGET_HARD_FLOAT) + return false; + if (GET_MODE_SIZE (mode).to_constant () == 2) + return false; + } + + /* (reg:P) */ + if ((REG_P (x) || GET_CODE (x) == SUBREG) + && GET_MODE (x) == Pmode) + { + index = x; + shift = 0; + } + /* (zero_extend:DI (reg:SI)) */ + else if (GET_CODE (x) == ZERO_EXTEND + && GET_MODE (x) == DImode + && GET_MODE (XEXP (x, 0)) == SImode) + { + type = ADDRESS_REG_UREG; + index = XEXP (x, 0); + shift = 0; + } + /* (mult:DI (zero_extend:DI (reg:SI)) (const_int scale)) */ + else if (GET_CODE (x) == MULT + && GET_CODE (XEXP (x, 0)) == ZERO_EXTEND + && GET_MODE (XEXP (x, 0)) == DImode + && GET_MODE (XEXP (XEXP (x, 0), 0)) == SImode + && CONST_INT_P (XEXP (x, 1))) + { + type = ADDRESS_REG_UREG; + index = XEXP (XEXP (x, 0), 0); + shift = exact_log2 (INTVAL (XEXP (x, 1))); + } + /* (ashift:DI (zero_extend:DI (reg:SI)) (const_int shift)) */ + else if (GET_CODE (x) == ASHIFT + && GET_CODE (XEXP (x, 0)) == ZERO_EXTEND + && GET_MODE (XEXP (x, 0)) == DImode + && GET_MODE (XEXP (XEXP (x, 0), 0)) == SImode + && CONST_INT_P (XEXP (x, 1))) + { + type = ADDRESS_REG_UREG; + index = XEXP (XEXP (x, 0), 0); + shift = INTVAL (XEXP (x, 1)); + } + /* (mult:P (reg:P) (const_int scale)) */ + else if (GET_CODE (x) == MULT + && GET_MODE (x) == Pmode + && GET_MODE (XEXP (x, 0)) == Pmode + && CONST_INT_P (XEXP (x, 1))) + { + index = XEXP (x, 0); + shift = exact_log2 (INTVAL (XEXP (x, 1))); + } + /* (ashift:P (reg:P) (const_int shift)) */ + else if (GET_CODE (x) == ASHIFT + && GET_MODE (x) == Pmode + && GET_MODE (XEXP (x, 0)) == Pmode + && CONST_INT_P (XEXP (x, 1))) + { + index = XEXP (x, 0); + shift = INTVAL (XEXP (x, 1)); + } + else + return false; + + if (shift != 0 && !IN_RANGE (shift, 1, 3)) + return false; + + if (!strict_p + && GET_CODE (index) == SUBREG + && contains_reg_of_mode[GENERAL_REGS][GET_MODE (SUBREG_REG (index))]) + index = SUBREG_REG (index); + + if (riscv_valid_base_register_p (index, mode, strict_p)) + { + info->type = type; + info->offset = index; + info->shift = shift; + return true; + } + return false; +} + +/* Return true if address is a valid modify. If it is, fill in INFO + appropriately. STRICT_P is true if REG_OK_STRICT is in effect. */ + +bool +riscv_classify_address_modify (struct riscv_address_info *info, rtx x, + machine_mode mode, bool strict_p) +{ + +#define AM_IMM(BIT) (1LL << (5 + (BIT))) +#define AM_OFFSET(VALUE, SHIFT) (\ + ((unsigned HOST_WIDE_INT) (VALUE) + AM_IMM (SHIFT)/2 < AM_IMM (SHIFT)) \ + && !((unsigned HOST_WIDE_INT) (VALUE) & ((1 << (SHIFT)) - 1)) \ + ? 
(SHIFT) + 1 \ + : 0) + + if (!TARGET_XTHEADMEMIDX) + return false; + + if (!(INTEGRAL_MODE_P (mode) && GET_MODE_SIZE (mode).to_constant () <= 8)) + return false; + + if (!TARGET_64BIT && mode == DImode) + return false; + + if (GET_CODE (x) != POST_MODIFY + && GET_CODE (x) != PRE_MODIFY) + return false; + + info->type = ADDRESS_REG_WB; + info->reg = XEXP (x, 0); + + if (GET_CODE (XEXP (x, 1)) == PLUS + && CONST_INT_P (XEXP (XEXP (x, 1), 1)) + && rtx_equal_p (XEXP (XEXP (x, 1), 0), info->reg) + && riscv_valid_base_register_p (info->reg, mode, strict_p)) + { + info->offset = XEXP (XEXP (x, 1), 1); + int shift = AM_OFFSET (INTVAL (info->offset), 0); + if (!shift) + shift = AM_OFFSET (INTVAL (info->offset), 1); + if (!shift) + shift = AM_OFFSET (INTVAL (info->offset), 2); + if (!shift) + shift = AM_OFFSET (INTVAL (info->offset), 3); + if (shift) + { + info->shift = shift - 1; + return true; + } + } + return false; +} + +/* Return TRUE if X is a legitimate address modify. */ + +bool +riscv_legitimize_address_modify_p (rtx x, machine_mode mode, bool post) +{ + struct riscv_address_info addr; + return riscv_classify_address_modify (&addr, x, mode, false) + && (!post || GET_CODE (x) == POST_MODIFY); +} + +/* Return the LDIB/LDIA and STIB/STIA instructions. Assume + that X is MEM operand. */ + +const char * +riscv_output_move_modify (rtx x, machine_mode mode, bool ldi) +{ + static char buf[128] = {0}; + + int index = exact_log2 (GET_MODE_SIZE (mode).to_constant ()); + if (!IN_RANGE (index, 0, 3)) + return NULL; + + if (!riscv_legitimize_address_modify_p (x, mode, false)) + return NULL; + + bool post = riscv_legitimize_address_modify_p (x, mode, true); + + const char *const insn[][4] = { + { + "th.sbi%s\t%%z1,%%0", + "th.shi%s\t%%z1,%%0", + "th.swi%s\t%%z1,%%0", + "th.sdi%s\t%%z1,%%0" + }, + { + "th.lbui%s\t%%0,%%1", + "th.lhui%s\t%%0,%%1", + "th.lwi%s\t%%0,%%1", + "th.ldi%s\t%%0,%%1" + } + }; + + snprintf (buf, sizeof (buf), insn[ldi][index], post ? "a" : "b"); + return buf; +} + +bool +riscv_legitimize_address_index_p (rtx x, machine_mode mode, bool uindex) +{ + struct riscv_address_info addr; + rtx op0, op1; + + if (GET_CODE (x) != PLUS) + return false; + + op0 = XEXP (x, 0); + op1 = XEXP (x, 1); + + return ((riscv_valid_base_register_p (op0, mode, false) + && riscv_classify_address_index (&addr, op1, mode, false)) + || (riscv_valid_base_register_p (op1, mode, false) + && riscv_classify_address_index (&addr, op0, mode, false))) + && (!uindex || addr.type == ADDRESS_REG_UREG); +} + +/* Return the LDR or STR instructions. Assume + that X is MEM operand. */ + +const char * +riscv_output_move_index (rtx x, machine_mode mode, bool ldr) +{ + static char buf[128] = {0}; + + int index = exact_log2 (GET_MODE_SIZE (mode).to_constant ()); + if (!IN_RANGE (index, 0, 3)) + return NULL; + + if (!riscv_legitimize_address_index_p (x, mode, false)) + return NULL; + + bool uindex = riscv_legitimize_address_index_p (x, mode, true); + + const char *const insn[][4] = { + { + "th.s%srb\t%%z1,%%0", + "th.s%srh\t%%z1,%%0", + "th.s%srw\t%%z1,%%0", + "th.s%srd\t%%z1,%%0" + }, + { + "th.l%srbu\t%%0,%%1", + "th.l%srhu\t%%0,%%1", + "th.l%srw\t%%0,%%1", + "th.l%srd\t%%0,%%1" + } + }; + + snprintf (buf, sizeof (buf), insn[ldr][index], uindex ? "u" : ""); + + return buf; +} + /* Emit an instruction of the form (set TARGET SRC). 
*/ static rtx @@ -1631,6 +1924,42 @@ riscv_legitimize_address (rtx x, rtx oldx ATTRIBUTE_UNUSED, if (riscv_split_symbol (NULL, x, mode, &addr, FALSE)) return riscv_force_address (addr, mode); + /* Optimize BASE + OFFSET into BASE + INDEX. */ + if (TARGET_XTHEADMEMIDX + && GET_CODE (x) == PLUS && CONST_INT_P (XEXP (x, 1)) + && INTVAL (XEXP (x, 1)) != 0 + && GET_CODE (XEXP (x, 0)) == PLUS) + { + rtx base = XEXP (x, 0); + rtx offset_rtx = XEXP (x, 1); + + rtx op0 = XEXP (base, 0); + rtx op1 = XEXP (base, 1); + /* Force any scaling into a temp for CSE. */ + op0 = force_reg (Pmode, op0); + op1 = force_reg (Pmode, op1); + + /* Let the pointer register be in op0. */ + if (REG_POINTER (op1)) + std::swap (op0, op1); + + unsigned regno = REGNO (op0); + + /* If the pointer is virtual or frame related, then we know that + virtual register instantiation or register elimination is going + to apply a second constant. We want the two constants folded + together easily. Therefore, emit as (OP0 + CONST) + OP1. */ + if ((regno >= FIRST_VIRTUAL_REGISTER + && regno <= LAST_VIRTUAL_POINTER_REGISTER) + || regno == FRAME_POINTER_REGNUM + || regno == ARG_POINTER_REGNUM) + { + base = expand_binop (Pmode, add_optab, op0, offset_rtx, + NULL_RTX, true, OPTAB_DIRECT); + return gen_rtx_PLUS (Pmode, base, op1); + } + } + /* Handle BASE + OFFSET. */ if (GET_CODE (x) == PLUS && CONST_INT_P (XEXP (x, 1)) && INTVAL (XEXP (x, 1)) != 0) @@ -2408,6 +2737,13 @@ riscv_rtx_costs (rtx x, machine_mode mode, int outer_code, int opno ATTRIBUTE_UN *total = COSTS_N_INSNS (SINGLE_SHIFT_COST); return true; } + /* bit extraction pattern (xtheadmemidx, xtheadfmemidx). */ + if (outer_code == SET + && TARGET_XTHEADMEMIDX) + { + *total = COSTS_N_INSNS (SINGLE_SHIFT_COST); + return true; + } gcc_fallthrough (); case SIGN_EXTRACT: if (TARGET_XTHEADBB && outer_code == SET @@ -2826,13 +3162,23 @@ riscv_output_move (rtx dest, rtx src) } if (src_code == MEM) - switch (width) - { - case 1: return "lbu\t%0,%1"; - case 2: return "lhu\t%0,%1"; - case 4: return "lw\t%0,%1"; - case 8: return "ld\t%0,%1"; - } + { + const char *insn = NULL; + insn = riscv_output_move_index (XEXP (src, 0), GET_MODE (src), true); + if (!insn) + insn = riscv_output_move_modify (XEXP (src, 0), + GET_MODE (src), true); + if (insn) + return insn; + + switch (width) + { + case 1: return "lbu\t%0,%1"; + case 2: return "lhu\t%0,%1"; + case 4: return "lw\t%0,%1"; + case 8: return "ld\t%0,%1"; + } + } if (src_code == CONST_INT) { @@ -2887,13 +3233,24 @@ riscv_output_move (rtx dest, rtx src) } } if (dest_code == MEM) - switch (width) - { - case 1: return "sb\t%z1,%0"; - case 2: return "sh\t%z1,%0"; - case 4: return "sw\t%z1,%0"; - case 8: return "sd\t%z1,%0"; - } + { + const char *insn = NULL; + insn = riscv_output_move_index (XEXP (dest, 0), + GET_MODE (dest), false); + if (!insn) + insn = riscv_output_move_modify (XEXP (dest, 0), + GET_MODE (dest), false); + if (insn) + return insn; + + switch (width) + { + case 1: return "sb\t%z1,%0"; + case 2: return "sh\t%z1,%0"; + case 4: return "sw\t%z1,%0"; + case 8: return "sd\t%z1,%0"; + } + } } if (src_code == REG && FP_REG_P (REGNO (src))) { @@ -2911,28 +3268,32 @@ riscv_output_move (rtx dest, rtx src) } if (dest_code == MEM) - switch (width) - { - case 2: - return "fsh\t%1,%0"; - case 4: - return "fsw\t%1,%0"; - case 8: - return "fsd\t%1,%0"; - } + { + switch (width) + { + case 2: + return "fsh\t%1,%0"; + case 4: + return "fsw\t%1,%0"; + case 8: + return "fsd\t%1,%0"; + } + } } if (dest_code == REG && FP_REG_P (REGNO (dest))) { if 
(src_code == MEM) - switch (width) - { - case 2: - return "flh\t%0,%1"; - case 4: - return "flw\t%0,%1"; - case 8: - return "fld\t%0,%1"; - } + { + switch (width) + { + case 2: + return "flh\t%0,%1"; + case 4: + return "flw\t%0,%1"; + case 8: + return "fld\t%0,%1"; + } + } } if (dest_code == REG && GP_REG_P (REGNO (dest)) && src_code == CONST_POLY_INT) { @@ -4881,6 +5242,19 @@ riscv_print_operand_address (FILE *file, machine_mode mode ATTRIBUTE_UNUSED, rtx case ADDRESS_SYMBOLIC: output_addr_const (file, riscv_strip_unspec_address (x)); return; + + case ADDRESS_REG_REG: + case ADDRESS_REG_UREG: + fprintf (file, "%s,%s,%u", reg_names[REGNO (addr.reg)], + reg_names[REGNO (addr.offset)], + addr.shift); + return; + + case ADDRESS_REG_WB: + fprintf (file, "(%s),%ld,%u", reg_names[REGNO (addr.reg)], + (long) INTVAL (addr.offset) >> addr.shift, + addr.shift); + return; } gcc_unreachable (); } diff --git a/gcc/config/riscv/riscv.h b/gcc/config/riscv/riscv.h index 5bc7f2f467d..199bb30162e 100644 --- a/gcc/config/riscv/riscv.h +++ b/gcc/config/riscv/riscv.h @@ -535,7 +535,8 @@ enum reg_class factor or added to another register (as well as added to a displacement). */ -#define INDEX_REG_CLASS NO_REGS +#define INDEX_REG_CLASS ((TARGET_XTHEADMEMIDX) ? \ + GR_REGS : NO_REGS) /* We generally want to put call-clobbered registers ahead of call-saved ones. (IRA expects this.) */ @@ -705,7 +706,10 @@ typedef struct { /* Addressing modes, and classification of registers for them. */ -#define REGNO_OK_FOR_INDEX_P(REGNO) 0 +#define REGNO_OK_FOR_INDEX_P(REGNO) \ + ((TARGET_XTHEADMEMIDX) ? \ + riscv_regno_mode_ok_for_base_p (REGNO, VOIDmode, 1) : 0) + #define REGNO_MODE_OK_FOR_BASE_P(REGNO, MODE) \ riscv_regno_mode_ok_for_base_p (REGNO, MODE, 1) diff --git a/gcc/config/riscv/riscv.md b/gcc/config/riscv/riscv.md index 61f175bb62b..df31a1fffff 100644 --- a/gcc/config/riscv/riscv.md +++ b/gcc/config/riscv/riscv.md @@ -1360,12 +1360,17 @@ (define_expand "zero_extendsidi2" "TARGET_64BIT") (define_insn_and_split "*zero_extendsidi2_internal" - [(set (match_operand:DI 0 "register_operand" "=r,r") + [(set (match_operand:DI 0 "register_operand" "=r,r,r,r,r,r") (zero_extend:DI - (match_operand:SI 1 "nonimmediate_operand" " r,m")))] - "TARGET_64BIT && !TARGET_ZBA" + (match_operand:SI 1 + "nonimmediate_operand" " r,Qmu,Qmr,Qma,Qmb,m")))] + "TARGET_64BIT && !(TARGET_ZBA || TARGET_ZBB)" "@ # + th.lurwu\t%0,%1 + th.lrwu\t%0,%1 + th.lwuia\t%0,%1 + th.lwuib\t%0,%1 lwu\t%0,%1" "&& reload_completed && REG_P (operands[1]) @@ -1375,7 +1380,7 @@ (define_insn_and_split "*zero_extendsidi2_internal" (set (match_dup 0) (lshiftrt:DI (match_dup 0) (const_int 32)))] { operands[1] = gen_lowpart (DImode, operands[1]); } - [(set_attr "move_type" "shift_shift,load") + [(set_attr "move_type" "shift_shift,load,load,load,load,load") (set_attr "mode" "DI")]) (define_expand "zero_extendhi2" @@ -1384,13 +1389,18 @@ (define_expand "zero_extendhi2" (match_operand:HI 1 "nonimmediate_operand")))] "") -(define_insn_and_split "*zero_extendhi2" - [(set (match_operand:GPR 0 "register_operand" "=r,r") +(define_insn_and_split "*zero_extendhi2_internal" + [(set (match_operand:GPR 0 "register_operand" "=r,r,r,r,r,r") (zero_extend:GPR - (match_operand:HI 1 "nonimmediate_operand" " r,m")))] - "!TARGET_ZBB" + (match_operand:HI 1 + "nonimmediate_operand" " r,Qmu,Qmr,Qma,Qmb,m")))] + "!(TARGET_ZBA || TARGET_ZBB)" "@ # + th.lurhu\t%0,%1 + th.lrhu\t%0,%1 + th.lhuia\t%0,%1 + th.lhuib\t%0,%1 lhu\t%0,%1" "&& reload_completed && REG_P (operands[1]) @@ -1401,20 +1411,25 @@ 
(define_insn_and_split "*zero_extendhi2" (lshiftrt:GPR (match_dup 0) (match_dup 2)))] { operands[1] = gen_lowpart (mode, operands[1]); - operands[2] = GEN_INT(GET_MODE_BITSIZE(mode) - 16); + operands[2] = GEN_INT (GET_MODE_BITSIZE (mode) - 16); } - [(set_attr "move_type" "shift_shift,load") + [(set_attr "move_type" "shift_shift,load,load,load,load,load") (set_attr "mode" "")]) (define_insn "zero_extendqi2" - [(set (match_operand:SUPERQI 0 "register_operand" "=r,r") + [(set (match_operand:SUPERQI 0 "register_operand" "=r,r,r,r,r,r") (zero_extend:SUPERQI - (match_operand:QI 1 "nonimmediate_operand" " r,m")))] + (match_operand:QI 1 + "nonimmediate_operand" " r,Qmu,Qmr,Qma,Qmb,m")))] "" "@ andi\t%0,%1,0xff + th.lurbu\t%0,%1 + th.lrbu\t%0,%1 + th.lbuia\t%0,%1 + th.lbuib\t%0,%1 lbu\t%0,%1" - [(set_attr "move_type" "andi,load") + [(set_attr "move_type" "andi,load,load,load,load,load") (set_attr "mode" "")]) ;; @@ -1425,14 +1440,19 @@ (define_insn "zero_extendqi2" ;; .................... (define_insn "extendsidi2" - [(set (match_operand:DI 0 "register_operand" "=r,r") + [(set (match_operand:DI 0 "register_operand" "=r,r,r,r,r,r") (sign_extend:DI - (match_operand:SI 1 "nonimmediate_operand" " r,m")))] + (match_operand:SI 1 + "nonimmediate_operand" " r,Qmu,Qmr,Qma,Qmb,m")))] "TARGET_64BIT" "@ sext.w\t%0,%1 + th.lurw\t%0,%1 + th.lrw\t%0,%1 + th.lwia\t%0,%1 + th.lwib\t%0,%1 lw\t%0,%1" - [(set_attr "move_type" "move,load") + [(set_attr "move_type" "move,load,load,load,load,load") (set_attr "mode" "DI")]) (define_expand "extend2" @@ -1441,12 +1461,17 @@ (define_expand "extend2" "") (define_insn_and_split "*extend2" - [(set (match_operand:SUPERQI 0 "register_operand" "=r,r") + [(set (match_operand:SUPERQI 0 "register_operand" "=r,r,r,r,r,r") (sign_extend:SUPERQI - (match_operand:SHORT 1 "nonimmediate_operand" " r,m")))] + (match_operand:SHORT 1 + "nonimmediate_operand" " r,Qmu,Qmr,Qma,Qmb,m")))] "!TARGET_ZBB" "@ # + th.lur\t%0,%1 + th.lr\t%0,%1 + th.lia\t%0,%1 + th.lib\t%0,%1 l\t%0,%1" "&& reload_completed && REG_P (operands[1]) @@ -1459,7 +1484,7 @@ (define_insn_and_split "*extend2" operands[2] = GEN_INT (GET_MODE_BITSIZE (SImode) - GET_MODE_BITSIZE (mode)); } - [(set_attr "move_type" "shift_shift,load") + [(set_attr "move_type" "shift_shift,load,load,load,load,load") (set_attr "mode" "SI")]) (define_insn "extendhfsf2" @@ -1507,7 +1532,8 @@ (define_insn "*movhf_hardfloat" && (register_operand (operands[0], HFmode) || reg_or_0_operand (operands[1], HFmode))" { return riscv_output_move (operands[0], operands[1]); } - [(set_attr "move_type" "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") + [(set_attr "move_type" + "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") (set_attr "mode" "HF")]) (define_insn "*movhf_softfloat" @@ -1836,7 +1862,8 @@ (define_insn "*movsf_hardfloat" && (register_operand (operands[0], SFmode) || reg_or_0_operand (operands[1], SFmode))" { return riscv_output_move (operands[0], operands[1]); } - [(set_attr "move_type" "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") + [(set_attr "move_type" + "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") (set_attr "mode" "SF")]) (define_insn "*movsf_softfloat" @@ -1860,7 +1887,6 @@ (define_expand "movdf" DONE; }) - ;; In RV32, we lack fmv.x.d and fmv.d.x. Go through memory instead. ;; (However, we can still use fcvt.d.w to zero a floating-point register.) 
(define_insn "*movdf_hardfloat_rv32" @@ -1870,7 +1896,8 @@ (define_insn "*movdf_hardfloat_rv32" && (register_operand (operands[0], DFmode) || reg_or_0_operand (operands[1], DFmode))" { return riscv_output_move (operands[0], operands[1]); } - [(set_attr "move_type" "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") + [(set_attr "move_type" + "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") (set_attr "mode" "DF")]) (define_insn "*movdf_hardfloat_rv64" @@ -1880,7 +1907,8 @@ (define_insn "*movdf_hardfloat_rv64" && (register_operand (operands[0], DFmode) || reg_or_0_operand (operands[1], DFmode))" { return riscv_output_move (operands[0], operands[1]); } - [(set_attr "move_type" "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") + [(set_attr "move_type" + "fmove,mtc,fpload,fpstore,store,mtc,mfc,move,load,store") (set_attr "mode" "DF")]) (define_insn "*movdf_softfloat" @@ -2187,7 +2215,7 @@ (define_split (and:GPR (match_operand:GPR 1 "register_operand") (match_operand:GPR 2 "p2m1_shift_operand"))) (clobber (match_operand:GPR 3 "register_operand"))] - "" + "!TARGET_XTHEADMEMIDX" [(set (match_dup 3) (ashift:GPR (match_dup 1) (match_dup 2))) (set (match_dup 0) diff --git a/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldi-sdi.c b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldi-sdi.c new file mode 100644 index 00000000000..8d785e62416 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldi-sdi.c @@ -0,0 +1,72 @@ +/* { dg-do compile } */ +/* { dg-skip-if "" { *-*-* } { "-O0" "-O1" "-g" "-Oz" "-Os"} } */ +/* { dg-options "-march=rv64gc_xtheadmemidx --save-temps -O2" { target { rv64 } } } */ +/* { dg-options "-march=rv32gc_xtheadmemidx --save-temps -O2" { target { rv32 } } } */ + +#include "xtheadmemidx-macros.h" + +/* no special function attribute required */ +#define ATTR /* */ + +POST_LOAD(s_char, ATTR) +/* { dg-final { scan-assembler "th.lbia.*1,0" } } */ +PRE_LOAD(s_char, ATTR) +/* { dg-final { scan-assembler "th.lbib.*1,0" } } */ +POST_LOAD(char, ATTR) +/* { dg-final { scan-assembler "th.lbuia.*1,0" } } */ +PRE_LOAD(char, ATTR) +/* { dg-final { scan-assembler "th.lbuib.*1,0" } } */ +POST_LOAD(short, ATTR) +/* { dg-final { scan-assembler "th.lhia.*2,0" } } */ +PRE_LOAD(short, ATTR) +/* { dg-final { scan-assembler "th.lhib.*2,0" } } */ +POST_LOAD(u_short, ATTR) +/* { dg-final { scan-assembler "th.lhuia.*2,0" } } */ +PRE_LOAD(u_short, ATTR) +/* { dg-final { scan-assembler "th.lhuib.*2,0" } } */ + +POST_LOAD(int, ATTR) +/* { dg-final { scan-assembler "th.lwia.*4,0" } } */ +PRE_LOAD(int, ATTR) +/* { dg-final { scan-assembler "th.lwib.*4,0" } } */ +void int_post_load_lwuia (void* p) +{ + extern void fint2 (int*,u_ll); + u_int *q = (u_int*)p; + u_ll x = *q++; + fint2 (q, x); +} +/* { dg-final { scan-assembler "th.lwuia.*4,0" { target { rv64 } } } } */ +void int_pre_load_lwuib (void* p) +{ + extern void fint2 (int*,u_ll); + u_int *q = (u_int*)p; + u_ll x = *++q; + fint2 (q, x); +} +/* { dg-final { scan-assembler "th.lwuib.*4,0" { target { rv64 } } } } */ + +POST_LOAD(s_ll, ATTR) +/* { dg-final { scan-assembler "th.ldia.*8,0" { target { rv64 } } } } */ +PRE_LOAD(s_ll, ATTR) +/* { dg-final { scan-assembler "th.ldib.*8,0" { target { rv64 } } } } */ + +POST_STORE(char, ATTR) +/* { dg-final { scan-assembler "th.sbia.*1,0" } } */ +PRE_STORE(char, ATTR) +/* { dg-final { scan-assembler "th.sbib.*1,0" } } */ +POST_STORE(short, ATTR) +/* { dg-final { scan-assembler "th.shia.*2,0" } } */ +PRE_STORE(short, ATTR) +/* { dg-final { scan-assembler "th.shib.*2,0" } } */ +POST_STORE(int, 
ATTR) +/* { dg-final { scan-assembler "th.swia.*4,0" } } */ +PRE_STORE(int, ATTR) +/* { dg-final { scan-assembler "th.swib.*4,0" } } */ +POST_STORE(s_ll, ATTR) +/* { dg-final { scan-assembler "th.sdia.*8,0" { target { rv64 } } } } */ +PRE_STORE(s_ll, ATTR) +/* { dg-final { scan-assembler "th.sdib.*8,0" { target { rv64 } } } } */ + +/* { dg-final { scan-assembler-not "\taddi" { target { rv64 } } } } */ +/* { dg-final { cleanup-saved-temps } } */ diff --git a/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-32.c b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-32.c new file mode 100644 index 00000000000..6061eaf1d9a --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-32.c @@ -0,0 +1,23 @@ +/* { dg-do compile } */ +/* { dg-skip-if "" { *-*-* } { "-O0" "-O1" "-g" "-Oz" "-Os"} } */ +/* { dg-options "-march=rv32gc_xtheadmemidx" { target { rv32 } } } */ + +#include "xtheadmemidx-macros.h" + +MV_LOAD_1_AND_2(s_int, s_char) +/* { dg-final { scan-assembler-times "th.lrb\t" 2 { target { rv32 } } } } */ +MV_LOAD_1_AND_2(s_int, u_char) +/* { dg-final { scan-assembler-times "th.lrbu\t" 2 { target { rv32 } } } } */ +MV_LOAD_1_AND_2(s_int, s_short) +/* { dg-final { scan-assembler-times "th.lrh\t" 2 { target { rv32 } } } } */ +MV_LOAD_1_AND_2(s_int, u_short) +/* { dg-final { scan-assembler-times "th.lrhu\t" 2 { target { rv32 } } } } */ +MV_LOAD_1_AND_2(s_int, s_int) +/* { dg-final { scan-assembler-times "th.lrw\t" 2 { target { rv32 } } } } */ + +MV_STORE_1(s_int, s_int, s_char) +/* { dg-final { scan-assembler-times "th.srb\t" 1 { target { rv32 } } } } */ +MV_STORE_1(s_int, s_int, s_short) +/* { dg-final { scan-assembler-times "th.srh\t" 1 { target { rv32 } } } } */ +MV_STORE_1(s_int, s_int, s_int) +/* { dg-final { scan-assembler-times "th.srw\t" 1 { target { rv32 } } } } */ diff --git a/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-64.c b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-64.c new file mode 100644 index 00000000000..080d1853c83 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-ldr-str-64.c @@ -0,0 +1,53 @@ +/* { dg-do compile } */ +/* { dg-skip-if "" { *-*-* } { "-O0" "-O1" "-g" "-Oz" "-Os"} } */ +/* { dg-options "-march=rv64gc_xtheadmemidx" { target { rv64 } } } */ + +#include "xtheadmemidx-macros.h" + +MV_LOAD_1_AND_2(s_ll, s_char) +/* { dg-final { scan-assembler-times "th.lrb\t" 2 { target { rv64 } } } } */ +MV_LOAD_1_AND_2(s_ll, u_char) +/* { dg-final { scan-assembler-times "th.lrbu\t" 2 { target { rv64 } } } } */ +MV_LOAD_1_AND_2(s_ll, s_short) +/* { dg-final { scan-assembler-times "th.lrh\t" 2 { target { rv64 } } } } */ +MV_LOAD_1_AND_2(s_ll, u_short) +/* { dg-final { scan-assembler-times "th.lrhu\t" 2 { target { rv64 } } } } */ +MV_LOAD_1_AND_2(s_ll, s_int) +/* { dg-final { scan-assembler-times "th.lrw\t" 2 { target { rv64 } } } } */ +MV_LOAD_4(s_ll, s_int, u_int) +/* { dg-final { scan-assembler-times "th.lrwu\t" 1 { target { rv64 } } } } */ +MV_LOAD_1_AND_2(s_ll, s_ll) +/* { dg-final { scan-assembler-times "th.lrd\t" 2 { target { rv64 } } } } */ + +MV_STORE_1(s_ll, s_ll, s_char) +/* { dg-final { scan-assembler-times "th.srb\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, s_ll, s_short) +/* { dg-final { scan-assembler-times "th.srh\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, s_ll, s_int) +/* { dg-final { scan-assembler-times "th.srw\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, s_ll, s_ll) +/* { dg-final { scan-assembler-times "th.srd\t" 1 { target { rv64 } } } } */ + +MV_LOAD_3(s_ll, u_int, s_char) +/* { dg-final { 
scan-assembler-times "th.lurb\t" 1 { target { rv64 } } } } */ +MV_LOAD_3(s_ll, u_int, u_char) +/* { dg-final { scan-assembler-times "th.lurbu\t" 1 { target { rv64 } } } } */ +MV_LOAD_3(s_ll, u_int, s_short) +/* { dg-final { scan-assembler-times "th.lurh\t" 1 { target { rv64 } } } } */ +MV_LOAD_3(s_ll, u_int, u_short) +/* { dg-final { scan-assembler-times "th.lurhu\t" 1 { target { rv64 } } } } */ +MV_LOAD_3(s_ll, u_int, s_int) +/* { dg-final { scan-assembler-times "th.lurw\t" 1 { target { rv64 } } } } */ +MV_LOAD_4(s_ll, u_int, u_int) +/* { dg-final { scan-assembler-times "th.lurwu\t" 1 { target { rv64 } } } } */ +MV_LOAD_3(s_ll, u_int, u_ll) +/* { dg-final { scan-assembler-times "th.lurd\t" 1 { target { rv64 } } } } */ + +MV_STORE_1(s_ll, u_int, s_char) +/* { dg-final { scan-assembler-times "th.surb\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, u_int, s_short) +/* { dg-final { scan-assembler-times "th.surh\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, u_int, s_int) +/* { dg-final { scan-assembler-times "th.surw\t" 1 { target { rv64 } } } } */ +MV_STORE_1(s_ll, u_int, s_ll) +/* { dg-final { scan-assembler-times "th.surd\t" 1 { target { rv64 } } } } */ diff --git a/gcc/testsuite/gcc.target/riscv/xtheadmemidx-macros.h b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-macros.h new file mode 100644 index 00000000000..848e4770964 --- /dev/null +++ b/gcc/testsuite/gcc.target/riscv/xtheadmemidx-macros.h @@ -0,0 +1,110 @@ +typedef unsigned char u_char; +typedef signed char s_char; +typedef unsigned short u_short; +typedef signed short s_short; +typedef unsigned int u_int; +typedef signed int s_int; +typedef unsigned int u_ll __attribute__((mode(DI))); +typedef signed int s_ll __attribute__((mode(DI))); + +#include "stdint.h" + +#define PRE_STORE(T, ATTR) \ + ATTR T * \ + T ## _pre_store (T *p, T v) \ + { \ + *++p = v; \ + return p; \ + } + +#define POST_STORE(T, ATTR) \ + ATTR T * \ + T ## _post_store (T *p, T v) \ + { \ + *p++ = v; \ + return p; \ + } + +#define POST_STORE_VEC(T, VT, OP, ATTR) \ + ATTR T * \ + VT ## _post_store (T * p, VT v) \ + { \ + OP (p, v); \ + p += sizeof (VT) / sizeof (T); \ + return p; \ + } + +#define PRE_LOAD(T, ATTR) \ + ATTR void \ + T ## _pre_load (T *p) \ + { \ + ATTR extern void f ## T (T*,T); \ + T x = *++p; \ + f ## T (p, x); \ + } + +#define POST_LOAD(T, ATTR) \ + ATTR void \ + T ## _post_load (T *p) \ + { \ + ATTR extern void f ## T (T*,T); \ + T x = *p++; \ + f ## T (p, x); \ + } + +#define POST_LOAD_VEC(T, VT, OP, ATTR) \ + ATTR void \ + VT ## _post_load (T * p) \ + { \ + ATTR extern void f ## T (T*,T); \ + VT x = OP (p, v); \ + p += sizeof (VT) / sizeof (T); \ + f ## T (p, x); \ + } + +#define MV_LOAD_1(RS2_TYPE, RET_TYPE) \ + RET_TYPE \ + mv_load_1_ ## RS2_TYPE ## _ ## RET_TYPE (RS2_TYPE *a, int b) \ + { \ + return a[b]; \ + } + +#define MV_LOAD_2(RS2_TYPE, RET_TYPE) \ + RET_TYPE \ + mv_load_2_ ## RS2_TYPE ## _ ## RET_TYPE ( \ + RS2_TYPE rs1, RS2_TYPE rs2, int a) \ + { \ + return (*((RET_TYPE*)(uintptr_t)(rs1 + (rs2 << a)))); \ + } + +#define MV_LOAD_3(RS2_TYPE, CONV_TYPE, RET_TYPE) \ + RET_TYPE \ + mv_load_3_ ## RS2_TYPE ## _ ## CONV_TYPE ## _ ## RET_TYPE ( \ + RS2_TYPE rs1, RS2_TYPE rs2, int a) \ + { \ + CONV_TYPE c = (CONV_TYPE) rs2; \ + return (*((RET_TYPE*)(uintptr_t)(rs1 + (c << a)))); \ + } + +#define MV_LOAD_4(RS2_TYPE, CONV_TYPE, TMP_RET_TYPE) \ + uintptr_t \ + mv_load_4_ ## RS2_TYPE ## _ ## CONV_TYPE ## _ ## TMP_RET_TYPE ( \ + RS2_TYPE rs1, RS2_TYPE rs2, int a) \ + { \ + CONV_TYPE c = (CONV_TYPE) rs2; \ + return 
(*((TMP_RET_TYPE*)(uintptr_t)(rs1 + (c << a)))); \ + } + +#define MV_STORE_1(RS2_TYPE, CONV_TYPE, STORE_TYPE) \ + void \ + mv_store_1_ ## RS2_TYPE ## _ ## CONV_TYPE ## _ ## STORE_TYPE ( \ + RS2_TYPE rs1, RS2_TYPE rs2, int a, int st_val) \ + { \ + CONV_TYPE c = (CONV_TYPE) rs2; \ + STORE_TYPE* addr = (STORE_TYPE*)(uintptr_t)(rs1 + (c << a)); \ + *addr = st_val; \ + } + +#define MV_LOAD_1_AND_2(RS2_TYPE, RET_TYPE) \ + MV_LOAD_1(RS2_TYPE, RET_TYPE) \ + MV_LOAD_2(RS2_TYPE, RET_TYPE)
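
---

Editor's note: for readers unfamiliar with XTheadMemIdx, the stand-alone C
sketch below shows the kind of source code the new addressing modes target.
It mirrors the PRE_LOAD/POST_LOAD and MV_LOAD shapes from
xtheadmemidx-macros.h above, written out explicitly; the function and
variable names are illustrative only and are not part of the patch, and the
instruction names in the comments reflect the expectations encoded in the
tests (actual selection depends on compiler version and options).

/* Illustrative only -- not part of the patch.  Compile with something like
   riscv64-unknown-elf-gcc -march=rv64gc_xtheadmemidx -O2 -S example.c  */

extern void consume (int *p, int x);

/* "increment after": load *p, then advance the pointer.
   Candidate for th.lwia (load word, increment address after).  */
void
post_increment_load (int *p)
{
  int x = *p++;
  consume (p, x);
}

/* "increment before": advance the pointer, then load.
   Candidate for th.lwib (load word, increment address before).  */
void
pre_increment_load (int *p)
{
  int x = *++p;
  consume (p, x);
}

/* "register indexed": base register plus (index register << shift).
   Candidate for th.lrw (load word, register offset).  */
int
register_indexed_load (int *base, long idx)
{
  return base[idx];
}

The scaled form (rs1 + (rs2 << a)) exercised by MV_LOAD_2 in the macros
header corresponds to the shift value that riscv_print_operand_address
emits as the third operand of the register-indexed instructions.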