From patchwork Wed Mar 6 13:44:06 2024
X-Patchwork-Submitter: Robin Dapp
X-Patchwork-Id: 86875
Message-ID: <5eacee19-243d-418f-88c1-09c42773d498@gmail.com>
Date: Wed, 6 Mar 2024 14:44:06 +0100
To: gcc-patches, palmer, Kito Cheng, juzhe.zhong@rivai.ai
Cc: rdapp.gcc@gmail.com, jeffreyalaw
From: Robin Dapp
Subject: [PATCH] RISC-V: Use vmv1r.v instead of vmv.v.v for fma output reloads [PR114200].

Hi,

three-operand instructions like vmacc are modeled with an implicit
output reload when the output does not match one of the input operands.
For this we currently use vmv.v.v, which is subject to length masking:
it only copies vl elements.  In a situation where the current vl is
smaller than the full vector length and the fma's result is used as
input to a vector reduction (which is never length masked), we
effectively reduce only vl elements.  The masked-out elements are
relevant for the reduction, though, so the result is wrong.

This patch replaces the vmv.v.v reloads with full-register reloads
(vmv[1248]r.v), which always copy the entire register group regardless
of vl.

Regtested on rv64; the rv32 run is still in progress.

Regards
 Robin

gcc/ChangeLog:

	PR target/114200
	PR target/114202

	* config/riscv/vector.md: Use vmv[1248]r.v instead of vmv.v.v.

gcc/testsuite/ChangeLog:

	* gcc.target/riscv/rvv/autovec/pr114200.c: New test.
	* gcc.target/riscv/rvv/autovec/pr114202.c: New test.
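For illustration, here is a minimal sketch of the failure mode.  The
register numbers and the concrete vl are made up and are not taken from
the PR testcases:

  vsetvli  zero,a5,e16,m1,ta,ma   # vl = 4, but a register holds 8 elements
  ...
  vmv.v.v  v2,v3                  # output reload: copies only vl = 4
                                  # elements, v2[4..7] keep stale values
  vmacc.vv v2,v1,v4               # fma result lands in v2
  ...
  vredsum.vs v8,v2,v8             # the reduction is not length masked and
                                  # also consumes the stale elements

With a full-register move the reload is independent of vl:

  vmv1r.v  v2,v3                  # always copies the whole register
                                  # (vmv2r/vmv4r/vmv8r for larger LMUL)

The %m output modifier in the new templates selects the register count,
so e.g. vmv%m4r.v should end up as one of vmv[1248]r.v depending on the
mode of operand 4.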
---
 gcc/config/riscv/vector.md                    | 96 +++++++++----------
 .../gcc.target/riscv/rvv/autovec/pr114200.c   | 18 ++++
 .../gcc.target/riscv/rvv/autovec/pr114202.c   | 20 ++++
 3 files changed, 86 insertions(+), 48 deletions(-)
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114200.c
 create mode 100644 gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114202.c

diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md
index f89f9c2fa86..8b1c24c5d79 100644
--- a/gcc/config/riscv/vector.md
+++ b/gcc/config/riscv/vector.md
@@ -5351,10 +5351,10 @@ (define_insn "*pred_mul_plus<mode>_undef"
   "@
    vmadd.vv\t%0,%4,%5%p1
    vmacc.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%3,%4%p1
+   vmv%m4r.v\t%0,%4\;vmacc.vv\t%0,%3,%4%p1
    vmadd.vv\t%0,%4,%5%p1
    vmacc.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%5\;vmacc.vv\t%0,%3,%4%p1"
+   vmv%m5r.v\t%0,%5\;vmacc.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")])

@@ -5378,9 +5378,9 @@ (define_insn "*pred_madd<mode>"
   "TARGET_VECTOR"
   "@
    vmadd.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1
+   vmv%m2r.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1
    vmadd.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1"
+   vmv%m2r.v\t%0,%2\;vmadd.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5409,9 +5409,9 @@ (define_insn "*pred_macc<mode>"
   "TARGET_VECTOR"
   "@
    vmacc.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1
    vmacc.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vmacc.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5462,9 +5462,9 @@ (define_insn "*pred_madd<mode>_scalar"
   "TARGET_VECTOR"
   "@
    vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1
+   vmv%m3r.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1
    vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1"
+   vmv%m3r.v\t%0,%3\;vmadd.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -5494,9 +5494,9 @@ (define_insn "*pred_macc<mode>_scalar"
   "TARGET_VECTOR"
   "@
    vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
    vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5562,9 +5562,9 @@ (define_insn "*pred_madd<mode>_extended_scalar"
   "TARGET_VECTOR && !TARGET_64BIT"
   "@
    vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1
+   vmv%m2r.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1
    vmadd.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1"
+   vmv%m2r.v\t%0,%2\;vmadd.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -5595,9 +5595,9 @@ (define_insn "*pred_macc<mode>_extended_scalar"
   "TARGET_VECTOR && !TARGET_64BIT"
   "@
    vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1
    vmacc.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vmacc.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5649,10 +5649,10 @@ (define_insn "*pred_minus_mul<mode>_undef"
   "@
    vnmsub.vv\t%0,%4,%5%p1
    vnmsac.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1
+   vmv%m3r.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1
    vnmsub.vv\t%0,%4,%5%p1
    vnmsac.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1"
+   vmv%m3r.v\t%0,%3\;vnmsub.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")])

@@ -5676,9 +5676,9 @@ (define_insn "*pred_nmsub<mode>"
   "TARGET_VECTOR"
   "@
    vnmsub.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1
+   vmv%m2r.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1
    vnmsub.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1"
+   vmv%m2r.v\t%0,%2\;vnmsub.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -5707,9 +5707,9 @@ (define_insn "*pred_nmsac<mode>"
   "TARGET_VECTOR"
   "@
    vnmsac.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1
    vnmsac.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vnmsac.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5760,9 +5760,9 @@ (define_insn "*pred_nmsub<mode>_scalar"
   "TARGET_VECTOR"
   "@
    vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
+   vmv%m3r.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
    vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+   vmv%m3r.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -5792,9 +5792,9 @@ (define_insn "*pred_nmsac<mode>_scalar"
   "TARGET_VECTOR"
   "@
    vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
    vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -5860,9 +5860,9 @@ (define_insn "*pred_nmsub<mode>_extended_scalar"
   "TARGET_VECTOR && !TARGET_64BIT"
   "@
    vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
+   vmv%m3r.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1
    vnmsub.vx\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
+   vmv%m3r.v\t%0,%3\;vnmsub.vx\t%0,%2,%4%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -5893,9 +5893,9 @@ (define_insn "*pred_nmsac<mode>_extended_scalar"
   "TARGET_VECTOR && !TARGET_64BIT"
   "@
    vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1
    vnmsac.vx\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vnmsac.vx\t%0,%2,%3%p1"
   [(set_attr "type" "vimuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6555,10 +6555,10 @@ (define_insn "*pred_mul__undef"
   "@
    vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf.vv\t%0,%4,%5%p1
+   vmv%m3r.v\t%0,%3\;vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf.vv\t%0,%4,%5%p1"
+   vmv%m3r.v\t%0,%3\;vf.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6586,9 +6586,9 @@ (define_insn "*pred_"
   "TARGET_VECTOR"
   "@
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf.vv\t%0,%3,%4%p1
+   vmv%m2r.v\t%0,%2\;vf.vv\t%0,%3,%4%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf.vv\t%0,%3,%4%p1"
+   vmv%m2r.v\t%0,%2\;vf.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6621,9 +6621,9 @@ (define_insn "*pred_"
   "TARGET_VECTOR"
   "@
    vf.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vv\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vf.vv\t%0,%2,%3%p1
    vf.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vv\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vf.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6680,9 +6680,9 @@ (define_insn "*pred__scalar"
   "TARGET_VECTOR"
   "@
    vf.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf.vf\t%0,%2,%4%p1
+   vmv%m3r.v\t%0,%3\;vf.vf\t%0,%2,%4%p1
    vf.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf.vf\t%0,%2,%4%p1"
+   vmv%m3r.v\t%0,%3\;vf.vf\t%0,%2,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -6716,9 +6716,9 @@ (define_insn "*pred__scalar"
   "TARGET_VECTOR"
   "@
    vf.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vf\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vf.vf\t%0,%2,%3%p1
    vf.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vf\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vf.vf\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6778,10 +6778,10 @@ (define_insn "*pred_mul_neg__undef"
   "@
    vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf.vv\t%0,%4,%5%p1
+   vmv%m3r.v\t%0,%3\;vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%4,%5%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%3\;vf.vv\t%0,%4,%5%p1"
+   vmv%m3r.v\t%0,%3\;vf.vv\t%0,%4,%5%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set (attr "frm_mode")
@@ -6810,9 +6810,9 @@ (define_insn "*pred_"
   "TARGET_VECTOR"
   "@
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf.vv\t%0,%3,%4%p1
+   vmv%m2r.v\t%0,%2\;vf.vv\t%0,%3,%4%p1
    vf.vv\t%0,%3,%4%p1
-   vmv.v.v\t%0,%2\;vf.vv\t%0,%3,%4%p1"
+   vmv%m2r.v\t%0,%2\;vf.vv\t%0,%3,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "2")
@@ -6846,9 +6846,9 @@ (define_insn "*pred_"
   "TARGET_VECTOR"
   "@
    vf.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vv\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vf.vv\t%0,%2,%3%p1
    vf.vv\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vv\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vf.vv\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
@@ -6907,9 +6907,9 @@ (define_insn "*pred__scalar"
   "TARGET_VECTOR"
   "@
    vf.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf.vf\t%0,%2,%4%p1
+   vmv%m3r.v\t%0,%3\;vf.vf\t%0,%2,%4%p1
    vf.vf\t%0,%2,%4%p1
-   vmv.v.v\t%0,%3\;vf.vf\t%0,%2,%4%p1"
+   vmv%m3r.v\t%0,%3\;vf.vf\t%0,%2,%4%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "3")
@@ -6944,9 +6944,9 @@ (define_insn "*pred__scalar"
   "TARGET_VECTOR"
   "@
    vf.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vf\t%0,%2,%3%p1
+   vmv%m4r.v\t%0,%4\;vf.vf\t%0,%2,%3%p1
    vf.vf\t%0,%2,%3%p1
-   vmv.v.v\t%0,%4\;vf.vf\t%0,%2,%3%p1"
+   vmv%m4r.v\t%0,%4\;vf.vf\t%0,%2,%3%p1"
   [(set_attr "type" "vfmuladd")
    (set_attr "mode" "<MODE>")
    (set_attr "merge_op_idx" "4")
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114200.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114200.c
new file mode 100644
index 00000000000..23e37ca9b9f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114200.c
@@ -0,0 +1,18 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-options { -march=rv64gcv -mabi=lp64d -O3 -fwrapv } } */
+
+short a, e = 1;
+_Bool b, d;
+short c[300];
+
+int main() {
+  for (int f = 0; f < 19; f++) {
+    for (int g = 0; g < 14; g++)
+      for (int h = 0; h < 10; h++)
+        a += c[g] + e;
+    b += d;
+  }
+
+  if (a != 2660)
+    __builtin_abort ();
+}
diff --git a/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114202.c b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114202.c
new file mode 100644
index 00000000000..f743b08b7af
--- /dev/null
+++ b/gcc/testsuite/gcc.target/riscv/rvv/autovec/pr114202.c
@@ -0,0 +1,20 @@
+/* { dg-do run { target { riscv_v } } } */
+/* { dg-options { -march=rv64gcv -mabi=lp64d -O3 -fwrapv } } */
+
+signed char a = 0, d = 0;
+_Bool b;
+signed char c[324];
+int e;
+
+int main() {
+  c[63] = 50;
+  for (int f = 0; f < 9; f++) {
+    for (unsigned g = 0; g < 12; g++)
+      for (char h = 0; h < 8; h++)
+        e = a += c[g * 9];
+    b = e ? d : 0;
+  }
+
+  if (a != 16)
+    __builtin_abort ();
+}