From patchwork Tue May 31 08:50:09 2022
From: juzhe.zhong@rivai.ai
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 18/21] Add rest intrinsic support
Date: Tue, 31 May 2022 16:50:09 +0800
Message-Id: <20220531085012.269719-19-juzhe.zhong@rivai.ai>
In-Reply-To: <20220531085012.269719-1-juzhe.zhong@rivai.ai>
References: <20220531085012.269719-1-juzhe.zhong@rivai.ai>
Cc: kito.cheng@gmail.com, juzhe.zhong@rivai.ai

From: zhongjuzhe

gcc/ChangeLog:

        * config/riscv/riscv-vector-builtins-functions.cc
        (reduceop::assemble_name): New function.
        (reduceop::get_argument_types): New function.
        (reduceop::get_mask_type): New function.
        (vsadd::expand): New function.
        (vsaddu::expand): New function.
        (vaadd::expand): New function.
        (vaaddu::expand): New function.
        (vssub::expand): New function.
        (vssubu::expand): New function.
        (vasub::expand): New function.
        (vasubu::expand): New function.
        (vssrl::expand): New function.
        (vssra::expand): New function.
        (vsmul::expand): New function.
        (vnclip::expand): New function.
        (vnclipu::expand): New function.
        (funop::call_properties): New function.
        (fbinop::call_properties): New function.
        (fwbinop::call_properties): New function.
        (fternop::call_properties): New function.
        (vfadd::expand): New function.
        (vfsub::expand): New function.
        (vfmul::expand): New function.
        (vfdiv::expand): New function.
        (vfrsub::expand): New function.
        (vfrdiv::expand): New function.
        (vfneg::expand): New function.
        (vfwadd::expand): New function.
        (vfwsub::expand): New function.
        (vfwmul::expand): New function.
        (vfmacc::expand): New function.
        (vfmsac::expand): New function.
        (vfnmacc::expand): New function.
        (vfnmsac::expand): New function.
        (vfmadd::expand): New function.
        (vfnmadd::expand): New function.
        (vfmsub::expand): New function.
        (vfnmsub::expand): New function.
        (vfwmacc::expand): New function.
        (vfwnmacc::expand): New function.
        (vfwmsac::expand): New function.
        (vfwnmsac::expand): New function.
        (vfsqrt::expand): New function.
        (vfrsqrt7::expand): New function.
        (vfrec7::expand): New function.
        (vfmax::expand): New function.
        (vfmin::expand): New function.
        (vfsgnj::expand): New function.
        (vfsgnjn::expand): New function.
        (vfsgnjx::expand): New function.
        (vfabs::expand): New function.
        (vfcmp::assemble_name): New function.
        (vmfeq::expand): New function.
        (vmfne::expand): New function.
        (vmflt::expand): New function.
        (vmfgt::expand): New function.
        (vmfle::expand): New function.
        (vmfge::expand): New function.
        (vfclass::expand): New function.
        (vfmerge::get_position_of_dest_arg): New function.
        (vfmerge::expand): New function.
        (vfmv::can_be_overloaded_p): New function.
        (vfmv::expand): New function.
        (vfcvt_f2i::assemble_name): New function.
        (vfcvt_f2i::expand): New function.
        (vfcvt_f2u::assemble_name): New function.
        (vfcvt_f2u::expand): New function.
        (vfcvt_rtz_f2i::assemble_name): New function.
        (vfcvt_rtz_f2i::expand): New function.
        (vfcvt_rtz_f2u::assemble_name): New function.
        (vfcvt_rtz_f2u::expand): New function.
        (vfcvt_i2f::assemble_name): New function.
        (vfcvt_i2f::expand): New function.
        (vfcvt_u2f::assemble_name): New function.
        (vfcvt_u2f::expand): New function.
        (vfwcvt_f2i::assemble_name): New function.
        (vfwcvt_f2i::expand): New function.
        (vfwcvt_f2u::assemble_name): New function.
        (vfwcvt_f2u::expand): New function.
        (vfwcvt_rtz_f2i::assemble_name): New function.
        (vfwcvt_rtz_f2i::expand): New function.
        (vfwcvt_rtz_f2u::assemble_name): New function.
        (vfwcvt_rtz_f2u::expand): New function.
        (vfwcvt_i2f::assemble_name): New function.
        (vfwcvt_i2f::expand): New function.
        (vfwcvt_u2f::assemble_name): New function.
        (vfwcvt_u2f::expand): New function.
        (vfwcvt_f2f::assemble_name): New function.
        (vfwcvt_f2f::expand): New function.
        (vfncvt_f2i::assemble_name): New function.
        (vfncvt_f2i::expand): New function.
        (vfncvt_f2u::assemble_name): New function.
        (vfncvt_f2u::expand): New function.
        (vfncvt_rtz_f2i::assemble_name): New function.
        (vfncvt_rtz_f2i::expand): New function.
        (vfncvt_rtz_f2u::assemble_name): New function.
        (vfncvt_rtz_f2u::expand): New function.
        (vfncvt_i2f::assemble_name): New function.
        (vfncvt_i2f::expand): New function.
        (vfncvt_u2f::assemble_name): New function.
        (vfncvt_u2f::expand): New function.
        (vfncvt_f2f::assemble_name): New function.
        (vfncvt_f2f::expand): New function.
        (vfncvt_f2rodf::assemble_name): New function.
        (vfncvt_f2rodf::expand): New function.
        (vredsum::expand): New function.
        (vredmax::expand): New function.
        (vredmaxu::expand): New function.
        (vredmin::expand): New function.
        (vredminu::expand): New function.
        (vredand::expand): New function.
        (vredor::expand): New function.
        (vredxor::expand): New function.
        (vwredsum::expand): New function.
        (vwredsumu::expand): New function.
        (freduceop::call_properties): New function.
        (vfredosum::expand): New function.
        (vfredusum::expand): New function.
        (vfredmax::expand): New function.
        (vfredmin::expand): New function.
        (vfwredosum::expand): New function.
        (vfwredusum::expand): New function.
        (vmand::expand): New function.
        (vmor::expand): New function.
        (vmxor::expand): New function.
        (vmnand::expand): New function.
        (vmnor::expand): New function.
        (vmxnor::expand): New function.
        (vmandn::expand): New function.
        (vmorn::expand): New function.
        (vmmv::expand): New function.
        (vmnot::expand): New function.
        (vmclr::get_argument_types): New function.
        (vmclr::can_be_overloaded_p): New function.
        (vmclr::expand): New function.
        (vmset::get_argument_types): New function.
        (vmset::can_be_overloaded_p): New function.
        (vmset::expand): New function.
        (vcpop::get_return_type): New function.
        (vcpop::expand): New function.
        (vfirst::get_return_type): New function.
        (vfirst::expand): New function.
        (vmsbf::expand): New function.
        (vmsif::expand): New function.
        (vmsof::expand): New function.
        (viota::can_be_overloaded_p): New function.
        (viota::expand): New function.
        (vid::get_argument_types): New function.
        (vid::can_be_overloaded_p): New function.
        (vid::expand): New function.
        (vmv_x_s::assemble_name): New function.
        (vmv_x_s::expand): New function.
        (vmv_s_x::assemble_name): New function.
        (vmv_s_x::expand): New function.
        (vfmv_f_s::assemble_name): New function.
        (vfmv_f_s::expand): New function.
        (vfmv_s_f::assemble_name): New function.
        (vfmv_s_f::expand): New function.
        (vslideup::expand): New function.
        (vslidedown::expand): New function.
        (vslide1up::expand): New function.
        (vslide1down::expand): New function.
        (vfslide1up::expand): New function.
        (vfslide1down::expand): New function.
        (vrgather::expand): New function.
        (vrgatherei16::expand): New function.
        (vcompress::get_position_of_dest_arg): New function.
        (vcompress::expand): New function.
        * config/riscv/riscv-vector-builtins-functions.def
        (vsadd): New macro define.
        (vsaddu): New macro define.
        (vaadd): New macro define.
        (vaaddu): New macro define.
        (vssub): New macro define.
        (vssubu): New macro define.
        (vasub): New macro define.
        (vasubu): New macro define.
        (vsmul): New macro define.
        (vssrl): New macro define.
        (vssra): New macro define.
        (vnclip): New macro define.
        (vnclipu): New macro define.
        (vfadd): New macro define.
        (vfsub): New macro define.
        (vfmul): New macro define.
        (vfdiv): New macro define.
        (vfrsub): New macro define.
        (vfrdiv): New macro define.
        (vfneg): New macro define.
        (vfwadd): New macro define.
        (vfwsub): New macro define.
        (vfwmul): New macro define.
        (vfmacc): New macro define.
        (vfmsac): New macro define.
        (vfnmacc): New macro define.
        (vfnmsac): New macro define.
        (vfmadd): New macro define.
        (vfnmadd): New macro define.
        (vfmsub): New macro define.
        (vfnmsub): New macro define.
        (vfwmacc): New macro define.
        (vfwmsac): New macro define.
        (vfwnmacc): New macro define.
        (vfwnmsac): New macro define.
        (vfsqrt): New macro define.
        (vfrsqrt7): New macro define.
        (vfrec7): New macro define.
        (vfmax): New macro define.
        (vfmin): New macro define.
        (vfsgnj): New macro define.
        (vfsgnjn): New macro define.
        (vfsgnjx): New macro define.
        (vfabs): New macro define.
        (vmfeq): New macro define.
        (vmfne): New macro define.
        (vmflt): New macro define.
        (vmfle): New macro define.
        (vmfgt): New macro define.
        (vmfge): New macro define.
        (vfclass): New macro define.
        (vfmerge): New macro define.
        (vfmv): New macro define.
        (vfcvt_x_f_v): New macro define.
        (vfcvt_xu_f_v): New macro define.
        (vfcvt_rtz_x_f_v): New macro define.
        (vfcvt_rtz_xu_f_v): New macro define.
        (vfcvt_f_x_v): New macro define.
        (vfcvt_f_xu_v): New macro define.
        (vfwcvt_x_f_v): New macro define.
        (vfwcvt_xu_f_v): New macro define.
        (vfwcvt_rtz_x_f_v): New macro define.
        (vfwcvt_rtz_xu_f_v): New macro define.
        (vfwcvt_f_x_v): New macro define.
        (vfwcvt_f_xu_v): New macro define.
        (vfwcvt_f_f_v): New macro define.
        (vfncvt_x_f_w): New macro define.
        (vfncvt_xu_f_w): New macro define.
        (vfncvt_rtz_x_f_w): New macro define.
        (vfncvt_rtz_xu_f_w): New macro define.
        (vfncvt_f_x_w): New macro define.
        (vfncvt_f_xu_w): New macro define.
        (vfncvt_f_f_w): New macro define.
        (vfncvt_rod_f_f_w): New macro define.
        (vredsum): New macro define.
        (vredmax): New macro define.
        (vredmaxu): New macro define.
        (vredmin): New macro define.
        (vredminu): New macro define.
        (vredand): New macro define.
        (vredor): New macro define.
        (vredxor): New macro define.
        (vwredsum): New macro define.
        (vwredsumu): New macro define.
        (vfredosum): New macro define.
        (vfredusum): New macro define.
        (vfredmax): New macro define.
        (vfredmin): New macro define.
        (vfwredosum): New macro define.
        (vfwredusum): New macro define.
        (vmand): New macro define.
        (vmor): New macro define.
        (vmxor): New macro define.
        (vmnand): New macro define.
        (vmnor): New macro define.
        (vmxnor): New macro define.
        (vmandn): New macro define.
        (vmorn): New macro define.
        (vmmv): New macro define.
        (vmnot): New macro define.
        (vmclr): New macro define.
        (vmset): New macro define.
        (vcpop): New macro define.
        (vfirst): New macro define.
        (vmsbf): New macro define.
        (vmsif): New macro define.
        (vmsof): New macro define.
        (viota): New macro define.
        (vid): New macro define.
        (vmv_x_s): New macro define.
        (vmv_s_x): New macro define.
        (vfmv_f_s): New macro define.
        (vfmv_s_f): New macro define.
        (vslideup): New macro define.
        (vslidedown): New macro define.
        (vslide1up): New macro define.
        (vslide1down): New macro define.
        (vfslide1up): New macro define.
        (vfslide1down): New macro define.
        (vrgather): New macro define.
        (vrgatherei16): New macro define.
        (vcompress): New macro define.
        * config/riscv/riscv-vector-builtins-functions.h
        (class reduceop): New class.
        (class vsadd): New class.
        (class vsaddu): New class.
        (class vaadd): New class.
        (class vaaddu): New class.
        (class vssub): New class.
        (class vssubu): New class.
        (class vasub): New class.
        (class vasubu): New class.
        (class vssrl): New class.
        (class vssra): New class.
        (class vsmul): New class.
        (class vnclip): New class.
        (class vnclipu): New class.
        (class funop): New class.
        (class fbinop): New class.
        (class fwbinop): New class.
        (class fternop): New class.
        (class vfadd): New class.
        (class vfsub): New class.
        (class vfmul): New class.
        (class vfdiv): New class.
        (class vfrsub): New class.
        (class vfrdiv): New class.
        (class vfneg): New class.
        (class vfwadd): New class.
        (class vfwsub): New class.
        (class vfwmul): New class.
        (class vfmacc): New class.
        (class vfmsac): New class.
        (class vfnmacc): New class.
        (class vfnmsac): New class.
        (class vfmadd): New class.
        (class vfnmadd): New class.
        (class vfmsub): New class.
        (class vfnmsub): New class.
        (class vfwmacc): New class.
        (class vfwmsac): New class.
        (class vfwnmacc): New class.
        (class vfwnmsac): New class.
        (class vfsqrt): New class.
        (class vfrsqrt7): New class.
        (class vfrec7): New class.
        (class vfmax): New class.
        (class vfmin): New class.
        (class vfsgnj): New class.
        (class vfsgnjn): New class.
        (class vfsgnjx): New class.
        (class vfabs): New class.
        (class vfcmp): New class.
        (class vmfeq): New class.
        (class vmfne): New class.
        (class vmflt): New class.
        (class vmfle): New class.
        (class vmfgt): New class.
        (class vmfge): New class.
        (class vfclass): New class.
        (class vfmerge): New class.
        (class vfmv): New class.
        (class vfcvt_f2i): New class.
        (class vfcvt_f2u): New class.
        (class vfcvt_rtz_f2i): New class.
        (class vfcvt_rtz_f2u): New class.
        (class vfcvt_i2f): New class.
        (class vfcvt_u2f): New class.
        (class vfwcvt_f2i): New class.
        (class vfwcvt_f2u): New class.
        (class vfwcvt_rtz_f2i): New class.
        (class vfwcvt_rtz_f2u): New class.
        (class vfwcvt_i2f): New class.
        (class vfwcvt_u2f): New class.
        (class vfwcvt_f2f): New class.
        (class vfncvt_f2i): New class.
        (class vfncvt_f2u): New class.
        (class vfncvt_rtz_f2i): New class.
        (class vfncvt_rtz_f2u): New class.
        (class vfncvt_i2f): New class.
        (class vfncvt_u2f): New class.
        (class vfncvt_f2f): New class.
        (class vfncvt_f2rodf): New class.
        (class vredsum): New class.
        (class vredmax): New class.
        (class vredmaxu): New class.
        (class vredmin): New class.
        (class vredminu): New class.
        (class vredand): New class.
        (class vredor): New class.
        (class vredxor): New class.
        (class vwredsum): New class.
        (class vwredsumu): New class.
        (class freduceop): New class.
        (class vfredosum): New class.
        (class vfredusum): New class.
        (class vfredmax): New class.
        (class vfredmin): New class.
        (class vfwredosum): New class.
        (class vfwredusum): New class.
        (class vmand): New class.
        (class vmor): New class.
        (class vmxor): New class.
        (class vmnand): New class.
        (class vmnor): New class.
        (class vmxnor): New class.
        (class vmandn): New class.
        (class vmorn): New class.
        (class vmmv): New class.
        (class vmnot): New class.
        (class vmclr): New class.
        (class vmset): New class.
        (class vcpop): New class.
        (class vfirst): New class.
        (class vmsbf): New class.
        (class vmsif): New class.
        (class vmsof): New class.
        (class viota): New class.
        (class vid): New class.
        (class vmv_x_s): New class.
        (class vmv_s_x): New class.
        (class vfmv_f_s): New class.
        (class vfmv_s_f): New class.
        (class vslideup): New class.
        (class vslidedown): New class.
        (class vslide1up): New class.
        (class vslide1down): New class.
        (class vfslide1up): New class.
        (class vfslide1down): New class.
        (class vrgather): New class.
        (class vrgatherei16): New class.
        (class vcompress): New class.
        * config/riscv/riscv-vector.cc
        (rvv_adjust_frame): Change to static function.
        (enum GEN_CLASS): Change to static function.
        (modify_operands): Change to static function.
        (emit_op5_vmv_s_x): Change to static function.
        (emit_op5): Change to static function.
        (emit_op7_slide1): Change to static function.
        (emit_op7): Change to static function.
        * config/riscv/vector-iterators.md: Fix iterators.
        * config/riscv/vector.md
        (@v_s_x): New pattern.
        (@vslide1_vx): New pattern.
        (vmv_vlx2_help): New pattern.
        (@vfmv_v_f): New pattern.
        (@v_vv): New pattern.
        (@vmclr_m): New pattern.
        (@vsssub_vv): New pattern.
        (@vussub_vv): New pattern.
        (@vmset_m): New pattern.
        (@v_vx_internal): New pattern.
        (@v_vx_32bit): New pattern.
        (@vsssub_vx_internal): New pattern.
        (@vussub_vx_internal): New pattern.
        (@vsssub_vx_32bit): New pattern.
        (@vussub_vx_32bit): New pattern.
        (@v_vv): New pattern.
        (@v_vx_internal): New pattern.
        (@v_vx_32bit): New pattern.
        (@v_vv): New pattern.
        (@v_vx): New pattern.
        (@vn_wv): New pattern.
        (@vn_wx): New pattern.
        (@vf_vv): New pattern.
        (@vf_vf): New pattern.
        (@vfr_vf): New pattern.
        (@vfw_vv): New pattern.
        (@vfw_vf): New pattern.
        (@vfw_wv): New pattern.
        (@vfw_wf): New pattern.
        (@vfwmul_vv): New pattern.
        (@vfwmul_vf): New pattern.
        (@vf_vv): New pattern.
        (@vf_vf): New pattern.
        (@vfwmacc_vv): New pattern.
        (@vfwmsac_vv): New pattern.
        (@vfwmacc_vf): New pattern.
        (@vfwmsac_vf): New pattern.
        (@vfwnmacc_vv): New pattern.
        (@vfwnmsac_vv): New pattern.
        (@vfwnmacc_vf): New pattern.
        (@vfwnmsac_vf): New pattern.
        (@vfsqrt_v): New pattern.
        (@vf_v): New pattern.
        (@vfsgnj_vv): New pattern.
        (@vfsgnj_vf): New pattern.
        (@vfneg_v): New pattern.
        (@vfabs_v): New pattern.
        (@vmf_vv): New pattern.
        (@vmf_vf): New pattern.
        (@vfclass_v): New pattern.
        (@vfmerge_vfm): New pattern.
        (@vfcvt_x_f_v): New pattern.
        (@vfcvt_rtz_x_f_v): New pattern.
        (@vfcvt_f_x_v): New pattern.
        (@vfwcvt_x_f_v): New pattern.
        (@vfwcvt_rtz_x_f_v): New pattern.
        (@vfwcvt_f_x_v): New pattern.
        (@vfwcvt_f_f_v): New pattern.
        (@vfncvt_x_f_w): New pattern.
        (@vfncvt_rtz_x_f_w): New pattern.
        (@vfncvt_f_x_w): New pattern.
        (@vfncvt_f_f_w): New pattern.
        (@vfncvt_rod_f_f_w): New pattern.
        (@vred_vs): New pattern.
        (@vwredsum_vs): New pattern.
        (@vfred_vs): New pattern.
        (@vfwredusum_vs): New pattern.
        (@vfwredosum_vs): New pattern.
        (@vm_mm): New pattern.
        (@vmn_mm): New pattern.
        (@vmnot_mm): New pattern.
        (@vmmv_m): New pattern.
        (@vmnot_m): New pattern.
        (@vcpop__m): New pattern.
        (@vfirst__m): New pattern.
        (@vm_m): New pattern.
        (@viota_m): New pattern.
        (@vid_v): New pattern.
        (@vmv_x_s): New pattern.
        (vmv_x_s_di_internal): New pattern.
        (vmv_x_s_lo): New pattern.
        (vmv_x_s_hi): New pattern.
        (@vmv_s_x_internal): New pattern.
        (@vmv_s_x_32bit): New pattern.
        (@vfmv_f_s): New pattern.
        (@vfmv_s_f): New pattern.
        (@vslide_vx): New pattern.
        (@vslide1_vx_internal): New pattern.
        (@vslide1_vx_32bit): New pattern.
        (@vfslide1_vf): New pattern.
        (@vrgather_vv): New pattern.
        (@vrgatherei16_vv): New pattern.
        (@vrgather_vx): New pattern.
        (@vcompress_vm): New pattern.
---
 .../riscv/riscv-vector-builtins-functions.cc  | 1703 +++++++++++
 .../riscv/riscv-vector-builtins-functions.def |  148 +
 .../riscv/riscv-vector-builtins-functions.h   | 1367 +++++++++
 gcc/config/riscv/riscv-vector.cc              |  145 +-
 gcc/config/riscv/vector-iterators.md          |    3 +
 gcc/config/riscv/vector.md                    | 2508 ++++++++++++++++-
 6 files changed, 5830 insertions(+), 44 deletions(-)

diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.cc b/gcc/config/riscv/riscv-vector-builtins-functions.cc
index 6e0fd0b3570..fe3a477347e 100644
--- a/gcc/config/riscv/riscv-vector-builtins-functions.cc
+++ b/gcc/config/riscv/riscv-vector-builtins-functions.cc
@@ -3063,6 +3063,1709 @@ vmv::expand (const function_instance &instance, tree exp, rtx target) const
   return expand_builtin_insn (icode, exp, target, instance);
 }
 
+/* A function implementation for reduction functions. */
+char *
+reduceop::assemble_name (function_instance &instance)
+{
+  intrinsic_rename (instance, 0, 1);
+  append_name (instance.get_base_name ());
+  append_name (get_pred_str (instance.get_pred (), true));
+  return finish_name ();
+}
+
+void
+reduceop::get_argument_types (const function_instance &instance,
+                              vec<tree> &argument_types) const
+{
+  for (unsigned int i = 1; i < instance.get_arg_pattern ().arg_len; i++)
+    argument_types.quick_push (get_dt_t_with_index (instance, i));
+}
+
+tree
+reduceop::get_mask_type (tree, const function_instance &,
+                         const vec<tree> &argument_types) const
+{
+  machine_mode mask_mode;
+  gcc_assert (rvv_get_mask_mode (TYPE_MODE (argument_types[0])).exists (&mask_mode));
+  return mode2mask_t (mask_mode);
+}
+
+/* A function implementation for vsadd functions. */
+rtx
+vsadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (SS_PLUS, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsaddu functions. */
+rtx
+vsaddu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (US_PLUS, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSADDU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vaadd functions. */
+rtx
+vaadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_AADD, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VAADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vaaddu functions. */
+rtx
+vaaddu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_AADDU, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VAADDU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vssub functions. */
+rtx
+vssub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vsssub_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vssubu functions. */
+rtx
+vssubu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vussub_vv (mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSSUBU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vasub functions. */
+rtx
+vasub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_ASUB, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VASUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vasubu functions. */
+rtx
+vasubu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_ASUBU, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VASUBU, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vssrl functions. */
+rtx
+vssrl::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_SSRL, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_SSRL, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vssra functions. */
+rtx
+vssra::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_SSRA, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_SSRA, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vsmul functions. */
+rtx
+vsmul::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_v_vv (UNSPEC_SMUL, mode);
+  else
+    icode = code_for_v_vx (UNSPEC_VSMUL, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnclip functions. */
+rtx
+vnclip::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_wv)
+    icode = code_for_vn_wv (UNSPEC_SIGNED_CLIP, mode);
+  else
+    icode = code_for_vn_wx (UNSPEC_SIGNED_CLIP, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vnclipu functions. */
+rtx
+vnclipu::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_wv)
+    icode = code_for_vn_wv (UNSPEC_UNSIGNED_CLIP, mode);
+  else
+    icode = code_for_vn_wx (UNSPEC_UNSIGNED_CLIP, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for funop functions. */
+unsigned int
+funop::call_properties () const
+{
+  return CP_RAISE_FP_EXCEPTIONS;
+}
+
+/* A function implementation for fbinop functions. */
+unsigned int
+fbinop::call_properties () const
+{
+  return CP_RAISE_FP_EXCEPTIONS;
+}
+
+/* A function implementation for fwbinop functions. */
+unsigned int
+fwbinop::call_properties () const
+{
+  return CP_RAISE_FP_EXCEPTIONS;
+}
+
+/* A function implementation for fternop functions. */
+unsigned int
+fternop::call_properties () const
+{
+  return CP_RAISE_FP_EXCEPTIONS;
+}
+
+/* A function implementation for vfadd functions. */
+rtx
+vfadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (PLUS, mode);
+  else
+    icode = code_for_vf_vf (PLUS, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfsub functions. */
+rtx
+vfsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (MINUS, mode);
+  else
+    icode = code_for_vf_vf (MINUS, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfmul functions. */
+rtx
+vfmul::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (MULT, mode);
+  else
+    icode = code_for_vf_vf (MULT, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfdiv functions. */
+rtx
+vfdiv::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (DIV, mode);
+  else
+    icode = code_for_vf_vf (DIV, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfrsub functions. */
+rtx
+vfrsub::expand (const function_instance &instance, tree exp,
+                rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vfr_vf (MINUS, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfrdiv functions. */
+rtx
+vfrdiv::expand (const function_instance &instance, tree exp,
+                rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vfr_vf (DIV, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfneg functions. */
+rtx
+vfneg::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode = code_for_vfneg_v (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwadd functions. */
+rtx
+vfwadd::expand (const function_instance &instance, tree exp,
+                rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfw_vv (PLUS, mode);
+  else if (instance.get_operation () == OP_vf)
+    icode = code_for_vfw_vf (PLUS, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vfw_wv (PLUS, mode);
+  else
+    icode = code_for_vfw_wf (PLUS, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwsub functions. */
+rtx
+vfwsub::expand (const function_instance &instance, tree exp,
+                rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfw_vv (MINUS, mode);
+  else if (instance.get_operation () == OP_vf)
+    icode = code_for_vfw_vf (MINUS, mode);
+  else if (instance.get_operation () == OP_wv)
+    icode = code_for_vfw_wv (MINUS, mode);
+  else
+    icode = code_for_vfw_wf (MINUS, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwmul functions. */
+rtx
+vfwmul::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[1];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfwmul_vv (mode);
+  else
+    icode = code_for_vfwmul_vf (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfmacc functions. */
+rtx
+vfmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_MACC, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_MACC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfmsac functions. */
+rtx
+vfmsac::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_MSAC, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_MSAC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfnmacc functions. */
+rtx
+vfnmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_NMACC, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_NMACC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfnmsac functions. */
+rtx
+vfnmsac::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_NMSAC, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_NMSAC, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfmadd functions. */
+rtx
+vfmadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_MADD, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_MADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfnmadd functions. */
+rtx
+vfnmadd::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_NMADD, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_NMADD, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfmsub functions. */
+rtx
+vfmsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_MSUB, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_MSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfnmsub functions. */
+rtx
+vfnmsub::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = TYPE_MODE (TREE_TYPE (exp));
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vf_vv (UNSPEC_NMSUB, mode);
+  else
+    icode = code_for_vf_vf (UNSPEC_NMSUB, mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwmacc functions. */
+rtx
+vfwmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfwmacc_vv (mode);
+  else
+    icode = code_for_vfwmacc_vf (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwnmacc functions. */
+rtx
+vfwnmacc::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfwnmacc_vv (mode);
+  else
+    icode = code_for_vfwnmacc_vf (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwmsac functions. */
+rtx
+vfwmsac::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfwmsac_vv (mode);
+  else
+    icode = code_for_vfwmsac_vf (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfwnmsac functions. */
+rtx
+vfwnmsac::expand (const function_instance &instance, tree exp, rtx target) const
+{
+  machine_mode mode = instance.get_arg_pattern ().arg_list[2];
+  enum insn_code icode;
+  if (instance.get_operation () == OP_vv)
+    icode = code_for_vfwnmsac_vv (mode);
+  else
+    icode = code_for_vfwnmsac_vf (mode);
+  return expand_builtin_insn (icode, exp, target, instance);
+}
+
+/* A function implementation for vfsqrt functions.
*/ +rtx +vfsqrt::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfsqrt_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfrsqrt7 functions. */ +rtx +vfrsqrt7::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vf_v (UNSPEC_RSQRT7, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfrec7 functions. */ +rtx +vfrec7::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vf_v (UNSPEC_REC7, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmax functions. */ +rtx +vfmax::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vf_vv (SMAX, mode); + else + icode = code_for_vf_vf (SMAX, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmin functions. */ +rtx +vfmin::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vf_vv (SMIN, mode); + else + icode = code_for_vf_vf (SMIN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfsgnj functions. 
*/ +rtx +vfsgnj::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vfsgnj_vv (UNSPEC_COPYSIGN, mode); + else + icode = code_for_vfsgnj_vf (UNSPEC_COPYSIGN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfsgnjn functions. */ +rtx +vfsgnjn::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vfsgnj_vv (UNSPEC_NCOPYSIGN, mode); + else + icode = code_for_vfsgnj_vf (UNSPEC_NCOPYSIGN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfsgnjx functions. */ +rtx +vfsgnjx::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vfsgnj_vv (UNSPEC_XORSIGN, mode); + else + icode = code_for_vfsgnj_vf (UNSPEC_XORSIGN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfabs functions. */ +rtx +vfabs::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfabs_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcmp functions. */ +char * +vfcmp::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0, 1); + append_name (instance.get_base_name ()); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +/* A function implementation for vmfeq functions. 
*/ +rtx +vmfeq::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (EQ, mode); + else + icode = code_for_vmf_vf (EQ, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmfne functions. */ +rtx +vmfne::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (NE, mode); + else + icode = code_for_vmf_vf (NE, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmflt functions. */ +rtx +vmflt::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (LT, mode); + else + icode = code_for_vmf_vf (LT, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmfgt functions. */ +rtx +vmfgt::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (GT, mode); + else + icode = code_for_vmf_vf (GT, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmfle functions. 
*/ +rtx +vmfle::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (LE, mode); + else + icode = code_for_vmf_vf (LE, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmfge functions. */ +rtx +vmfge::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vmf_vv (GE, mode); + else + icode = code_for_vmf_vf (GE, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfclass functions. */ +rtx +vfclass::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfclass_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmerge functions. */ +size_t +vfmerge::get_position_of_dest_arg (enum predication_index) const +{ + return 1; +} + +rtx +vfmerge::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfmerge_vfm (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmv functions. 
*/ +bool +vfmv::can_be_overloaded_p (const function_instance &instance) const +{ + if (instance.get_pred () == PRED_tu) + return true; + + return false; +} + +rtx +vfmv::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_v_f) + icode = code_for_vfmv_v_f (mode); + else + icode = code_for_vmv_v_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_x_f_v functions. */ +char * +vfcvt_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfcvt_x_f_v (mode, UNSPEC_FLOAT_TO_SIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_xu_f_v functions. */ +char * +vfcvt_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfcvt_x_f_v (mode, UNSPEC_FLOAT_TO_UNSIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_rtz_x_f_v functions. 
*/ +char * +vfcvt_rtz_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_rtz_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_rtz_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfcvt_rtz_x_f_v (mode, FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_rtz_xu_f_v functions. */ +char * +vfcvt_rtz_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_rtz_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_rtz_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfcvt_rtz_x_f_v (mode, UNSIGNED_FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_f_x_v functions. */ +char * +vfcvt_i2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_i2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfcvt_f_x_v (mode, FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfcvt_f_xu_v functions. 
*/ +char * +vfcvt_u2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfcvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfcvt_u2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfcvt_f_x_v (mode, UNSIGNED_FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_x_f_v functions. */ +char * +vfwcvt_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_x_f_v (mode, UNSPEC_FLOAT_TO_SIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_xu_f_v functions. */ +char * +vfwcvt_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_x_f_v (mode, UNSPEC_FLOAT_TO_UNSIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_rtz_x_f_v functions. 
*/ +char * +vfwcvt_rtz_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_rtz_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_rtz_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_rtz_x_f_v (mode, FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_rtz_xu_f_v functions. */ +char * +vfwcvt_rtz_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_rtz_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_rtz_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_rtz_x_f_v (mode, UNSIGNED_FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_f_x_v functions. */ +char * +vfwcvt_i2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_i2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_f_x_v (mode, FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_f_xu_v functions. 
*/ +char * +vfwcvt_u2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_u2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_f_x_v (mode, UNSIGNED_FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwcvt_f_f_v functions. */ +char * +vfwcvt_f2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfwcvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfwcvt_f2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwcvt_f_f_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_x_f_w functions. */ +char * +vfncvt_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_x_f_w (mode, UNSPEC_FLOAT_TO_SIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_xu_f_w functions. 
*/ +char * +vfncvt_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_x_f_w (mode, UNSPEC_FLOAT_TO_UNSIGNED_INT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_rtz_x_f_w functions. */ +char * +vfncvt_rtz_f2i::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_rtz_x"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_rtz_f2i::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_rtz_x_f_w (mode, FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_rtz_xu_f_w functions. */ +char * +vfncvt_rtz_f2u::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_rtz_xu"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_rtz_f2u::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_rtz_x_f_w (mode, UNSIGNED_FIX); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_f_x_w functions. 
*/ +char * +vfncvt_i2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_i2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_f_x_w (mode, FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_f_xu_w functions. */ +char * +vfncvt_u2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_u2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfncvt_f_x_w (mode, UNSIGNED_FLOAT); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_f_f_w functions. */ +char * +vfncvt_f2f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_f2f::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfncvt_f_f_w (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfncvt_rod_f_f_w functions. 
*/ +char * +vfncvt_f2rodf::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + append_name ("vfncvt_rod_f"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfncvt_f2rodf::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfncvt_rod_f_f_w (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredsum functions. */ +rtx +vredsum::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_SUM, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredmax functions. */ +rtx +vredmax::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_MAX, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredmaxu functions. */ +rtx +vredmaxu::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_MAXU, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredmin functions. */ +rtx +vredmin::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_MIN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredminu functions. 
*/ +rtx +vredminu::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_MINU, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredand functions. */ +rtx +vredand::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_AND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredor functions. */ +rtx +vredor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_OR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vredxor functions. */ +rtx +vredxor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vred_vs (UNSPEC_REDUC_XOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vwredsum functions. */ +rtx +vwredsum::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vwredsum_vs (SIGN_EXTEND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vwredsumu functions. 
*/ +rtx +vwredsumu::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vwredsum_vs (ZERO_EXTEND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for freduceop functions. */ +unsigned int +freduceop::call_properties () const +{ + return CP_RAISE_FP_EXCEPTIONS; +} + +/* A function implementation for vfredosum functions. */ +rtx +vfredosum::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfred_vs (UNSPEC_REDUC_ORDERED_SUM, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfredusum functions. */ +rtx +vfredusum::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfred_vs (UNSPEC_REDUC_UNORDERED_SUM, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfredmax functions. */ +rtx +vfredmax::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfred_vs (UNSPEC_REDUC_MAX, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfredmin functions. */ +rtx +vfredmin::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfred_vs (UNSPEC_REDUC_MIN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwredosum functions. 
*/ +rtx +vfwredosum::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwredosum_vs (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfwredusum functions. */ +rtx +vfwredusum::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfwredusum_vs (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmand functions. */ +rtx +vmand::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_mm (AND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmor functions. */ +rtx +vmor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_mm (IOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmxor functions. */ +rtx +vmxor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_mm (XOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmnand functions. */ +rtx +vmnand::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmn_mm (AND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmnor functions. 
*/ +rtx +vmnor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmn_mm (IOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmxnor functions. */ +rtx +vmxnor::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmn_mm (XOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmlogicn functions. */ +rtx +vmandn::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmnot_mm (AND, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmorn functions. */ +rtx +vmorn::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmnot_mm (IOR, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmmv functions. */ +rtx +vmmv::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmmv_m (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmnot functions. */ +rtx +vmnot::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmnot_m (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmclr functions. 
*/ +void +vmclr::get_argument_types (const function_instance &, + vec &) const +{ +} + +bool +vmclr::can_be_overloaded_p (const function_instance &) const +{ + return false; +} + +rtx +vmclr::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmclr_m (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmset functions. */ +void +vmset::get_argument_types (const function_instance &, + vec &) const +{ +} + +bool +vmset::can_be_overloaded_p (const function_instance &) const +{ + return false; +} + +rtx +vmset::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vmset_m (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vcpop functions. */ +tree +vcpop::get_return_type (const function_instance &) const +{ + return long_unsigned_type_node; +} + +rtx +vcpop::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vcpop_m (mode, Pmode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfirst functions. */ +tree +vfirst::get_return_type (const function_instance &) const +{ + return long_integer_type_node; +} + +rtx +vfirst::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[0]; + enum insn_code icode = code_for_vfirst_m (mode, Pmode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmsbf functions. 
*/ +rtx +vmsbf::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_m (UNSPEC_SBF, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmsif functions. */ +rtx +vmsif::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_m (UNSPEC_SIF, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmsof functions. */ +rtx +vmsof::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vm_m (UNSPEC_SOF, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for viota functions. */ +bool +viota::can_be_overloaded_p (const function_instance &instance) const +{ + if (instance.get_pred () == PRED_void || instance.get_pred () == PRED_ta || + instance.get_pred () == PRED_tama) + return false; + + return true; +} + +rtx +viota::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_viota_m (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vid functions. 
*/ +void +vid::get_argument_types (const function_instance &, + vec<tree> &) const +{ +} + +bool +vid::can_be_overloaded_p (const function_instance &instance) const +{ + if (instance.get_pred () == PRED_void || instance.get_pred () == PRED_ta || + instance.get_pred () == PRED_tama) + return false; + + return true; +} + +rtx +vid::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vid_v (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmv_x_s functions. */ +char * +vmv_x_s::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0, 1); + append_name ("vmv_x"); + return finish_name (); +} + +rtx +vmv_x_s::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vmv_x_s (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vmv_s_x functions. */ +char * +vmv_s_x::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + if (instance.get_pred () == PRED_ta) + return nullptr; + append_name ("vmv_s"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vmv_s_x::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_v_s_x (UNSPEC_VMVS, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmv_f_s functions. 
*/ +char * +vfmv_f_s::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0, 1); + append_name ("vfmv_f"); + return finish_name (); +} + +rtx +vfmv_f_s::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = instance.get_arg_pattern ().arg_list[1]; + enum insn_code icode = code_for_vfmv_f_s (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfmv_s_f functions. */ +char * +vfmv_s_f::assemble_name (function_instance &instance) +{ + intrinsic_rename (instance, 0); + if (instance.get_pred () == PRED_ta) + return nullptr; + append_name ("vfmv_s"); + append_name (get_pred_str (instance.get_pred (), true)); + return finish_name (); +} + +rtx +vfmv_s_f::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfmv_s_f (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vslideup functions. */ +rtx +vslideup::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vslide_vx (UNSPEC_SLIDEUP, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vslidedown functions. */ +rtx +vslidedown::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vslide_vx (UNSPEC_SLIDEDOWN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vslide1up functions. 
*/ +rtx +vslide1up::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vslide1_vx (UNSPEC_SLIDE1UP, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vslide1down functions. */ +rtx +vslide1down::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vslide1_vx (UNSPEC_SLIDE1DOWN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfslide1up functions. */ +rtx +vfslide1up::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfslide1_vf (UNSPEC_SLIDE1UP, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vfslide1down functions. */ +rtx +vfslide1down::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vfslide1_vf (UNSPEC_SLIDE1DOWN, mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vrgather functions. */ +rtx +vrgather::expand (const function_instance &instance, tree exp, rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode; + if (instance.get_operation () == OP_vv) + icode = code_for_vrgather_vv (mode); + else + icode = code_for_vrgather_vx (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vrgatherei16 functions. 
*/ +rtx +vrgatherei16::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vrgatherei16_vv (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + +/* A function implementation for vcompress functions. */ +size_t +vcompress::get_position_of_dest_arg (enum predication_index) const +{ + return 1; +} + +rtx +vcompress::expand (const function_instance &instance, tree exp, + rtx target) const +{ + machine_mode mode = TYPE_MODE (TREE_TYPE (exp)); + enum insn_code icode = code_for_vcompress_vm (mode); + return expand_builtin_insn (icode, exp, target, instance); +} + } // end namespace riscv_vector using namespace riscv_vector; diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.def b/gcc/config/riscv/riscv-vector-builtins-functions.def index bf9d42e6d67..efe89b3e270 100644 --- a/gcc/config/riscv/riscv-vector-builtins-functions.def +++ b/gcc/config/riscv/riscv-vector-builtins-functions.def @@ -237,6 +237,154 @@ DEF_RVV_FUNCTION(vmerge, vmerge, (3, VITER(VF, signed), VATTR(0, VF, signed), VA DEF_RVV_FUNCTION(vmv, vmv, (2, VITER(VI, signed), VATTR(0, VSUB, signed)), PAT_tail, pred_tail, OP_v_v | OP_v_x) DEF_RVV_FUNCTION(vmv, vmv, (2, VITER(VI, unsigned), VATTR(0, VSUB, unsigned)), PAT_tail, pred_tail, OP_v_v | OP_v_x) DEF_RVV_FUNCTION(vmv, vmv, (2, VITER(VF, signed), VATTR(0, VSUB, signed)), PAT_tail, pred_tail, OP_v_v) +/* 12. Vector Fixed-Point Arithmetic Instructions. 
*/ +DEF_RVV_FUNCTION(vsadd, vsadd, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vsaddu, vsaddu, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vaadd, vaadd, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vaaddu, vaaddu, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vssub, vssub, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vssubu, vssubu, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vasub, vasub, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vasubu, vasubu, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vsmul, vsmul, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, signed)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vssrl, vssrl, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vssra, vssra, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VI, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vnclip, vnclip, (3, VITER(VWI, signed), VATTR(0, VW, signed), VATTR(0, VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx) +DEF_RVV_FUNCTION(vnclipu, vnclipu, (3, VITER(VWI, unsigned), VATTR(0, VW, unsigned), VATTR(0, VWI, unsigned)), pat_mask_tail, pred_all, OP_wv | OP_wx) +/* 13. Vector Floating-Point Arithmetic Instructions. 
*/ +DEF_RVV_FUNCTION(vfadd, vfadd, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfsub, vfsub, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfmul, vfmul, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfdiv, vfdiv, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfrsub, vfrsub, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vf) +DEF_RVV_FUNCTION(vfrdiv, vfrdiv, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vf) +DEF_RVV_FUNCTION(vfneg, vfneg, (2, VITER(VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vfwadd, vfwadd, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwsub, vfwsub, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwadd, vfwadd, (3, VATTR(2, VW, signed), VATTR(2, VW, signed), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_wv | OP_wf) +DEF_RVV_FUNCTION(vfwsub, vfwsub, (3, VATTR(2, VW, signed), VATTR(2, VW, signed), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_wv | OP_wf) +DEF_RVV_FUNCTION(vfwmul, vfwmul, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf | OP_wv | OP_wf) +DEF_RVV_FUNCTION(vfmacc, vfmacc, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfmsac, vfmsac, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfnmacc, 
vfnmacc, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfnmsac, vfnmsac, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfmadd, vfmadd, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfnmadd, vfnmadd, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfmsub, vfmsub, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfnmsub, vfnmsub, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwmacc, vfwmacc, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwmsac, vfwmsac, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwnmacc, vfwnmacc, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfwnmsac, vfwnmsac, (3, VATTR(1, VW, signed), VITER(VWF, signed), VATTR(1, VWF, signed)), pat_mask_tail_dest, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfsqrt, vfsqrt, (2, VITER(VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vfrsqrt7, vfrsqrt7, (2, VITER(VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vfrec7, vfrec7, (2, VITER(VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vfmax, vfmax, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfmin, vfmin, (3, VITER(VF, 
signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfsgnj, vfsgnj, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfsgnjn, vfsgnjn, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfsgnjx, vfsgnjx, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfabs, vfabs, (2, VITER(VF, signed), VATTR(0, VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vmfeq, vmfeq, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vmfne, vmfne, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vmflt, vmflt, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vmfle, vmfle, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vmfgt, vmfgt, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vmfge, vmfge, (3, VATTR(1, VM, signed), VITER(VF, signed), VATTR(1, VF, signed)), pat_mask_ignore_tp, pred_mask, OP_vv | OP_vf) +DEF_RVV_FUNCTION(vfclass, vfclass, (2, VATTR(1, VMAP, unsigned), VITER(VF, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vfmerge, vfmerge, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VF, signed)), PAT_tail | PAT_merge, pred_tail, OP_vfm) +DEF_RVV_FUNCTION(vfmv, vfmv, (2, VITER(VF, signed), VATTR(0, VSUB, signed)), PAT_tail, pred_tail, OP_v_f) +DEF_RVV_FUNCTION(vfcvt_x_f_v, vfcvt_f2i, (2, VATTR(1, VMAP, signed), VITER(VF, signed)), pat_mask_tail, pred_all, 
OP_none) +DEF_RVV_FUNCTION(vfcvt_xu_f_v, vfcvt_f2u, (2, VATTR(1, VMAP, unsigned), VITER(VF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfcvt_rtz_x_f_v, vfcvt_rtz_f2i, (2, VATTR(1, VMAP, signed), VITER(VF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfcvt_rtz_xu_f_v, vfcvt_rtz_f2u, (2, VATTR(1, VMAP, unsigned), VITER(VF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfcvt_f_x_v, vfcvt_i2f, (2, VITER(VF, signed), VATTR(0, VMAP, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfcvt_f_xu_v, vfcvt_u2f, (2, VITER(VF, signed), VATTR(0, VMAP, unsigned)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_x_f_v, vfwcvt_f2i, (2, VATTR(1, VWMAP, signed), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_xu_f_v, vfwcvt_f2u, (2, VATTR(1, VWMAP, unsigned), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_rtz_x_f_v, vfwcvt_rtz_f2i, (2, VATTR(1, VWMAP, signed), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_rtz_xu_f_v, vfwcvt_rtz_f2u, (2, VATTR(1, VWMAP, unsigned), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_f_x_v, vfwcvt_i2f, (2, VATTR(1, VWFMAP, signed), VITER(VWINOQI, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_f_xu_v, vfwcvt_u2f, (2, VATTR(1, VWFMAP, signed), VITER(VWINOQI, unsigned)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfwcvt_f_f_v, vfwcvt_f2f, (2, VATTR(1, VW, signed), VITER(VWF, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_x_f_w, vfncvt_f2i, (2, VITER(VWINOQI, signed), VATTR(0, VWFMAP, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_xu_f_w, vfncvt_f2u, (2, VITER(VWINOQI, unsigned), VATTR(0, VWFMAP, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_rtz_x_f_w, vfncvt_rtz_f2i, (2, VITER(VWINOQI, signed), VATTR(0, VWFMAP, signed)), pat_mask_tail, pred_all, OP_none) 
+DEF_RVV_FUNCTION(vfncvt_rtz_xu_f_w, vfncvt_rtz_f2u, (2, VITER(VWINOQI, unsigned), VATTR(0, VWFMAP, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_f_x_w, vfncvt_i2f, (2, VITER(VWF, signed), VATTR(0, VWMAP, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_f_xu_w, vfncvt_u2f, (2, VITER(VWF, signed), VATTR(0, VWMAP, unsigned)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_f_f_w, vfncvt_f2f, (2, VITER(VWF, signed), VATTR(0, VW, signed)), pat_mask_tail, pred_all, OP_none) +DEF_RVV_FUNCTION(vfncvt_rod_f_f_w, vfncvt_f2rodf, (2, VITER(VWF, signed), VATTR(0, VW, signed)), pat_mask_tail, pred_all, OP_none) +/* 14. Vector Reduction Operations. */ +DEF_RVV_FUNCTION(vredsum, vredsum, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredsum, vredsum, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredmax, vredmax, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredmaxu, vredmaxu, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredmin, vredmin, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredminu, vredminu, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredand, vredand, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredand, vredand, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) 
+DEF_RVV_FUNCTION(vredor, vredor, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredor, vredor, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredxor, vredxor, (3, VATTR(1, VLMUL1, signed), VITER(VI, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vredxor, vredxor, (3, VATTR(1, VLMUL1, unsigned), VITER(VI, unsigned), VATTR(1, VLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vwredsum, vwredsum, (3, VATTR(1, VWLMUL1, signed), VITER(VWREDI, signed), VATTR(1, VWLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vwredsumu, vwredsumu, (3, VATTR(1, VWLMUL1, unsigned), VITER(VWREDI, unsigned), VATTR(1, VWLMUL1, unsigned)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfredosum, vfredosum, (3, VATTR(1, VLMUL1, signed), VITER(VF, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfredusum, vfredusum, (3, VATTR(1, VLMUL1, signed), VITER(VF, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfredmax, vfredmax, (3, VATTR(1, VLMUL1, signed), VITER(VF, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfredmin, vfredmin, (3, VATTR(1, VLMUL1, signed), VITER(VF, signed), VATTR(1, VLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfwredosum, vfwredosum, (3, VATTR(1, VWLMUL1, signed), VITER(VWREDF, signed), VATTR(1, VWLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +DEF_RVV_FUNCTION(vfwredusum, vfwredusum, (3, VATTR(1, VWLMUL1, signed), VITER(VWREDF, signed), VATTR(1, VWLMUL1, signed)), pat_void_dest_ignore_mp, pred_reduce, OP_vs) +/* 15. Vector Mask Instructions. 
*/ +DEF_RVV_FUNCTION(vmand, vmand, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmor, vmor, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmxor, vmxor, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmnand, vmnand, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmnor, vmnor, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmxnor, vmxnor, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmandn, vmandn, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmorn, vmorn, (3, VITER(VB, signed), VATTR(0, VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_mm) +DEF_RVV_FUNCTION(vmmv, vmmv, (2, VITER(VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_m) +DEF_RVV_FUNCTION(vmnot, vmnot, (2, VITER(VB, signed), VATTR(0, VB, signed)), PAT_none, PRED_void, OP_m) +DEF_RVV_FUNCTION(vmclr, vmclr, (1, VITER(VB, signed)), PAT_none, PRED_void, OP_m) +DEF_RVV_FUNCTION(vmset, vmset, (1, VITER(VB, signed)), PAT_none, PRED_void, OP_m) +DEF_RVV_FUNCTION(vcpop, vcpop, (2, VITER(VB, signed), VATTR(0, VB, signed)), pat_mask_ignore_policy, pred_mask2, OP_m) +DEF_RVV_FUNCTION(vfirst, vfirst, (2, VITER(VB, signed), VATTR(0, VB, signed)), pat_mask_ignore_policy, pred_mask2, OP_m) +DEF_RVV_FUNCTION(vmsbf, vmsbf, (2, VITER(VB, signed), VATTR(0, VB, signed)), pat_mask_ignore_tp, pred_mask, OP_m) +DEF_RVV_FUNCTION(vmsif, vmsif, (2, VITER(VB, signed), VATTR(0, VB, signed)), pat_mask_ignore_tp, pred_mask, OP_m) +DEF_RVV_FUNCTION(vmsof, vmsof, (2, VITER(VB, signed), VATTR(0, VB, signed)), pat_mask_ignore_tp, pred_mask, OP_m) 
+DEF_RVV_FUNCTION(viota, viota, (2, VITER(VI, unsigned), VATTR(0, VM, signed)), pat_mask_tail, pred_all, OP_m) +DEF_RVV_FUNCTION(vid, vid, (1, VITER(VI, signed)), pat_mask_tail, pred_all, OP_v) +DEF_RVV_FUNCTION(vid, vid, (1, VITER(VI, unsigned)), pat_mask_tail, pred_all, OP_v) +/* 16. Vector Permutation Instructions. */ +DEF_RVV_FUNCTION(vmv_x_s, vmv_x_s, (2, VATTR(1, VSUB, signed), VITER(VI, signed)), PAT_none, PRED_none, OP_none) +DEF_RVV_FUNCTION(vmv_x_s, vmv_x_s, (2, VATTR(1, VSUB, unsigned), VITER(VI, unsigned)), PAT_none, PRED_none, OP_none) +DEF_RVV_FUNCTION(vmv_s_x, vmv_s_x, (2, VITER(VI, signed), VATTR(0, VSUB, signed)), pat_tail_void_dest, pred_tail, OP_none) +DEF_RVV_FUNCTION(vmv_s_x, vmv_s_x, (2, VITER(VI, unsigned), VATTR(0, VSUB, unsigned)), pat_tail_void_dest, pred_tail, OP_none) +DEF_RVV_FUNCTION(vfmv_f_s, vfmv_f_s, (2, VATTR(1, VSUB, signed), VITER(VF, signed)), PAT_none, PRED_none, OP_none) +DEF_RVV_FUNCTION(vfmv_s_f, vfmv_s_f, (2, VITER(VF, signed), VATTR(0, VSUB, signed)), pat_tail_void_dest, pred_tail, OP_none) +DEF_RVV_FUNCTION(vslideup, vslideup, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VSUB, signed)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslideup, vslideup, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VSUB, unsigned)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslideup, vslideup, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VSUB, signed)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslidedown, vslidedown, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VSUB, signed)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslidedown, vslidedown, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VSUB, unsigned)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslidedown, vslidedown, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VSUB, signed)), pat_mask_tail_dest, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslide1up, vslide1up, (3, VITER(VI, 
signed), VATTR(0, VI, signed), VATTR(0, VSUB, signed)), pat_mask_tail, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslide1up, vslide1up, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VSUB, unsigned)), pat_mask_tail, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslide1down, vslide1down, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VSUB, signed)), pat_mask_tail, pred_all, OP_vx) +DEF_RVV_FUNCTION(vslide1down, vslide1down, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VSUB, unsigned)), pat_mask_tail, pred_all, OP_vx) +DEF_RVV_FUNCTION(vfslide1up, vfslide1up, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VSUB, signed)), pat_mask_tail, pred_all, OP_vf) +DEF_RVV_FUNCTION(vfslide1down, vfslide1down, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VSUB, signed)), pat_mask_tail, pred_all, OP_vf) +DEF_RVV_FUNCTION(vrgather, vrgather, (3, VITER(VI, signed), VATTR(0, VI, signed), VATTR(0, VMAP, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vrgather, vrgather, (3, VITER(VI, unsigned), VATTR(0, VI, unsigned), VATTR(0, VMAP, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vrgather, vrgather, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VMAP, unsigned)), pat_mask_tail, pred_all, OP_vv | OP_vx) +DEF_RVV_FUNCTION(vrgatherei16, vrgatherei16, (3, VITER(VI16, signed), VATTR(0, VI16, signed), VATTR(0, VMAPI16, unsigned)), pat_mask_tail, pred_all, OP_vv) +DEF_RVV_FUNCTION(vrgatherei16, vrgatherei16, (3, VITER(VI16, unsigned), VATTR(0, VI16, unsigned), VATTR(0, VMAPI16, unsigned)), pat_mask_tail, pred_all, OP_vv) +DEF_RVV_FUNCTION(vrgatherei16, vrgatherei16, (3, VITER(VF, signed), VATTR(0, VF, signed), VATTR(0, VMAPI16, unsigned)), pat_mask_tail, pred_all, OP_vv) +DEF_RVV_FUNCTION(vcompress, vcompress, (2, VITER(VI, signed), VATTR(0, VI, signed)), PAT_tail | PAT_void_dest | PAT_merge, pred_tail, OP_vm) +DEF_RVV_FUNCTION(vcompress, vcompress, (2, VITER(VI, unsigned), VATTR(0, VI, unsigned)), PAT_tail | 
PAT_void_dest | PAT_merge, pred_tail, OP_vm) +DEF_RVV_FUNCTION(vcompress, vcompress, (2, VITER(VF, signed), VATTR(0, VF, signed)), PAT_tail | PAT_void_dest | PAT_merge, pred_tail, OP_vm) #undef REQUIRED_EXTENSIONS #undef DEF_RVV_FUNCTION #undef VITER diff --git a/gcc/config/riscv/riscv-vector-builtins-functions.h b/gcc/config/riscv/riscv-vector-builtins-functions.h index bde03e8d49d..85ed9d1ae26 100644 --- a/gcc/config/riscv/riscv-vector-builtins-functions.h +++ b/gcc/config/riscv/riscv-vector-builtins-functions.h @@ -1533,6 +1533,1373 @@ public: virtual rtx expand (const function_instance &, tree, rtx) const override; }; +/* A function_base for reduction functions. */ +class reduceop : public basic_alu +{ +public: + // use the same construction function as the basic_alu + using basic_alu::basic_alu; + + virtual char * assemble_name (function_instance &) override; + + virtual tree get_mask_type (tree, const function_instance &, const vec<tree> &) const override; + + virtual void get_argument_types (const function_instance &, vec<tree> &) const override; +}; + +/* A function_base for vsadd functions. */ +class vsadd : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vsaddu functions. */ +class vsaddu : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vaadd functions. */ +class vaadd : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vaaddu functions. 
*/ +class vaaddu : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vssub functions. */ +class vssub : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vssubu functions. */ +class vssubu : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vasub functions. */ +class vasub : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vasubu functions. */ +class vasubu : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vssrl functions. */ +class vssrl : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vssra functions. */ +class vssra : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vsmul functions. */ +class vsmul : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vnclip functions. 
*/ +class vnclip : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vnclipu functions. */ +class vnclipu : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for funop functions. */ +class funop : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual unsigned int call_properties () const override; +}; + +/* A function_base for fbinop functions. */ +class fbinop : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual unsigned int call_properties () const override; +}; + +/* A function_base for fwbinop functions. */ +class fwbinop : public wbinop +{ +public: + // use the same construction function as the wbinop + using wbinop::wbinop; + + virtual unsigned int call_properties () const override; +}; + +/* A function_base for fternop functions. */ +class fternop : public ternop +{ +public: + // use the same construction function as the ternop + using ternop::ternop; + + virtual unsigned int call_properties () const override; +}; + +/* A function_base for vfadd functions. */ +class vfadd : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfsub functions. */ +class vfsub : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmul functions. 
*/ +class vfmul : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfdiv functions. */ +class vfdiv : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfrsub functions. */ +class vfrsub : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfrdiv functions. */ +class vfrdiv : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfneg functions. */ +class vfneg : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwadd functions. */ +class vfwadd : public fwbinop +{ +public: + // use the same construction function as the fwbinop + using fwbinop::fwbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwsub functions. */ +class vfwsub : public fwbinop +{ +public: + // use the same construction function as the fwbinop + using fwbinop::fwbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwmul functions. */ +class vfwmul : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmacc functions. 
*/ +class vfmacc : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmsac functions. */ +class vfmsac : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfnmacc functions. */ +class vfnmacc : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfnmsac functions. */ +class vfnmsac : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmadd functions. */ +class vfmadd : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfnmadd functions. */ +class vfnmadd : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmsub functions. */ +class vfmsub : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfnmsub functions. 
*/ +class vfnmsub : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwmacc functions. */ +class vfwmacc : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwmsac functions. */ +class vfwmsac : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwnmacc functions. */ +class vfwnmacc : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwnmsac functions. */ +class vfwnmsac : public fternop +{ +public: + // use the same construction function as the fternop + using fternop::fternop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfsqrt functions. */ +class vfsqrt : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfrsqrt7 functions. */ +class vfrsqrt7 : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfrec7 functions. 
*/ +class vfrec7 : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmax functions. */ +class vfmax : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmin functions. */ +class vfmin : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfsgnj functions. */ +class vfsgnj : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfsgnjn functions. */ +class vfsgnjn : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfsgnjx functions. */ +class vfsgnjx : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfabs functions. */ +class vfabs : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcmp functions. */ +class vfcmp : public fbinop +{ +public: + // use the same construction function as the fbinop + using fbinop::fbinop; + + virtual char * assemble_name (function_instance &) override; +}; + +/* A function_base for vmfeq functions. 
*/ +class vmfeq : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmfne functions. */ +class vmfne : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmflt functions. */ +class vmflt : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmfle functions. */ +class vmfle : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmfgt functions. */ +class vmfgt : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmfge functions. */ +class vmfge : public vfcmp +{ +public: + // use the same construction function as the vfcmp + using vfcmp::vfcmp; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfclass functions. */ +class vfclass : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmerge functions. 
*/ +class vfmerge : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual size_t get_position_of_dest_arg (enum predication_index) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmv functions. */ +class vfmv : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual bool can_be_overloaded_p (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_x_f_v functions. */ +class vfcvt_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_xu_f_v functions. */ +class vfcvt_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_rtz_x_f_v functions. */ +class vfcvt_rtz_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_rtz_xu_f_v functions. */ +class vfcvt_rtz_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_f_x_v functions. 
*/ +class vfcvt_i2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfcvt_f_xu_v functions. */ +class vfcvt_u2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_x_f_v functions. */ +class vfwcvt_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_xu_f_v functions. */ +class vfwcvt_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_rtz_x_f_v functions. */ +class vfwcvt_rtz_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_rtz_xu_f_v functions. */ +class vfwcvt_rtz_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_f_x_v functions. 
*/ +class vfwcvt_i2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_f_xu_v functions. */ +class vfwcvt_u2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwcvt_f_f_v functions. */ +class vfwcvt_f2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_x_f_w functions. */ +class vfncvt_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_xu_f_w functions. */ +class vfncvt_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_rtz_x_f_w functions. */ +class vfncvt_rtz_f2i : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_rtz_xu_f_w functions. 
*/ +class vfncvt_rtz_f2u : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_f_x_w functions. */ +class vfncvt_i2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_f_xu_w functions. */ +class vfncvt_u2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_f_f_w functions. */ +class vfncvt_f2f : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfncvt_rod_f_f_w functions. */ +class vfncvt_f2rodf : public funop +{ +public: + // use the same construction function as the funop + using funop::funop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredsum functions. */ +class vredsum : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredmax functions. 
*/ +class vredmax : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredmaxu functions. */ +class vredmaxu : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredmin functions. */ +class vredmin : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredminu functions. */ +class vredminu : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredand functions. */ +class vredand : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredor functions. */ +class vredor : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vredxor functions. */ +class vredxor : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vwredsum functions. 
*/ +class vwredsum : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vwredsumu functions. */ +class vwredsumu : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for freduceop functions. */ +class freduceop : public reduceop +{ +public: + // use the same construction function as the reduceop + using reduceop::reduceop; + + virtual unsigned int call_properties () const override; +}; + +/* A function_base for vfredosum functions. */ +class vfredosum : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfredusum functions. */ +class vfredusum : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfredmax functions. */ +class vfredmax : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfredmin functions. */ +class vfredmin : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwredosum functions. 
*/ +class vfwredosum : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfwredusum functions. */ +class vfwredusum : public freduceop +{ +public: + // use the same construction function as the freduceop + using freduceop::freduceop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmand functions. */ +class vmand : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmor functions. */ +class vmor : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmxor functions. */ +class vmxor : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmnand functions. */ +class vmnand : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmnor functions. */ +class vmnor : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmxnor functions. */ +class vmxnor : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmandn functions. 
*/ +class vmandn : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmorn functions. */ +class vmorn : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmmv functions. */ +class vmmv : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmnot functions. */ +class vmnot : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmclr functions. */ +class vmclr : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual void get_argument_types (const function_instance &, vec &) const override; + + virtual bool can_be_overloaded_p (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmset functions. */ +class vmset : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual void get_argument_types (const function_instance &, vec &) const override; + + virtual bool can_be_overloaded_p (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vcpop functions. 
*/ +class vcpop : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual tree get_return_type (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfirst functions. */ +class vfirst : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual tree get_return_type (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmsbf functions. */ +class vmsbf : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmsif functions. */ +class vmsif : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmsof functions. */ +class vmsof : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for viota functions. */ +class viota : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual bool can_be_overloaded_p (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vid functions. 
*/ +class vid : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual void get_argument_types (const function_instance &, vec &) const override; + + virtual bool can_be_overloaded_p (const function_instance &) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmv_x_s functions. */ +class vmv_x_s : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vmv_s_x functions. */ +class vmv_s_x : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmv_f_s functions. */ +class vfmv_f_s : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfmv_s_f functions. */ +class vfmv_s_f : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual char * assemble_name (function_instance &) override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vslideup functions. */ +class vslideup : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vslidedown functions. 
*/ +class vslidedown : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vslide1up functions. */ +class vslide1up : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vslide1down functions. */ +class vslide1down : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfslide1up functions. */ +class vfslide1up : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vfslide1down functions. */ +class vfslide1down : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vrgather functions. */ +class vrgather : public vshift +{ +public: + // use the same construction function as the vshift + using vshift::vshift; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +/* A function_base for vrgatherei16 functions. 
*/ +class vrgatherei16 : public binop +{ +public: + // use the same construction function as the binop + using binop::binop; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + +class vcompress : public unop +{ +public: + // use the same construction function as the unop + using unop::unop; + + virtual size_t get_position_of_dest_arg (enum predication_index) const override; + + virtual rtx expand (const function_instance &, tree, rtx) const override; +}; + } // namespace riscv_vector #endif // end GCC_RISCV_VECTOR_BUILTINS_FUNCTIONS_H \ No newline at end of file diff --git a/gcc/config/riscv/riscv-vector.cc b/gcc/config/riscv/riscv-vector.cc index 1d53c50a751..e892fb05a95 100644 --- a/gcc/config/riscv/riscv-vector.cc +++ b/gcc/config/riscv/riscv-vector.cc @@ -905,7 +905,7 @@ rvv_adjust_frame (rtx target, poly_int64 offset, bool epilogue) } /* Helper functions for handling sew=64 on RV32 system. */ -bool +static bool imm32_p (rtx a) { if (!CONST_SCALAR_INT_P (a)) @@ -928,7 +928,7 @@ enum GEN_CLASS }; /* Helper functions for handling sew=64 on RV32 system. */ -enum GEN_CLASS +static enum GEN_CLASS modify_operands (machine_mode Vmode, machine_mode VSImode, machine_mode VMSImode, machine_mode VSUBmode, rtx *operands, bool (*imm5_p) (rtx), int i, bool reverse, unsigned int unspec) @@ -970,7 +970,7 @@ modify_operands (machine_mode Vmode, machine_mode VSImode, } /* Helper functions for handling sew=64 on RV32 system. */ -bool +static bool emit_op5_vmv_v_x (machine_mode Vmode, machine_mode VSImode, machine_mode VMSImode, machine_mode VSUBmode, rtx *operands, int i) @@ -994,6 +994,51 @@ emit_op5_vmv_v_x (machine_mode Vmode, machine_mode VSImode, return false; } +/* Helper functions for handling sew=64 on RV32 system. 
*/ +static bool +emit_op5_vmv_s_x (machine_mode Vmode, machine_mode VSImode, + machine_mode VSUBmode, rtx *operands, int i) +{ + if (!TARGET_64BIT && VSUBmode == DImode) + { + if (!imm32_p (operands[i])) + { + rtx s = operands[i]; + if (CONST_SCALAR_INT_P (s)) + { + s = force_reg (DImode, s); + } + + rtx hi = gen_highpart (SImode, s); + rtx lo = gen_lowpart (SImode, s); + rtx vlx2 = gen_vlx2 (operands[3], Vmode, VSImode); + + rtx vret = operands[0]; + rtx vd = operands[1]; + if (vd == const0_rtx) + { + vd = gen_reg_rtx (Vmode); + } + rtx vd_si = gen_lowpart (VSImode, vd); + + emit_insn (gen_vslide_vx (UNSPEC_SLIDEDOWN, VSImode, vd_si, + const0_rtx, vd_si, vd_si, const2_rtx, vlx2, + operands[4])); + emit_insn (gen_vslide1_vx_internal (UNSPEC_SLIDE1UP, VSImode, vd_si, + const0_rtx, vd_si, vd_si, hi, + vlx2, operands[4])); + emit_insn (gen_vslide1_vx_internal (UNSPEC_SLIDE1UP, VSImode, vd_si, + const0_rtx, vd_si, vd_si, lo, vlx2, + operands[4])); + + emit_insn (gen_rtx_SET (vret, gen_lowpart (Vmode, vd_si))); + + return true; + } + } + return false; +} + /* Helper functions for handling sew=64 on RV32 system. */ void emit_op5 (unsigned int unspec, machine_mode Vmode, machine_mode VSImode, @@ -1008,6 +1053,13 @@ emit_op5 (unsigned int unspec, machine_mode Vmode, machine_mode VSImode, return; } } + else if (unspec == UNSPEC_VMVS) + { + if (emit_op5_vmv_s_x (Vmode, VSImode, VSUBmode, operands, i)) + { + return; + } + } enum GEN_CLASS gen_class = modify_operands ( Vmode, VSImode, VMSImode, VSUBmode, operands, imm5_p, i, reverse, unspec); @@ -1038,6 +1090,85 @@ emit_op6 (unsigned int unspec ATTRIBUTE_UNUSED, machine_mode Vmode, operands[4], operands[5])); } +/* Helper functions for handling sew=64 on RV32 system. 
*/ +static bool +emit_op7_slide1 (unsigned int unspec, machine_mode Vmode, machine_mode VSImode, + machine_mode VSUBmode, rtx *operands, int i) +{ + if (!TARGET_64BIT && VSUBmode == DImode) + { + if (!imm32_p (operands[i])) + { + rtx s = operands[i]; + if (CONST_SCALAR_INT_P (s)) + { + s = force_reg (DImode, s); + } + + rtx hi = gen_highpart (SImode, s); + rtx lo = gen_lowpart (SImode, s); + + rtx vret = operands[0]; + rtx mask = operands[1]; + rtx vs = operands[3]; + rtx avl = operands[5]; + rtx vlx2 = gen_vlx2 (avl, Vmode, VSImode); + rtx vs_si = gen_lowpart (VSImode, vs); + rtx vtemp; + if (rtx_equal_p (operands[2], const0_rtx)) + { + vtemp = gen_reg_rtx (VSImode); + } + else + { + vtemp = gen_lowpart (VSImode, operands[2]); + } + + if (unspec == UNSPEC_SLIDE1UP) + { + rtx v1 = gen_reg_rtx (VSImode); + + emit_insn (gen_vslide1_vx_internal (UNSPEC_SLIDE1UP, VSImode, v1, + const0_rtx, const0_rtx, vs_si, + hi, vlx2, operands[6])); + emit_insn (gen_vslide1_vx_internal (UNSPEC_SLIDE1UP, VSImode, + vtemp, const0_rtx, const0_rtx, + v1, lo, vlx2, operands[6])); + } + else + { + emit_insn (gen_vslide1_vx_internal ( + UNSPEC_SLIDE1DOWN, VSImode, vtemp, const0_rtx, const0_rtx, + vs_si, force_reg (GET_MODE (lo), lo), vlx2, operands[6])); + emit_insn (gen_vslide1_vx_internal ( + UNSPEC_SLIDE1DOWN, VSImode, vtemp, const0_rtx, const0_rtx, + vtemp, force_reg (GET_MODE (hi), hi), vlx2, operands[6])); + } + + if (rtx_equal_p (mask, const0_rtx)) + { + emit_insn (gen_rtx_SET (vret, gen_lowpart (Vmode, vtemp))); + } + else + { + rtx dest = operands[2]; + if (rtx_equal_p (dest, const0_rtx)) + { + dest = vret; + } + emit_insn (gen_vmerge_vvm (Vmode, dest, mask, dest, dest, + gen_lowpart (Vmode, vtemp), + force_reg_for_over_uimm (avl), + operands[6])); + + emit_insn (gen_rtx_SET (vret, dest)); + } + + return true; + } + } + return false; +} /* Helper functions for handling sew=64 on RV32 system. 
*/ void @@ -1046,6 +1177,14 @@ emit_op7 (unsigned int unspec, machine_mode Vmode, machine_mode VSImode, gen_7 *gen_vx, gen_7 *gen_vx_32bit, gen_7 *gen_vv, imm_p *imm5_p, int i, bool reverse) { + if (unspec == UNSPEC_SLIDE1UP || unspec == UNSPEC_SLIDE1DOWN) + { + if (emit_op7_slide1 (unspec, Vmode, VSImode, VSUBmode, operands, i)) + { + return; + } + } + enum GEN_CLASS gen_class = modify_operands ( Vmode, VSImode, VMSImode, VSUBmode, operands, imm5_p, i, reverse, unspec); diff --git a/gcc/config/riscv/vector-iterators.md b/gcc/config/riscv/vector-iterators.md index 748025d4080..501980d822f 100644 --- a/gcc/config/riscv/vector-iterators.md +++ b/gcc/config/riscv/vector-iterators.md @@ -792,6 +792,9 @@ UNSPEC_VMIN UNSPEC_VMINU UNSPEC_VMAX UNSPEC_VMAXU UNSPEC_VMUL UNSPEC_VMULH UNSPEC_VMULHU UNSPEC_VMULHSU UNSPEC_VDIV UNSPEC_VDIVU UNSPEC_VREM UNSPEC_VREMU + UNSPEC_VSADD UNSPEC_VSADDU UNSPEC_VSSUB UNSPEC_VSSUBU + UNSPEC_VAADD UNSPEC_VAADDU UNSPEC_VASUB UNSPEC_VASUBU + UNSPEC_VSMUL ]) (define_int_iterator VXMOP [ diff --git a/gcc/config/riscv/vector.md b/gcc/config/riscv/vector.md index cb8bdc5781f..118d3ce6f61 100644 --- a/gcc/config/riscv/vector.md +++ b/gcc/config/riscv/vector.md @@ -1222,6 +1222,77 @@ } ) +;; vmv.s.x +(define_expand "@v_s_x" + [(unspec [ + (match_operand:VI 0 "register_operand") + (match_operand:VI 1 "vector_reg_or_const0_operand") + (match_operand: 2 "reg_or_const_int_operand") + (match_operand 3 "p_reg_or_const_csr_operand") + (match_operand 4 "const_int_operand") + ] VMVSOP)] + "TARGET_VECTOR" + { + emit_op5 ( + , + mode, mode, mode, + mode, + operands, + gen_v_s_x_internal, + gen_v_s_x_32bit, + NULL, + satisfies_constraint_, + 2, false + ); + DONE; + } +) + +;; vslide1 +(define_expand "@vslide1_vx" + [(unspec [ + (match_operand:VI 0 "register_operand") + (match_operand: 1 "vector_reg_or_const0_operand") + (match_operand:VI 2 "vector_reg_or_const0_operand") + (match_operand:VI 3 "register_operand") + (match_operand: 4 "reg_or_const_int_operand") + 
(match_operand 5 "reg_or_const_int_operand") + (match_operand 6 "const_int_operand") + ] VSLIDE1)] + "TARGET_VECTOR" + { + emit_op7 ( + , + mode, mode, mode, + mode, + operands, + gen_vslide1_vx_internal, + gen_vslide1_vx_32bit, + NULL, + satisfies_constraint_, + 4, false + ); + DONE; + } +) + +;; helper expand to double the vl operand +(define_expand "vmv_vlx2_help" + [ + (set (match_operand:SI 0 "register_operand") + (ashift:SI (match_operand:SI 1 "register_operand") + (const_int 1))) + (set (match_operand:SI 2 "register_operand") + (ltu:SI (match_dup 0) (match_dup 1))) + (set (match_dup 2) + (minus:SI (reg:SI X0_REGNUM) (match_dup 2))) + (set (match_dup 0) + (ior:SI (match_dup 0) (match_dup 2))) + ] + "TARGET_VECTOR" + "" +) + ;; ------------------------------------------------------------------------------- ;; ---- 11. Vector Integer Arithmetic Instructions ;; ------------------------------------------------------------------------------- @@ -3521,14 +3592,14 @@ "vmv.v.v\t%0,%2" [(set_attr "type" "vmove") (set_attr "mode" "")]) - + ;; Vector-Scalar Integer Move. (define_insn "@vmv_v_x_internal" [(set (match_operand:VI 0 "register_operand" "=vr,vr,vr,vr") (unspec:VI [(match_operand:VI 1 "vector_reg_or_const0_operand" "0,0,J,J") (vec_duplicate:VI - (match_operand: 2 "reg_or_simm5_operand" "r,Ws5,r,Ws5")) + (match_operand: 2 "reg_or_simm5_operand" "r,Ws5,r,Ws5")) (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") (match_operand 4 "const_int_operand") (reg:SI VL_REGNUM) @@ -3561,46 +3632,2401 @@ [(set_attr "type" "vmove") (set_attr "mode" "")]) -;; Vector-Scalar Floating-Point Move. 
-(define_insn "@vfmv_v_f" - [(set (match_operand:VF 0 "register_operand" "=vr,vr") - (unspec:VF - [(match_operand:VF 1 "vector_reg_or_const0_operand" "0,J") - (vec_duplicate:VF - (match_operand: 2 "register_operand" "f,f")) - (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK") - (match_operand 4 "const_int_operand") +;; ------------------------------------------------------------------------------- +;; ---- 12. Vector Fixed-Point Arithmetic Instructions +;; ------------------------------------------------------------------------------- +;; Includes: +;; - 12.1 Vector Single-Width Saturating Add and Subtract +;; - 12.2 Vector Single-Width Averaging Add and Subtract +;; - 12.3 Vector Single-Width Fractional Multiply with Rounding and Saturation +;; - 12.5 Vector Single-Width Scaling Shift Instructions +;; - 12.6 Vector Narrowing Fixed-Point Clip Instructions +;; ------------------------------------------------------------------------------- + +;; Vector-Vector Single-Width Saturating Add.
+(define_insn "@v_vv" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (any_satplus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (match_operand:VI 4 "vector_arith_operand" "vr,vi,vr,vi,vr,vi,vr,vi")) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") (reg:SI VL_REGNUM) (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] - "TARGET_VECTOR" - "vfmv.v.f\t%0,%2" - [(set_attr "type" "vmove") - (set_attr "mode" "")]) + "TARGET_VECTOR" + "@ + v.vv\t%0,%3,%4,%1.t + v.vi\t%0,%3,%v4,%1.t + v.vv\t%0,%3,%4,%1.t + v.vi\t%0,%3,%v4,%1.t + v.vv\t%0,%3,%4 + v.vi\t%0,%3,%v4 + v.vv\t%0,%3,%4 + v.vi\t%0,%3,%v4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) -;; vmclr.m vd -> vmxor.mm vd,vd,vd # Clear mask register -(define_insn "@vmclr_m" - [(set (match_operand:VB 0 "register_operand" "=vr") - (unspec:VB - [(vec_duplicate:VB (const_int 0)) - (match_operand 1 "p_reg_or_const_csr_operand" "rK") - (match_operand 2 "const_int_operand") - (reg:SI VL_REGNUM) - (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] - "TARGET_VECTOR" - "vmclr.m\t%0" - [(set_attr "type" "vmask") - (set_attr "mode" "")]) +;; Vector-Vector Single-Width Saturating Sub. 
+(define_insn "@vsssub_vv" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (ss_minus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (match_operand:VI 4 "vector_neg_arith_operand" "vr,vj,vr,vj,vr,vj,vr,vj")) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vssub.vv\t%0,%3,%4,%1.t + vsadd.vi\t%0,%3,%V4,%1.t + vssub.vv\t%0,%3,%4,%1.t + vsadd.vi\t%0,%3,%V4,%1.t + vssub.vv\t%0,%3,%4 + vsadd.vi\t%0,%3,%V4 + vssub.vv\t%0,%3,%4 + vsadd.vi\t%0,%3,%V4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +(define_insn "@vussub_vv" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (us_minus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr") + (match_operand:VI 4 "register_operand" "vr,vr,vr,vr")) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vssubu.vv\t%0,%3,%4,%1.t + vssubu.vv\t%0,%3,%4,%1.t + vssubu.vv\t%0,%3,%4 + vssubu.vv\t%0,%3,%4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) -;; vmset.m vd -> vmxnor.mm vd,vd,vd # Set mask register -(define_insn "@vmset_m" - [(set (match_operand:VB 0 "register_operand" "=vr") - (unspec:VB - [(vec_duplicate:VB (const_int 1)) - (match_operand 1 "p_reg_or_const_csr_operand" "rK") - (match_operand 2 "const_int_operand") - (reg:SI VL_REGNUM) - (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] - "TARGET_VECTOR" - 
"vmset.m\t%0" - [(set_attr "type" "vmask") - (set_attr "mode" "")]) \ No newline at end of file +;; Vector-Scalar Single-Width Saturating Add. +(define_insn "@v_vx_internal" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (any_satplus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:VI + (match_operand: 4 "reg_or_simm5_operand" "r,Ws5,r,Ws5,r,Ws5,r,Ws5"))) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4 + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +(define_insn "@v_vx_32bit" + [(set (match_operand:V64BITI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:V64BITI + [(unspec:V64BITI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (any_satplus:V64BITI + (match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:V64BITI + (sign_extend: + (match_operand:SI 4 "reg_or_simm5_operand" "r,Ws5,r,Ws5,r,Ws5,r,Ws5")))) + (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand:SI 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4 + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +;; Vector-Scalar Single-Width 
Saturating Sub. +(define_insn "@vsssub_vx_internal" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (ss_minus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:VI + (match_operand: 4 "reg_or_neg_simm5_operand" "r,Wn5,r,Wn5,r,Wn5,r,Wn5"))) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + { + const char *tail = satisfies_constraint_J (operands[1]) ? "" : ",%1.t"; + char buf[64] = {0}; + if (satisfies_constraint_Wn5 (operands[4])) + { + const char *insn = "vsadd.vi\t%0,%3"; + snprintf (buf, sizeof (buf), "%s,%d%s", insn, (int)(-INTVAL (operands[4])), tail); + } + else + { + const char *insn = "vssub.vx\t%0,%3,%4"; + snprintf (buf, sizeof (buf), "%s%s", insn, tail); + } + output_asm_insn (buf, operands); + return ""; + } + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +(define_insn "@vussub_vx_internal" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (us_minus:VI + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr") + (vec_duplicate:VI + (match_operand: 4 "register_operand" "r,r,r,r"))) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vssubu.vx\t%0,%3,%4,%1.t + vssubu.vx\t%0,%3,%4,%1.t + vssubu.vx\t%0,%3,%4 + vssubu.vx\t%0,%3,%4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +(define_insn "@vsssub_vx_32bit" + [(set 
(match_operand:V64BITI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:V64BITI + [(unspec:V64BITI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (ss_minus:V64BITI + (match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:V64BITI + (sign_extend: + (match_operand:SI 4 "reg_or_neg_simm5_operand" "r,Wn5,r,Wn5,r,Wn5,r,Wn5")))) + (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand:SI 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + { + const char *tail = satisfies_constraint_J (operands[1]) ? "" : ",%1.t"; + char buf[64] = {0}; + if (satisfies_constraint_Wn5 (operands[4])) + { + const char *insn = "vsadd.vi\t%0,%3"; + snprintf (buf, sizeof (buf), "%s,%d%s", insn, (int)(-INTVAL (operands[4])), tail); + } + else + { + const char *insn = "vssub.vx\t%0,%3,%4"; + snprintf (buf, sizeof (buf), "%s%s", insn, tail); + } + output_asm_insn (buf, operands); + return ""; + } + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +(define_insn "@vussub_vx_32bit" + [(set (match_operand:V64BITI 0 "register_operand" "=vd,vd,vr,vr") + (unspec:V64BITI + [(unspec:V64BITI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (us_minus:V64BITI + (match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr") + (vec_duplicate:V64BITI + (sign_extend: + (match_operand:SI 4 "register_operand" "r,r,r,r")))) + (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK") + (match_operand:SI 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vssubu.vx\t%0,%3,%4,%1.t + vssubu.vx\t%0,%3,%4,%1.t + vssubu.vx\t%0,%3,%4 + vssubu.vx\t%0,%3,%4" + [(set_attr "type" "vsarith") + (set_attr "mode" "")]) + +;; Vector-Vector 
Single-Width Averaging Add and Subtract. +;; Vector-Vector Single-Width Fractional Multiply with Rounding and Saturation. +(define_insn "@v_vv" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr") + (match_operand:VI 4 "register_operand" "vr,vr,vr,vr")] SAT_OP) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vv\t%0,%3,%4,%1.t + v.vv\t%0,%3,%4,%1.t + v.vv\t%0,%3,%4 + v.vv\t%0,%3,%4" + [(set_attr "type" "") + (set_attr "mode" "")]) + +;; Vector-Scalar Single-Width Averaging Add and Subtract. +;; Vector-Scalar Single-Width Fractional Multiply with Rounding and Saturation. +(define_insn "@v_vx_internal" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:VI + (match_operand: 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J"))] SAT_OP) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vx\t%0,%3,%4,%1.t + v.vx\t%0,%3,zero,%1.t + v.vx\t%0,%3,%4,%1.t + v.vx\t%0,%3,zero,%1.t + v.vx\t%0,%3,%4 + v.vx\t%0,%3,zero + v.vx\t%0,%3,%4 + v.vx\t%0,%3,zero" + [(set_attr "type" "") + (set_attr "mode" "")]) + +(define_insn "@v_vx_32bit" + [(set (match_operand:V64BITI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:V64BITI + [(unspec:V64BITI + 
[(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (unspec:V64BITI + [(match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (vec_duplicate:V64BITI + (sign_extend: + (match_operand:SI 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J")))] SAT_OP) + (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand:SI 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vx\t%0,%3,%4,%1.t + v.vx\t%0,%3,zero,%1.t + v.vx\t%0,%3,%4,%1.t + v.vx\t%0,%3,zero,%1.t + v.vx\t%0,%3,%4 + v.vx\t%0,%3,zero + v.vx\t%0,%3,%4 + v.vx\t%0,%3,zero" + [(set_attr "type" "") + (set_attr "mode" "")]) + +;; Vector-Vector Single-Width Scaling Shift Instructions. +(define_insn "@v_vv" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (match_operand:VI 4 "vector_shift_operand" "vr,vk,vr,vk,vr,vk,vr,vk")] SSHIFT) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vv\t%0,%3,%4,%1.t + v.vi\t%0,%3,%v4,%1.t + v.vv\t%0,%3,%4,%1.t + v.vi\t%0,%3,%v4,%1.t + v.vv\t%0,%3,%4 + v.vi\t%0,%3,%v4 + v.vv\t%0,%3,%4 + v.vi\t%0,%3,%v4" + [(set_attr "type" "vscaleshift") + (set_attr "mode" "")]) + +;; Vector-Scalar Single-Width Scaling Shift Instructions. 
+(define_insn "@v_vx" + [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr") + (unspec:VI + [(unspec:VI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J") + (unspec:VI + [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr") + (match_operand 4 "p_reg_or_uimm5_operand" "r,K,r,K,r,K,r,K")] SSHIFT) + (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4,%1.t + v.vi\t%0,%3,%4,%1.t + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4 + v.vx\t%0,%3,%4 + v.vi\t%0,%3,%4" + [(set_attr "type" "vscaleshift") + (set_attr "mode" "")]) + +;; Vector-Vector signed/unsigned clip. +(define_insn "@vn_wv" + [(set (match_operand:VWI 0 "register_operand" "=vd,vd,&vd,vd,&vd, vd,vd,&vd,vd,&vd, vr,vr,&vr,vr,&vr, vr,vr,&vr,vr,&vr") + (unspec:VWI + [(unspec:VWI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,vm, vm,vm,vm,vm,vm, J,J,J,J,J, J,J,J,J,J") + (unspec:VWI + [(match_operand: 3 "register_operand" "0,vr,vr,0,vr, 0,vr,vr,0,vr, 0,vr,vr,0,vr, 0,vr,vr,0,vr") + (match_operand:VWI 4 "vector_shift_operand" "vr,0,vr,vk,vk, vr,0,vr,vk,vk, vr,0,vr,vk,vk, vr,0,vr,vk,vk")] CLIP) + (match_operand:VWI 2 "vector_reg_or_const0_operand" "0,0,0,0,0, J,J,J,J,J, 0,0,0,0,0, J,J,J,J,J") + ] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK, rK,rK,rK,rK,rK, rK,rK,rK,rK,rK, rK,rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vn.wv\t%0,%3,%4,%1.t + vn.wv\t%0,%3,%4,%1.t + vn.wv\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%v4,%1.t + vn.wi\t%0,%3,%v4,%1.t + vn.wv\t%0,%3,%4,%1.t + vn.wv\t%0,%3,%4,%1.t + vn.wv\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%v4,%1.t + vn.wi\t%0,%3,%v4,%1.t + 
vn.wv\t%0,%3,%4 + vn.wv\t%0,%3,%4 + vn.wv\t%0,%3,%4 + vn.wi\t%0,%3,%v4 + vn.wi\t%0,%3,%v4 + vn.wv\t%0,%3,%4 + vn.wv\t%0,%3,%4 + vn.wv\t%0,%3,%4 + vn.wi\t%0,%3,%v4 + vn.wi\t%0,%3,%v4" + [(set_attr "type" "vclip") + (set_attr "mode" "")]) + +;; Vector-Scalar signed/unsigned clip. +(define_insn "@vn_wx" + [(set (match_operand:VWI 0 "register_operand" "=vd,&vd,vd,&vd, vd,&vd,vd,&vd, vr,&vr,vr,&vr, vr,?&vr,vr,&vr") + (unspec:VWI + [(unspec:VWI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm, vm,vm,vm,vm, J,J,J,J, J,J,J,J") + (unspec:VWI + [(match_operand: 3 "register_operand" "0,vr,0,vr, 0,vr,0,vr, 0,vr,0,vr, 0,vr,0,vr") + (match_operand 4 "p_reg_or_uimm5_operand" "r,r,K,K, r,r,K,K, r,r,K,K, r,r,K,K")] CLIP) + (match_operand:VWI 2 "vector_reg_or_const0_operand" "0,0,0,0, J,J,J,J, 0,0,0,0, J,J,J,J") + ] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK, rK,rK,rK,rK, rK,rK,rK,rK, rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vn.wx\t%0,%3,%4,%1.t + vn.wx\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%4,%1.t + vn.wx\t%0,%3,%4,%1.t + vn.wx\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%4,%1.t + vn.wi\t%0,%3,%4,%1.t + vn.wx\t%0,%3,%4 + vn.wx\t%0,%3,%4 + vn.wi\t%0,%3,%4 + vn.wi\t%0,%3,%4 + vn.wx\t%0,%3,%4 + vn.wx\t%0,%3,%4 + vn.wi\t%0,%3,%4 + vn.wi\t%0,%3,%4" + [(set_attr "type" "vclip") + (set_attr "mode" "")]) + +;; ------------------------------------------------------------------------------- +;; ---- 13. 
Vector Floating-Point Arithmetic Instructions +;; ------------------------------------------------------------------------------- +;; Includes: +;; - 13.2 Vector Single-Width Floating-Point Add/Subtract Instructions +;; - 13.3 Vector Widening Floating-Point Add/Subtract Instructions +;; - 13.4 Vector Single-Width Floating-Point Multiply/Divide Instructions +;; - 13.5 Vector Widening Floating-Point Multiply +;; - 13.6 Vector Single-Width Floating-Point Fused Multiply-Add Instructions +;; - 13.7 Vector Widening Floating-Point Fused Multiply-Add Instructions +;; - 13.8 Vector Floating-Point Square-Root Instruction +;; - 13.9 Vector Floating-Point Reciprocal Square-Root Estimate Instruction +;; - 13.10 Vector Floating-Point Reciprocal Estimate Instruction +;; - 13.11 Vector Floating-Point MIN/MAX Instructions +;; - 13.12 Vector Floating-Point Sign-Injection Instructions +;; - 13.13 Vector Floating-Point Compare Instructions +;; - 13.14 Vector Floating-Point Classify Instruction +;; - 13.15 Vector Floating-Point Merge Instruction +;; - 13.16 Vector Floating-Point Move Instruction +;; - 13.17 Single-Width Floating-Point/Integer Type-Convert Instructions +;; - 13.18 Widening Floating-Point/Integer Type-Convert Instructions +;; - 13.19 Narrowing Floating-Point/Integer Type-Convert Instructions +;; ------------------------------------------------------------------------------- + +;; Vector-Vector Single-Width Floating-Point Add/Subtract Instructions. +;; Vector-Vector Single-Width Floating-Point Multiply/Divide Instructions. +;; Vector-Vector Single-Width Floating-Point MIN/MAX Instructions.
+(define_insn "@vf_vv" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_fop:VF + (match_operand:VF 3 "register_operand" "vr,vr,vr,vr") + (match_operand:VF 4 "register_operand" "vr,vr,vr,vr")) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vf.vv\t%0,%3,%4,%1.t + vf.vv\t%0,%3,%4,%1.t + vf.vv\t%0,%3,%4 + vf.vv\t%0,%3,%4" + [(set_attr "type" "") + (set_attr "mode" "")]) + +;; Vector-Scalar Single-Width Floating-Point Add/Subtract Instructions. +;; Vector-Scalar Single-Width Floating-Point Multiply/Divide Instructions. +;; Vector-Scalar Single-Width Floating-Point MIN/MAX Instructions. +(define_insn "@vf_vf" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_fop:VF + (match_operand:VF 3 "register_operand" "vr,vr,vr,vr") + (vec_duplicate:VF + (match_operand: 4 "register_operand" "f,f,f,f"))) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vf.vf\t%0,%3,%4,%1.t + vf.vf\t%0,%3,%4,%1.t + vf.vf\t%0,%3,%4 + vf.vf\t%0,%3,%4" + [(set_attr "type" "") + (set_attr "mode" "")]) + +;; Floating-Point Reverse Sub/Div.
+(define_insn "@vfr_vf" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (minus_div:VF + (vec_duplicate:VF + (match_operand: 4 "register_operand" "f,f,f,f")) + (match_operand:VF 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfr.vf\t%0,%3,%4,%1.t + vfr.vf\t%0,%3,%4,%1.t + vfr.vf\t%0,%3,%4 + vfr.vf\t%0,%3,%4" + [(set_attr "type" "varith") + (set_attr "mode" "")]) + +;; Vector-Vector Widening Float Add/Subtract. +(define_insn "@vfw_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (plus_minus: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr,vr,vr"))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfw.vv\t%0,%3,%4,%1.t + vfw.vv\t%0,%3,%4,%1.t + vfw.vv\t%0,%3,%4 + vfw.vv\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Scalar Widening Float Add/Subtract. 
+(define_insn "@vfw_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (plus_minus: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 4 "register_operand" "f,f,f,f")))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfw.vf\t%0,%3,%4,%1.t + vfw.vf\t%0,%3,%4,%1.t + vfw.vf\t%0,%3,%4 + vfw.vf\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Vector Widening Float Add/Subtract. +(define_insn "@vfw_wv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (plus_minus: + (match_operand: 3 "register_operand" "vr,vr,vr,vr") + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr,vr,vr"))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfw.wv\t%0,%3,%4,%1.t + vfw.wv\t%0,%3,%4,%1.t + vfw.wv\t%0,%3,%4 + vfw.wv\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Scalar Widening Float Add/Subtract. 
+(define_insn "@vfw_wf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (plus_minus: + (match_operand: 3 "register_operand" "vr,vr,vr,vr") + (float_extend: + (vec_duplicate:VWF + (match_operand: 4 "register_operand" "f,f,f,f")))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfw.wf\t%0,%3,%4,%1.t + vfw.wf\t%0,%3,%4,%1.t + vfw.wf\t%0,%3,%4 + vfw.wf\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Vector Widening Float multiply. +(define_insn "@vfwmul_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (mult: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr,vr,vr"))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmul.vv\t%0,%3,%4,%1.t + vfwmul.vv\t%0,%3,%4,%1.t + vfwmul.vv\t%0,%3,%4 + vfwmul.vv\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Scalar Widening Float multiply. 
+(define_insn "@vfwmul_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (mult: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 4 "register_operand" "f,f,f,f")))) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmul.vf\t%0,%3,%4,%1.t + vfwmul.vf\t%0,%3,%4,%1.t + vfwmul.vf\t%0,%3,%4 + vfwmul.vf\t%0,%3,%4" + [(set_attr "type" "vwarith") + (set_attr "mode" "")]) + +;; Vector-Vector Single-Width Floating-Point Fused Multiply-Add Instructions. +(define_insn "@vf_vv" + [(set (match_operand:VF 0 "register_operand" "=vd,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (unspec:VF + [(match_operand:VF 2 "register_operand" "0,0") + (match_operand:VF 3 "register_operand" "vr,vr") + (match_operand:VF 4 "register_operand" "vr,vr")] FMAC) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vf.vv\t%0,%3,%4,%1.t + vf.vv\t%0,%3,%4" + [(set_attr "type" "vmadd") + (set_attr "mode" "")]) + +;; Vector-Scalar Single-Width Floating-Point Fused Multiply-Add Instructions.
+(define_insn "@vf_vf" + [(set (match_operand:VF 0 "register_operand" "=vd,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (unspec:VF + [(match_operand:VF 2 "register_operand" "0,0") + (vec_duplicate:VF + (match_operand: 3 "register_operand" "f,f")) + (match_operand:VF 4 "register_operand" "vr,vr")] FMAC) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vf.vf\t%0,%3,%4,%1.t + vf.vf\t%0,%3,%4" + [(set_attr "type" "vmadd") + (set_attr "mode" "")]) + +;; Vector-Vector Widening multiply-accumulate, overwrite addend. +;; Vector-Vector Widening multiply-subtract-accumulate, overwrite addend. +(define_insn "@vfwmacc_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (plus: + (mult: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr")) + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr"))) + (match_operand: 2 "register_operand" "0,0")) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmacc.vv\t%0,%3,%4,%1.t + vfwmacc.vv\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +(define_insn "@vfwmsac_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (minus: + (mult: + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr")) + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr"))) + (match_operand: 2 "register_operand" "0,0")) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") 
+ (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmsac.vv\t%0,%3,%4,%1.t + vfwmsac.vv\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +;; Vector-Scalar Widening multiply-accumulate, overwrite addend. +;; Vector-Scalar Widening multiply-subtract-accumulate, overwrite addend. +(define_insn "@vfwmacc_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (plus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 3 "register_operand" "f,f")))) + (match_operand: 2 "register_operand" "0,0")) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmacc.vf\t%0,%3,%4,%1.t + vfwmacc.vf\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +(define_insn "@vfwmsac_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (minus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 3 "register_operand" "f,f")))) + (match_operand: 2 "register_operand" "0,0")) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwmsac.vf\t%0,%3,%4,%1.t + vfwmsac.vf\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +;; Vector-Vector Widening negate-(multiply-accumulate), overwrite addend. +;; Vector-Vector Widening negate-(multiply-subtract-accumulate), overwrite addend. 
+(define_insn "@vfwnmacc_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (neg: + (plus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr"))) + (match_operand: 2 "register_operand" "0,0"))) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwnmacc.vv\t%0,%3,%4,%1.t + vfwnmacc.vv\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +(define_insn "@vfwnmsac_vv" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (neg: + (minus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr"))) + (match_operand: 2 "register_operand" "0,0"))) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwnmsac.vv\t%0,%3,%4,%1.t + vfwnmsac.vv\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +;; Vector-Scalar Widening negate-(multiply-accumulate), overwrite addend. +;; Vector-Scalar Widening negate-(multiply-subtract-accumulate), overwrite addend. 
+(define_insn "@vfwnmacc_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (neg: + (plus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 3 "register_operand" "f,f")))) + (match_operand: 2 "register_operand" "0,0"))) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwnmacc.vf\t%0,%3,%4,%1.t + vfwnmacc.vf\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +(define_insn "@vfwnmsac_vf" + [(set (match_operand: 0 "register_operand" "=&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,J") + (neg: + (minus: + (mult: + (float_extend: + (match_operand:VWF 4 "register_operand" "vr,vr")) + (float_extend: + (vec_duplicate:VWF + (match_operand: 3 "register_operand" "f,f")))) + (match_operand: 2 "register_operand" "0,0"))) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwnmsac.vf\t%0,%3,%4,%1.t + vfwnmsac.vf\t%0,%3,%4" + [(set_attr "type" "vwmadd") + (set_attr "mode" "")]) + +;; Floating-Point square root. 
+(define_insn "@vfsqrt_v" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (sqrt:VF + (match_operand:VF 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfsqrt.v\t%0,%3,%1.t + vfsqrt.v\t%0,%3,%1.t + vfsqrt.v\t%0,%3 + vfsqrt.v\t%0,%3" + [(set_attr "type" "vfsqrt") + (set_attr "mode" "")]) + +;; Floating-Point Reciprocal Square-Root Estimate. +;; Floating-Point Reciprocal Estimate. +(define_insn "@vf_v" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VF + [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr")] RECIPROCAL) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vf.v\t%0,%3,%1.t + vf.v\t%0,%3,%1.t + vf.v\t%0,%3 + vf.v\t%0,%3" + [(set_attr "type" "vdiv") + (set_attr "mode" "")]) + +;; Vector-Vector Floating-Point Sign-Injection. 
+(define_insn "@vfsgnj_vv" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VF + [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr") + (match_operand:VF 4 "register_operand" "vr,vr,vr,vr")] COPYSIGNS) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfsgnj.vv\t%0,%3,%4,%1.t + vfsgnj.vv\t%0,%3,%4,%1.t + vfsgnj.vv\t%0,%3,%4 + vfsgnj.vv\t%0,%3,%4" + [(set_attr "type" "vfsgnj") + (set_attr "mode" "")]) + +;; Vector-Scalar Floating-Point Sign-Injection. +(define_insn "@vfsgnj_vf" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VF + [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr") + (vec_duplicate:VF + (match_operand: 4 "register_operand" "f,f,f,f"))] COPYSIGNS) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfsgnj.vf\t%0,%3,%4,%1.t + vfsgnj.vf\t%0,%3,%4,%1.t + vfsgnj.vf\t%0,%3,%4 + vfsgnj.vf\t%0,%3,%4" + [(set_attr "type" "vfsgnj") + (set_attr "mode" "")]) + +;; vfneg.v vd,vs = vfsgnjn.vv vd,vs,vs. 
+(define_insn "@vfneg_v"
+ [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr")
+ (unspec:VF
+ [(unspec:VF
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+ (neg:VF
+ (match_operand:VF 3 "register_operand" "vr,vr,vr,vr"))
+ (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+ (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vfneg.v\t%0,%3,%1.t
+ vfneg.v\t%0,%3,%1.t
+ vfneg.v\t%0,%3
+ vfneg.v\t%0,%3"
+ [(set_attr "type" "vfsgnj")
+ (set_attr "mode" "")])
+
+;; vfabs.v vd,vs = vfsgnjx.vv vd,vs,vs.
+(define_insn "@vfabs_v"
+ [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr")
+ (unspec:VF
+ [(unspec:VF
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+ (abs:VF
+ (match_operand:VF 3 "register_operand" "vr,vr,vr,vr"))
+ (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+ (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+ (match_operand 5 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vfabs.v\t%0,%3,%1.t
+ vfabs.v\t%0,%3,%1.t
+ vfabs.v\t%0,%3
+ vfabs.v\t%0,%3"
+ [(set_attr "type" "vfsgnj")
+ (set_attr "mode" "")])
+
+;; Vector-Vector Floating-Point Compare Instructions.
+(define_insn "@vmf_vv"
+ [(set (match_operand: 0 "register_operand" "=vr,vr,vm,&vr, vr,vr,vm,&vr, vr,vr,&vr")
+ (unspec:
+ [(unspec:
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,0,vm, vm,vm,0,vm, J,J,J")
+ (any_fcmp:
+ (match_operand:VF 3 "register_operand" "0,vr,vr,vr, 0,vr,vr,vr, 0,vr,vr")
+ (match_operand:VF 4 "register_operand" "vr,0,vr,vr, vr,0,vr,vr, vr,0,vr"))
+ (match_operand: 2 "vector_reg_or_const0_operand" "0,0,0,0, J,J,J,J, J,J,J")] UNSPEC_SELECT)
+ (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK, rK,rK,rK,rK, rK,rK,rK")
+ (match_operand 6 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4,%1.t
+ vmf.vv\t%0,%3,%4
+ vmf.vv\t%0,%3,%4
+ vmf.vv\t%0,%3,%4"
+ [(set_attr "type" "vcmp")
+ (set_attr "mode" "")])
+
+;; Vector-Scalar Floating-Point Compare Instructions.
+(define_insn "@vmf_vf"
+ [(set (match_operand: 0 "register_operand" "=vr,vm,&vr, vr,vm,&vr, vr,&vr")
+ (unspec:
+ [(unspec:
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,0,vm, vm,0,vm, J,J")
+ (any_fcmp:
+ (match_operand:VF 3 "register_operand" "0,vr,vr, 0,vr,vr, 0,vr")
+ (vec_duplicate:VF
+ (match_operand: 4 "register_operand" "f,f,f, f,f,f, f,f")))
+ (match_operand: 2 "vector_reg_or_const0_operand" "0,0,0, J,J,J, J,J")] UNSPEC_SELECT)
+ (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK, rK,rK,rK, rK,rK")
+ (match_operand 6 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4,%1.t
+ vmf.vf\t%0,%3,%4
+ vmf.vf\t%0,%3,%4"
+ [(set_attr "type" "vcmp")
+ (set_attr "mode" "")])
+
+;; Vector-Vector Floating-Point Comparison with no trapping.
+;; These are used by auto-vectorization. +(define_expand "@vmf_vv" + [(set (match_operand: 0 "register_operand") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand") + (any_fcmp_no_trapping: + (match_operand:VF 3 "register_operand") + (match_operand:VF 4 "register_operand")) + (match_operand: 2 "vector_reg_or_const0_operand")] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand") + (match_operand 6 "const_int_operand")] UNSPEC_RVV))] + "TARGET_VECTOR" +{ + rtx mask = gen_reg_rtx (mode); + if (strcmp ("", "ltgt") == 0) + { + emit_insn (gen_vmf_vv (GT, mode, operands[0], + operands[1], operands[2], operands[3], operands[4], + operands[5], operands[6])); + emit_insn (gen_vmf_vv (GT, mode, mask, + operands[1], operands[2], operands[4], operands[3], + operands[5], operands[6])); + emit_insn (gen_vm_mm (IOR, mode, operands[0], operands[0], mask, + operands[5], operands[6])); + } + else + { + /* Example of implementing isgreater() + vmfeq.vv v0, va, va ;; Only set where A is not NaN. + vmfeq.vv v1, vb, vb ;; Only set where B is not NaN. + vmand.mm v0, v0, v1 ;; Only set where A and B are ordered, + vmfgt.vv v0, va, vb, v0.t ;; so only set flags on ordered values. */ + emit_insn (gen_vmf_vv (EQ, mode, operands[0], + operands[1], operands[2], operands[3], operands[3], + operands[5], operands[6])); + emit_insn (gen_vmf_vv (EQ, mode, mask, + operands[1], operands[2], operands[4], operands[4], + operands[5], operands[6])); + emit_insn (gen_vm_mm (AND, mode, operands[0], operands[0], mask, + operands[5], operands[6])); + + rtx all_ones = gen_reg_rtx (mode); + emit_insn (gen_vmset_m (all_ones, operands[5], + rvv_gen_policy ())); + + if (strcmp ("", "ordered") != 0) + { + if (strcmp ("", "unordered") == 0) + emit_insn (gen_vmnot_m (mode, operands[0], operands[0], operands[5], operands[6])); + else + { + enum rtx_code code = strcmp ("", "unlt") == 0 ? LT : + strcmp ("", "unle") == 0 ? LE : + strcmp ("", "unge") == 0 ? 
GE : + strcmp ("", "ungt") == 0 ? GT : EQ; + emit_insn (gen_vmf_vv (code, mode, operands[0], + operands[0], all_ones, operands[3], operands[4], + operands[5], operands[6])); + } + } + } + DONE; +}) + +;; Floating-Point Classify Instruction. +(define_insn "@vfclass_v" + [(set (match_operand: 0 "register_operand" "=vd,vd,vr,vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec: + [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr")] UNSPEC_FCLASS) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfclass.v\t%0,%3,%1.t + vfclass.v\t%0,%3,%1.t + vfclass.v\t%0,%3 + vfclass.v\t%0,%3" + [(set_attr "type" "vfclass") + (set_attr "mode" "")]) + +;; Vector-Scalar Floating-Point merge. +(define_insn "@vfmerge_vfm" + [(set (match_operand:VF 0 "register_operand" "=vd,vd") + (unspec:VF + [(match_operand:VF 2 "vector_reg_or_const0_operand" "0,J") + (unspec:VF + [(match_operand: 1 "register_operand" "vm,vm") + (match_operand:VF 3 "register_operand" "vr,vr") + (vec_duplicate:VF + (match_operand: 4 "register_operand" "f,f"))] UNSPEC_MERGE) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfmerge.vfm\t%0,%3,%4,%1 + vfmerge.vfm\t%0,%3,%4,%1" + [(set_attr "type" "vmerge") + (set_attr "mode" "")]) + +;; Vector-Scalar Floating-Point Move. 
+(define_insn "@vfmv_v_f" + [(set (match_operand:VF 0 "register_operand" "=vr,vr") + (unspec:VF + [(match_operand:VF 1 "vector_reg_or_const0_operand" "0,J") + (vec_duplicate:VF + (match_operand: 2 "register_operand" "f,f")) + (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK") + (match_operand 4 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "vfmv.v.f\t%0,%2" + [(set_attr "type" "vmove") + (set_attr "mode" "")]) + +;; Convert float to unsigned integer. +;; Convert float to signed integer. +(define_insn "@vfcvt_x_f_v" + [(set (match_operand: 0 "register_operand" "=vd,vd,vr,vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec: + [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr")] FCVT) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfcvt.x.f.v\t%0,%3,%1.t + vfcvt.x.f.v\t%0,%3,%1.t + vfcvt.x.f.v\t%0,%3 + vfcvt.x.f.v\t%0,%3" + [(set_attr "type" "vfcvt") + (set_attr "mode" "")]) + +;; Convert float to unsigned integer, truncating. +;; Convert float to signed integer, truncating. 
+(define_insn "@vfcvt_rtz_x_f_v" + [(set (match_operand: 0 "register_operand" "=vd,vd,vr,vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_fix: + (match_operand:VF 3 "register_operand" "vr,vr,vr,vr")) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfcvt.rtz.x.f.v\t%0,%3,%1.t + vfcvt.rtz.x.f.v\t%0,%3,%1.t + vfcvt.rtz.x.f.v\t%0,%3 + vfcvt.rtz.x.f.v\t%0,%3" + [(set_attr "type" "vfcvt") + (set_attr "mode" "")]) + +;; Convert unsigned integer to float. +;; Convert signed integer to float. +(define_insn "@vfcvt_f_x_v" + [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr") + (unspec:VF + [(unspec:VF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_float:VF + (match_operand: 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfcvt.f.x.v\t%0,%3,%1.t + vfcvt.f.x.v\t%0,%3,%1.t + vfcvt.f.x.v\t%0,%3 + vfcvt.f.x.v\t%0,%3" + [(set_attr "type" "vfcvt") + (set_attr "mode" "")]) + +;; Convert float to double-width unsigned integer. +;; Convert float to double-width signed integer. 
+(define_insn "@vfwcvt_x_f_v" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec: + [(match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")] FCVT) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwcvt.x.f.v\t%0,%3,%1.t + vfwcvt.x.f.v\t%0,%3,%1.t + vfwcvt.x.f.v\t%0,%3 + vfwcvt.x.f.v\t%0,%3" + [(set_attr "type" "vfwcvt") + (set_attr "mode" "")]) + +;; Convert float to double-width unsigned integer, truncating. +;; Convert float to double-width signed integer, truncating. +(define_insn "@vfwcvt_rtz_x_f_v" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_fix: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwcvt.rtz.x.f.v\t%0,%3,%1.t + vfwcvt.rtz.x.f.v\t%0,%3,%1.t + vfwcvt.rtz.x.f.v\t%0,%3 + vfwcvt.rtz.x.f.v\t%0,%3" + [(set_attr "type" "vfwcvt") + (set_attr "mode" "")]) + +;; Convert unsigned integer to double-width float. +;; Convert signed integer to double-width float. 
+(define_insn "@vfwcvt_f_x_v" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_float: + (match_operand:VWINOQI 3 "register_operand" "vr,vr,vr,vr")) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwcvt.f.x.v\t%0,%3,%1.t + vfwcvt.f.x.v\t%0,%3,%1.t + vfwcvt.f.x.v\t%0,%3 + vfwcvt.f.x.v\t%0,%3" + [(set_attr "type" "vfwcvt") + (set_attr "mode" "")]) + +;; Convert single-width float to double-width float +(define_insn "@vfwcvt_f_f_v" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (float_extend: + (match_operand:VWF 3 "register_operand" "vr,vr,vr,vr")) + (match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfwcvt.f.f.v\t%0,%3,%1.t + vfwcvt.f.f.v\t%0,%3,%1.t + vfwcvt.f.f.v\t%0,%3 + vfwcvt.f.f.v\t%0,%3" + [(set_attr "type" "vfwcvt") + (set_attr "mode" "")]) + +;; Convert double-width float to unsigned integer. +;; Convert double-width float to signed integer. 
+(define_insn "@vfncvt_x_f_w" + [(set (match_operand:VWINOQI 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec:VWINOQI + [(unspec:VWINOQI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VWINOQI + [(match_operand: 3 "register_operand" "vr,vr,vr,vr")] FCVT) + (match_operand:VWINOQI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfncvt.x.f.w\t%0,%3,%1.t + vfncvt.x.f.w\t%0,%3,%1.t + vfncvt.x.f.w\t%0,%3 + vfncvt.x.f.w\t%0,%3" + [(set_attr "type" "vfncvt") + (set_attr "mode" "")]) + +;; Convert double-width float to unsigned integer, truncating. +;; Convert double-width float to signed integer, truncating. +(define_insn "@vfncvt_rtz_x_f_w" + [(set (match_operand:VWINOQI 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec:VWINOQI + [(unspec:VWINOQI + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_fix:VWINOQI + (match_operand: 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VWINOQI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfncvt.rtz.x.f.w\t%0,%3,%1.t + vfncvt.rtz.x.f.w\t%0,%3,%1.t + vfncvt.rtz.x.f.w\t%0,%3 + vfncvt.rtz.x.f.w\t%0,%3" + [(set_attr "type" "vfncvt") + (set_attr "mode" "")]) + +;; Convert double-width unsigned integer to float. +;; Convert double-width signed integer to float. 
+(define_insn "@vfncvt_f_x_w" + [(set (match_operand:VWF 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec:VWF + [(unspec:VWF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (any_float:VWF + (match_operand: 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VWF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfncvt.f.x.w\t%0,%3,%1.t + vfncvt.f.x.w\t%0,%3,%1.t + vfncvt.f.x.w\t%0,%3 + vfncvt.f.x.w\t%0,%3" + [(set_attr "type" "vfncvt") + (set_attr "mode" "")]) + +;; Convert double-width float to single-width float. +(define_insn "@vfncvt_f_f_w" + [(set (match_operand:VWF 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec:VWF + [(unspec:VWF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (float_truncate:VWF + (match_operand: 3 "register_operand" "vr,vr,vr,vr")) + (match_operand:VWF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfncvt.f.f.w\t%0,%3,%1.t + vfncvt.f.f.w\t%0,%3,%1.t + vfncvt.f.f.w\t%0,%3 + vfncvt.f.f.w\t%0,%3" + [(set_attr "type" "vfncvt") + (set_attr "mode" "")]) + +;; Convert double-width float to single-width float, rounding towards odd. 
+(define_insn "@vfncvt_rod_f_f_w" + [(set (match_operand:VWF 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec:VWF + [(unspec:VWF + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec:VWF + [(float_extend:VWF + (match_operand: 3 "register_operand" "vr,vr,vr,vr"))] UNSPEC_ROD) + (match_operand:VWF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT) + (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 5 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vfncvt.rod.f.f.w\t%0,%3,%1.t + vfncvt.rod.f.f.w\t%0,%3,%1.t + vfncvt.rod.f.f.w\t%0,%3 + vfncvt.rod.f.f.w\t%0,%3" + [(set_attr "type" "vfncvt") + (set_attr "mode" "")]) + +;; ------------------------------------------------------------------------------- +;; ---- 14. Vector Reduction Operations +;; ------------------------------------------------------------------------------- +;; Includes: +;; - 14.1 Vector Single-Width Integer Reduction Instructions +;; - 14.2 Vector Widening Integer Reduction Instructions +;; - 14.3 Vector Single-Width Floating-Point Reduction +;; - 14.4 Vector Widening Floating-Point Reduction Instructions +;; ------------------------------------------------------------------------------- + +;; Integer simple-reductions. 
+(define_insn "@vred_vs" + [(set (match_operand: 0 "register_operand" "=vr,vr,vr,vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec: + [(match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J") + (match_operand:VI 3 "register_operand" "vr,vr,vr,vr") + (match_operand: 4 "register_operand" "vr,vr,vr,vr")] REDUC) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vred.vs\t%0,%3,%4,%1.t + vred.vs\t%0,%3,%4,%1.t + vred.vs\t%0,%3,%4 + vred.vs\t%0,%3,%4" + [(set_attr "type" "vreduc") + (set_attr "mode" "")]) + +;; Signed/Unsigned sum reduction into double-width accumulator. +(define_insn "@vwredsum_vs" + [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr") + (unspec: + [(unspec: + [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J") + (unspec: + [(match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J") + (any_extend: + (match_operand:VWREDI 3 "register_operand" "vr,vr,vr,vr")) + (match_operand: 4 "register_operand" "vr,vr,vr,vr")] UNSPEC_REDUC_SUM) + (match_dup 2)] UNSPEC_SELECT) + (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK") + (match_operand 6 "const_int_operand") + (reg:SI VL_REGNUM) + (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))] + "TARGET_VECTOR" + "@ + vwredsum.vs\t%0,%3,%4,%1.t + vwredsum.vs\t%0,%3,%4,%1.t + vwredsum.vs\t%0,%3,%4 + vwredsum.vs\t%0,%3,%4" + [(set_attr "type" "vwreduc") + (set_attr "mode" "")]) + +;; Floating-Point simple-reductions. 
+(define_insn "@vfred_vs"
+ [(set (match_operand: 0 "register_operand" "=vr,vr,vr,vr")
+ (unspec:
+ [(unspec:
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+ (unspec:
+ [(match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")
+ (match_operand:VF 3 "register_operand" "vr,vr,vr,vr")
+ (match_operand: 4 "register_operand" "vr,vr,vr,vr")] FREDUC)
+ (match_dup 2)] UNSPEC_SELECT)
+ (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+ (match_operand 6 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vfred.vs\t%0,%3,%4,%1.t
+ vfred.vs\t%0,%3,%4,%1.t
+ vfred.vs\t%0,%3,%4
+ vfred.vs\t%0,%3,%4"
+ [(set_attr "type" "vreduc")
+ (set_attr "mode" "")])
+
+;; Unordered sum reduction into double-width accumulator.
+(define_insn "@vfwredusum_vs"
+ [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr")
+ (unspec:
+ [(unspec:
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+ (unspec:
+ [(match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")
+ (float_extend:
+ (match_operand:VWREDF 3 "register_operand" "vr,vr,vr,vr"))
+ (match_operand: 4 "register_operand" "vr,vr,vr,vr")] UNSPEC_REDUC_UNORDERED_SUM)
+ (match_dup 2)] UNSPEC_SELECT)
+ (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+ (match_operand 6 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vfwredusum.vs\t%0,%3,%4,%1.t
+ vfwredusum.vs\t%0,%3,%4,%1.t
+ vfwredusum.vs\t%0,%3,%4
+ vfwredusum.vs\t%0,%3,%4"
+ [(set_attr "type" "vwreduc")
+ (set_attr "mode" "")])
+
+;; Ordered sum reduction into double-width accumulator.
+(define_insn "@vfwredosum_vs"
+ [(set (match_operand: 0 "register_operand" "=&vr,&vr,&vr,&vr")
+ (unspec:
+ [(unspec:
+ [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+ (unspec:
+ [(match_operand: 2 "vector_reg_or_const0_operand" "0,J,0,J")
+ (float_extend:
+ (match_operand:VWREDF 3 "register_operand" "vr,vr,vr,vr"))
+ (match_operand: 4 "register_operand" "vr,vr,vr,vr")] UNSPEC_REDUC_ORDERED_SUM)
+ (match_dup 2)] UNSPEC_SELECT)
+ (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+ (match_operand 6 "const_int_operand")
+ (reg:SI VL_REGNUM)
+ (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+ "TARGET_VECTOR"
+ "@
+ vfwredosum.vs\t%0,%3,%4,%1.t
+ vfwredosum.vs\t%0,%3,%4,%1.t
+ vfwredosum.vs\t%0,%3,%4
+ vfwredosum.vs\t%0,%3,%4"
+ [(set_attr "type" "vwreduc")
+ (set_attr "mode" "")])
+
+;; -------------------------------------------------------------------------------
+;; ---- 15. Vector Mask Instructions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 15.1 Vector Mask-Register Logical Instructions
+;; - 15.2 Vector mask population count vpopc
+;; - 15.3 vfirst find-first-set mask bit
+;; - 15.4 vmsbf.m set-before-first mask bit
+;; - 15.5 vmsif.m set-including-first mask bit
+;; - 15.6 vmsof.m set-only-first mask bit
+;; - 15.8 Vector Iota Instruction
+;; - 15.9 Vector Element Index Instructions
+;; -------------------------------------------------------------------------------
+
+;; Vector Mask-Register Logical Instructions.
+(define_insn "@vm_mm"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(any_bitwise:VB
+         (match_operand:VB 1 "register_operand" "vr")
+         (match_operand:VB 2 "register_operand" "vr"))
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vm.mm\t%0,%1,%2"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+(define_insn "@vmn_mm"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(not:VB
+         (any_bitwise:VB
+           (match_operand:VB 1 "register_operand" "vr")
+           (match_operand:VB 2 "register_operand" "vr")))
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vm.mm\t%0,%1,%2"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+(define_insn "@vmnot_mm"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(any_logicalnot:VB
+         (match_operand:VB 1 "register_operand" "vr")
+         (not:VB
+           (match_operand:VB 2 "register_operand" "vr")))
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmn.mm\t%0,%1,%2"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+;; vmmv.m vd,vs -> vmand.mm vd,vs,vs # Copy mask register
+(define_insn "@vmmv_m"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(match_operand:VB 1 "register_operand" "vr")
+       (match_operand 2 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 3 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmmv.m\t%0,%1"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+;; vmclr.m vd -> vmxor.mm vd,vd,vd # Clear mask register
+(define_insn "@vmclr_m"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(vec_duplicate:VB (const_int 0))
+       (match_operand 1 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 2 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmclr.m\t%0"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+;; vmset.m vd -> vmxnor.mm vd,vd,vd # Set mask register
+(define_insn "@vmset_m"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(vec_duplicate:VB (const_int 1))
+       (match_operand 1 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 2 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmset.m\t%0"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+;; vmnot.m vd,vs -> vmnand.mm vd,vs,vs # Invert bits
+(define_insn "@vmnot_m"
+  [(set (match_operand:VB 0 "register_operand" "=vr")
+    (unspec:VB
+      [(not:VB
+         (match_operand:VB 1 "register_operand" "vr"))
+       (match_operand 2 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 3 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmnot.m\t%0,%1"
+  [(set_attr "type" "vmask")
+   (set_attr "mode" "")])
+
+;; Vector mask population count vpopc
+(define_insn "@vcpop__m"
+  [(set (match_operand:X 0 "register_operand" "=r,r")
+    (unspec:X
+      [(unspec:VB
+         [(match_operand:VB 1 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand:VB 2 "register_operand" "vr,vr")] UNSPEC_VCPOP)
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vcpop.m\t%0,%2,%1.t
+   vcpop.m\t%0,%2"
+  [(set_attr "type" "vcpop")
+   (set_attr "mode" "")])
+
+;; vfirst find-first-set mask bit
+(define_insn "@vfirst__m"
+  [(set (match_operand:X 0 "register_operand" "=r,r")
+    (unspec:X
+      [(unspec:VB
+         [(match_operand:VB 1 "vector_reg_or_const0_operand" "vm,J")
+          (match_operand:VB 2 "register_operand" "vr,vr")] UNSPEC_FIRST)
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vfirst.m\t%0,%2,%1.t
+   vfirst.m\t%0,%2"
+  [(set_attr "type" "vmsetbit")
+   (set_attr "mode" "")])
+
+;; vmsbf.m set-before-first mask bit.
+;; vmsif.m set-including-first mask bit.
+;; vmsof.m set-only-first mask bit.
+(define_insn "@vm_m"
+  [(set (match_operand:VB 0 "register_operand" "=&vr,&vr,&vr")
+    (unspec:VB
+      [(unspec:VB
+         [(match_operand:VB 1 "vector_reg_or_const0_operand" "vm,vm,J")
+          (unspec:VB
+            [(match_operand:VB 3 "register_operand" "vr,vr,vr")] MASK_SET)
+          (match_operand:VB 2 "vector_reg_or_const0_operand" "0,J,J")] UNSPEC_SELECT)
+       (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK")
+       (match_operand 5 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vm.m\t%0,%3,%1.t
+   vm.m\t%0,%3,%1.t
+   vm.m\t%0,%3"
+  [(set_attr "type" "vmsetbit")
+   (set_attr "mode" "")])
+
+;; Vector Iota Instruction.
+(define_insn "@viota_m"
+  [(set (match_operand:VI 0 "register_operand" "=&vr,&vr,&vr,&vr")
+    (unspec:VI
+      [(unspec:VI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:VI
+            [(match_operand: 3 "register_operand" "vr,vr,vr,vr")] UNSPEC_IOTA)
+          (match_operand:VI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 5 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   viota.m\t%0,%3,%1.t
+   viota.m\t%0,%3,%1.t
+   viota.m\t%0,%3
+   viota.m\t%0,%3"
+  [(set_attr "type" "viota")
+   (set_attr "mode" "")])
+
+;; Vector Element Index Instructions.
+(define_insn "@vid_v"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd,vr,vr")
+    (unspec:VI
+      [(unspec:VI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:VI
+            [(match_operand 3 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+             (match_operand 4 "const_int_operand")] UNSPEC_ID)
+          (match_operand:VI 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vid.v\t%0,%1.t
+   vid.v\t%0,%1.t
+   vid.v\t%0
+   vid.v\t%0"
+  [(set_attr "type" "vid")
+   (set_attr "mode" "")])
+
+;; -------------------------------------------------------------------------------
+;; ---- 16. Vector Permutation Instructions
+;; -------------------------------------------------------------------------------
+;; Includes:
+;; - 16.1 Integer Scalar Move Instructions
+;; - 16.2 Floating-Point Scalar Move Instructions
+;; - 16.3 Vector Slide Instructions
+;; - 16.4 Vector Register Gather Instructions
+;; - 16.5 Vector Compress Instructions
+;; -------------------------------------------------------------------------------
+
+;; Integer Scalar Move Instructions.
+(define_insn "@vmv_x_s"
+  [(set (match_operand: 0 "register_operand" "=r")
+    (unspec:
+      [(vec_select:
+         (match_operand:VNOT64BITI 1 "register_operand" "vr")
+         (parallel [(const_int 0)]))
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmv.x.s\t%0,%1"
+  [(set_attr "type" "vmv_x_s")
+   (set_attr "mode" "")])
+
+(define_expand "@vmv_x_s"
+  [(set (match_operand: 0 "register_operand")
+    (unspec:
+      [(vec_select:
+         (match_operand:V64BITI 1 "register_operand")
+         (parallel [(const_int 0)]))
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  {
+    if (!TARGET_64BIT)
+      {
+        rtx vector = gen_reg_rtx (mode);
+        rtx shift = force_reg (Pmode, GEN_INT (32));
+        rtx lo = gen_lowpart (Pmode, operands[0]);
+        rtx hi = gen_highpart (Pmode, operands[0]);
+        emit_insn (gen_vlshr_vx (vector,
+                                 const0_rtx, const0_rtx, operands[1],
+                                 shift, GEN_INT (1), rvv_gen_policy ()));
+        emit_insn (gen_vmv_x_s_lo (lo, operands[1]));
+        emit_insn (gen_vmv_x_s_hi (hi, vector));
+        DONE;
+      }
+
+    emit_insn (gen_vmv_x_s_di_internal (operands[0], operands[1]));
+    DONE;
+  })
+
+(define_insn "vmv_x_s_di_internal"
+  [(set (match_operand: 0 "register_operand" "=r")
+    (unspec:
+      [(vec_select:
+         (match_operand:V64BITI 1 "register_operand" "vr")
+         (parallel [(const_int 0)]))
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmv.x.s\t%0,%1"
+  [(set_attr "type" "vmv_x_s")
+   (set_attr "mode" "")])
+
+(define_insn "vmv_x_s_lo"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+    (unspec:SI
+      [(vec_select:DI
+         (match_operand:V64BITI 1 "register_operand" "vr")
+         (parallel [(const_int 0)]))
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_LO))]
+  "TARGET_VECTOR"
+  "vmv.x.s\t%0,%1"
+  [(set_attr "type" "vmv_x_s")
+   (set_attr "mode" "")])
+
+(define_insn "vmv_x_s_hi"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+    (unspec:SI
+      [(vec_select:DI
+         (match_operand:V64BITI 1 "register_operand" "vr")
+         (parallel [(const_int 0)]))
+       (reg:SI VTYPE_REGNUM)] UNSPEC_HI))]
+  "TARGET_VECTOR"
+  "vmv.x.s\t%0,%1"
+  [(set_attr "type" "vmv_x_s")
+   (set_attr "mode" "")])
+
+(define_insn "@vmv_s_x_internal"
+  [(set (match_operand:VI 0 "register_operand" "=vr,vr,vr,vr")
+    (unspec:VI
+      [(unspec:VI
+         [(vec_duplicate:VI
+            (match_operand: 2 "reg_or_0_operand" "r,J,r,J"))
+          (match_operand:VI 1 "vector_reg_or_const0_operand" "0,0,J,J")
+          (const_int 1)] UNSPEC_VMV_SX)
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vmv.s.x\t%0,%2
+   vmv.s.x\t%0,zero
+   vmv.s.x\t%0,%2
+   vmv.s.x\t%0,zero"
+  [(set_attr "type" "vmv_s_x")
+   (set_attr "mode" "")])
+
+(define_insn "@vmv_s_x_32bit"
+  [(set (match_operand:V64BITI 0 "register_operand" "=vr,vr,vr,vr")
+    (unspec:V64BITI
+      [(unspec:V64BITI
+         [(vec_duplicate:V64BITI
+            (sign_extend:
+              (match_operand:SI 2 "reg_or_0_operand" "r,J,r,J")))
+          (match_operand:V64BITI 1 "vector_reg_or_const0_operand" "0,0,J,J")
+          (const_int 1)] UNSPEC_VMV_SX)
+       (match_operand:SI 3 "csr_operand" "rK,rK,rK,rK")
+       (match_operand:SI 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vmv.s.x\t%0,%2
+   vmv.s.x\t%0,zero
+   vmv.s.x\t%0,%2
+   vmv.s.x\t%0,zero"
+  [(set_attr "type" "vmv_s_x")
+   (set_attr "mode" "")])
+
+;; This pattern is used by auto-vectorization to
+;; initialize a vector whose element 0 is zero.
+;; We don't want to use a subreg to generate
+;; conversions between floating-point and integer.
+(define_insn "@vmv_s_x_internal"
+  [(set (match_operand:VF 0 "register_operand" "=vr")
+    (unspec:VF
+      [(unspec:VF
+         [(const_int 0)
+          (const_int 1)] UNSPEC_VMV_SX)
+       (match_operand 1 "p_reg_or_const_csr_operand" "rK")
+       (match_operand 2 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vmv.s.x\t%0,zero"
+  [(set_attr "type" "vmv_s_x")
+   (set_attr "mode" "")])
+
+;; Floating-Point Scalar Move Instructions.
+(define_insn "@vfmv_f_s"
+  [(set (match_operand: 0 "register_operand" "=f")
+    (unspec:
+      [(vec_select:
+         (match_operand:VF 1 "register_operand" "vr")
+         (parallel [(const_int 0)]))
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vfmv.f.s\t%0,%1"
+  [(set_attr "type" "vfmv_f_s")
+   (set_attr "mode" "")])
+
+(define_insn "@vfmv_s_f"
+  [(set (match_operand:VF 0 "register_operand" "=vr,vr")
+    (unspec:VF
+      [(unspec:VF
+         [(vec_duplicate:VF
+            (match_operand: 2 "register_operand" "f,f"))
+          (match_operand:VF 1 "vector_reg_or_const0_operand" "0,J")
+          (const_int 1)] UNSPEC_VMV_SX)
+       (match_operand 3 "p_reg_or_const_csr_operand" "rK,rK")
+       (match_operand 4 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vfmv.s.f\t%0,%2"
+  [(set_attr "type" "vfmv_s_f")
+   (set_attr "mode" "")])
+
+;; Vector Slideup/Slidedown Instructions.
+(define_insn "@vslide_vx"
+  [(set (match_operand:V 0 "register_operand" "=&vr,&vr,&vr,&vr,&vr,&vr,&vr,&vr")
+    (unspec:V
+      [(unspec:V
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:V
+            [(match_operand:V 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")
+             (match_operand:V 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (match_operand 4 "p_reg_or_uimm5_operand" "r,K,r,K,r,K,r,K")] SLIDE_UP)
+          (match_dup 2)] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide.vx\t%0,%3,%4,%1.t
+   vslide.vi\t%0,%3,%4,%1.t
+   vslide.vx\t%0,%3,%4,%1.t
+   vslide.vi\t%0,%3,%4,%1.t
+   vslide.vx\t%0,%3,%4
+   vslide.vi\t%0,%3,%4
+   vslide.vx\t%0,%3,%4
+   vslide.vi\t%0,%3,%4"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+(define_insn "@vslide_vx"
+  [(set (match_operand:V 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr")
+    (unspec:V
+      [(unspec:V
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:V
+            [(match_operand:V 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")
+             (match_operand:V 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (match_operand 4 "p_reg_or_uimm5_operand" "r,K,r,K,r,K,r,K")] SLIDE_DOWN)
+          (match_dup 2)] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide.vx\t%0,%3,%4,%1.t
+   vslide.vi\t%0,%3,%4,%1.t
+   vslide.vx\t%0,%3,%4,%1.t
+   vslide.vi\t%0,%3,%4,%1.t
+   vslide.vx\t%0,%3,%4
+   vslide.vi\t%0,%3,%4
+   vslide.vx\t%0,%3,%4
+   vslide.vi\t%0,%3,%4"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+;; Vector Integer Slide1up/Slide1down Instructions.
+(define_insn "@vslide1_vx_internal"
+  [(set (match_operand:VI 0 "register_operand" "=&vr,&vr,&vr,&vr,&vr,&vr,&vr,&vr")
+    (unspec:VI
+      [(unspec:VI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:VI
+            [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (match_operand: 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J")] SLIDE1_UP)
+          (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+(define_insn "@vslide1_vx_internal"
+  [(set (match_operand:VI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr")
+    (unspec:VI
+      [(unspec:VI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:VI
+            [(match_operand:VI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (match_operand: 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J")] SLIDE1_DOWN)
+          (match_operand:VI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+(define_insn "@vslide1_vx_32bit"
+  [(set (match_operand:V64BITI 0 "register_operand" "=&vr,&vr,&vr,&vr,&vr,&vr,&vr,&vr")
+    (unspec:V64BITI
+      [(unspec:V64BITI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:V64BITI
+            [(match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (sign_extend:
+               (match_operand:SI 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J"))] SLIDE1_UP)
+          (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+       (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand:SI 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+(define_insn "@vslide1_vx_32bit"
+  [(set (match_operand:V64BITI 0 "register_operand" "=vd,vd,vd,vd,vr,vr,vr,vr")
+    (unspec:V64BITI
+      [(unspec:V64BITI
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:V64BITI
+            [(match_operand:V64BITI 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (sign_extend:
+               (match_operand:SI 4 "reg_or_0_operand" "r,J,r,J,r,J,r,J"))] SLIDE1_DOWN)
+          (match_operand:V64BITI 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+       (match_operand:SI 5 "csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand:SI 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4,%1.t
+   vslide1.vx\t%0,%3,zero,%1.t
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero
+   vslide1.vx\t%0,%3,%4
+   vslide1.vx\t%0,%3,zero"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+;; Vector Floating-Point Slide1up/Slide1down Instructions.
+(define_insn "@vfslide1_vf"
+  [(set (match_operand:VF 0 "register_operand" "=vd,vd,vr,vr")
+    (unspec:VF
+      [(unspec:VF
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:VF
+            [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr")
+             (match_operand: 4 "register_operand" "f,f,f,f")] SLIDE1_DOWN)
+          (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vfslide1.vf\t%0,%3,%4,%1.t
+   vfslide1.vf\t%0,%3,%4,%1.t
+   vfslide1.vf\t%0,%3,%4
+   vfslide1.vf\t%0,%3,%4"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+(define_insn "@vfslide1_vf"
+  [(set (match_operand:VF 0 "register_operand" "=&vr,&vr,&vr,&vr")
+    (unspec:VF
+      [(unspec:VF
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:VF
+            [(match_operand:VF 3 "register_operand" "vr,vr,vr,vr")
+             (match_operand: 4 "register_operand" "f,f,f,f")] SLIDE1_UP)
+          (match_operand:VF 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vfslide1.vf\t%0,%3,%4,%1.t
+   vfslide1.vf\t%0,%3,%4,%1.t
+   vfslide1.vf\t%0,%3,%4
+   vfslide1.vf\t%0,%3,%4"
+  [(set_attr "type" "vslide")
+   (set_attr "mode" "")])
+
+;; Vector-Vector vrgather instruction.
+(define_insn "@vrgather_vv"
+  [(set (match_operand:V 0 "register_operand" "=&vr,&vr,&vr,&vr")
+    (unspec:V
+      [(unspec:V
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:V
+            [(match_operand:V 3 "register_operand" "vr,vr,vr,vr")
+             (match_operand: 4 "register_operand" "vr,vr,vr,vr")] UNSPEC_RGATHER)
+          (match_operand:V 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vrgather.vv\t%0,%3,%4,%1.t
+   vrgather.vv\t%0,%3,%4,%1.t
+   vrgather.vv\t%0,%3,%4
+   vrgather.vv\t%0,%3,%4"
+  [(set_attr "type" "vgather")
+   (set_attr "mode" "")])
+
+;; Vector-Vector vrgatherei16 instruction.
+(define_insn "@vrgatherei16_vv"
+  [(set (match_operand:V16 0 "register_operand" "=&vr,&vr,&vr,&vr")
+    (unspec:V16
+      [(unspec:V16
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,J,J")
+          (unspec:V16
+            [(match_operand:V16 3 "register_operand" "vr,vr,vr,vr")
+             (match_operand: 4 "register_operand" "vr,vr,vr,vr")] UNSPEC_RGATHEREI16)
+          (match_operand:V16 2 "vector_reg_or_const0_operand" "0,J,0,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vrgatherei16.vv\t%0,%3,%4,%1.t
+   vrgatherei16.vv\t%0,%3,%4,%1.t
+   vrgatherei16.vv\t%0,%3,%4
+   vrgatherei16.vv\t%0,%3,%4"
+  [(set_attr "type" "vgather")
+   (set_attr "mode" "")])
+
+;; Vector-Scalar vrgather instruction.
+(define_insn "@vrgather_vx"
+  [(set (match_operand:V 0 "register_operand" "=&vr,&vr,&vr,&vr,&vr,&vr,&vr,&vr")
+    (unspec:V
+      [(unspec:V
+         [(match_operand: 1 "vector_reg_or_const0_operand" "vm,vm,vm,vm,J,J,J,J")
+          (unspec:V
+            [(match_operand:V 3 "register_operand" "vr,vr,vr,vr,vr,vr,vr,vr")
+             (match_operand 4 "p_reg_or_uimm5_operand" "r,K,r,K,r,K,r,K")] UNSPEC_RGATHER)
+          (match_operand:V 2 "vector_reg_or_const0_operand" "0,0,J,J,0,0,J,J")] UNSPEC_SELECT)
+       (match_operand 5 "p_reg_or_const_csr_operand" "rK,rK,rK,rK,rK,rK,rK,rK")
+       (match_operand 6 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "@
+   vrgather.vx\t%0,%3,%4,%1.t
+   vrgather.vi\t%0,%3,%4,%1.t
+   vrgather.vx\t%0,%3,%4,%1.t
+   vrgather.vi\t%0,%3,%4,%1.t
+   vrgather.vx\t%0,%3,%4
+   vrgather.vi\t%0,%3,%4
+   vrgather.vx\t%0,%3,%4
+   vrgather.vi\t%0,%3,%4"
+  [(set_attr "type" "vgather")
+   (set_attr "mode" "")])
+
+;; Vector Compress Instruction.
+(define_insn "@vcompress_vm"
+  [(set (match_operand:V 0 "register_operand" "=&vr,&vr")
+    (unspec:V
+      [(unspec:V
+         [(match_operand: 1 "register_operand" "vm,vm")
+          (match_operand:V 2 "vector_reg_or_const0_operand" "0,J")
+          (match_operand:V 3 "register_operand" "vr,vr")] UNSPEC_COMPRESS)
+       (match_operand 4 "p_reg_or_const_csr_operand" "rK,rK")
+       (match_operand 5 "const_int_operand")
+       (reg:SI VL_REGNUM)
+       (reg:SI VTYPE_REGNUM)] UNSPEC_RVV))]
+  "TARGET_VECTOR"
+  "vcompress.vm\t%0,%3,%1"
+  [(set_attr "type" "vcompress")
+   (set_attr "mode" "")])
\ No newline at end of file