From patchwork Wed Nov 10 12:43:41 2021
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 47399
From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 1/5] Add IFN_COND_FMIN/FMAX functions
Date: Wed, 10 Nov 2021 12:43:41 +0000

This patch adds conditional forms of FMAX and FMIN, following the
pattern of the existing conditional binary functions.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard


gcc/
	* doc/md.texi (cond_fmin@var{mode}, cond_fmax@var{mode}): Document.
	* optabs.def (cond_fmin_optab, cond_fmax_optab): New optabs.
	* internal-fn.def (COND_FMIN, COND_FMAX): New functions.
	* internal-fn.c (first_commutative_argument): Handle them.
	(FOR_EACH_COND_FN_PAIR): Likewise.
	* match.pd (UNCOND_BINARY, COND_BINARY): Likewise.
	* config/aarch64/aarch64-sve.md (cond_<fmaxmin><mode>): New
	pattern.

gcc/testsuite/
	* gcc.target/aarch64/sve/cond_fmaxnm_5.c: New test.
	* gcc.target/aarch64/sve/cond_fmaxnm_5_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_6.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_6_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_7.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_7_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_8.c: Likewise.
	* gcc.target/aarch64/sve/cond_fmaxnm_8_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_5.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_5_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_6.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_6_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_7.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_7_run.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_8.c: Likewise.
	* gcc.target/aarch64/sve/cond_fminnm_8_run.c: Likewise.
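For context (this example is not part of the patch; the function name and
data are invented), the new internal functions target conditional
fmax/fmin loops of roughly the following shape, which the vectorizer can
now emit as a single predicated FMAXNM on SVE rather than an unpredicated
FMAXNM followed by a SEL:

```cpp
#include <cmath>

// Illustrative loop shape only: when vectorized for SVE, the
// conditional fmax below maps onto IFN_COND_FMAX, i.e. one predicated
// FMAXNM whose inactive lanes take their value from b[i].
void
cond_fmax_loop (double *r, const double *pred,
                const double *a, const double *b, int n)
{
  for (int i = 0; i < n; ++i)
    r[i] = pred[i] > 0 ? std::fmax (a[i], b[i]) : b[i];
}
```

Note that the "else" value is one of the fmax operands, which matches the
fallback-operand form of the IFN_COND_* functions.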
---
 gcc/config/aarch64/aarch64-sve.md             | 19 +++++++++++-
 gcc/doc/md.texi                               |  4 +++
 gcc/internal-fn.c                             |  4 +++
 gcc/internal-fn.def                           |  2 ++
 gcc/match.pd                                  |  2 ++
 gcc/optabs.def                                |  2 ++
 .../gcc.target/aarch64/sve/cond_fmaxnm_5.c    | 28 ++++++++++++++++++
 .../aarch64/sve/cond_fmaxnm_5_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fmaxnm_6.c    | 22 ++++++++++++++
 .../aarch64/sve/cond_fmaxnm_6_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fmaxnm_7.c    | 27 +++++++++++++++++
 .../aarch64/sve/cond_fmaxnm_7_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fmaxnm_8.c    | 26 +++++++++++++++++
 .../aarch64/sve/cond_fmaxnm_8_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fminnm_5.c    | 29 +++++++++++++++++++
 .../aarch64/sve/cond_fminnm_5_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fminnm_6.c    | 23 +++++++++++++++
 .../aarch64/sve/cond_fminnm_6_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fminnm_7.c    | 28 ++++++++++++++++++
 .../aarch64/sve/cond_fminnm_7_run.c           |  4 +++
 .../gcc.target/aarch64/sve/cond_fminnm_8.c    | 27 +++++++++++++++++
 .../aarch64/sve/cond_fminnm_8_run.c           |  4 +++
 22 files changed, 274 insertions(+), 1 deletion(-)
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7_run.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8.c
 create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8_run.c

diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md
index 5de479e141a..0f5bf5ea8cb 100644
--- a/gcc/config/aarch64/aarch64-sve.md
+++ b/gcc/config/aarch64/aarch64-sve.md
@@ -6287,7 +6287,7 @@ (define_expand "xorsign<mode>3"
 ;; -------------------------------------------------------------------------
 
 ;; Unpredicated fmax/fmin (the libm functions).  The optabs for the
-;; smin/smax rtx codes are handled in the generic section above.
+;; smax/smin rtx codes are handled in the generic section above.
 (define_expand "<fmaxmin><mode>3"
   [(set (match_operand:SVE_FULL_F 0 "register_operand")
	(unspec:SVE_FULL_F
@@ -6302,6 +6302,23 @@ (define_expand "<fmaxmin><mode>3"
   }
 )
 
+;; Predicated fmax/fmin (the libm functions).  The optabs for the
+;; smax/smin rtx codes are handled in the generic section above.
+(define_expand "cond_<fmaxmin><mode>"
+  [(set (match_operand:SVE_FULL_F 0 "register_operand")
+	(unspec:SVE_FULL_F
+	  [(match_operand:<VPRED> 1 "register_operand")
+	   (unspec:SVE_FULL_F
+	     [(match_dup 1)
+	      (const_int SVE_RELAXED_GP)
+	      (match_operand:SVE_FULL_F 2 "register_operand")
+	      (match_operand:SVE_FULL_F 3 "aarch64_sve_float_maxmin_operand")]
+	     SVE_COND_FP_MAXMIN_PUBLIC)
+	   (match_operand:SVE_FULL_F 4 "aarch64_simd_reg_or_zero")]
+	  UNSPEC_SEL))]
+  "TARGET_SVE"
+)
+
 ;; Predicated floating-point maximum/minimum.
 (define_insn "@aarch64_pred_<optab><mode>"
   [(set (match_operand:SVE_FULL_F 0 "register_operand" "=w, w, ?&w, ?&w")
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 41f1850bf6e..589f841ea74 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -6930,6 +6930,8 @@ operand 0, otherwise (operand 2 + operand 3) is moved.
 @cindex @code{cond_smax@var{mode}} instruction pattern
 @cindex @code{cond_umin@var{mode}} instruction pattern
 @cindex @code{cond_umax@var{mode}} instruction pattern
+@cindex @code{cond_fmin@var{mode}} instruction pattern
+@cindex @code{cond_fmax@var{mode}} instruction pattern
 @cindex @code{cond_ashl@var{mode}} instruction pattern
 @cindex @code{cond_ashr@var{mode}} instruction pattern
 @cindex @code{cond_lshr@var{mode}} instruction pattern
@@ -6947,6 +6949,8 @@ operand 0, otherwise (operand 2 + operand 3) is moved.
 @itemx @samp{cond_smax@var{mode}}
 @itemx @samp{cond_umin@var{mode}}
 @itemx @samp{cond_umax@var{mode}}
+@itemx @samp{cond_fmin@var{mode}}
+@itemx @samp{cond_fmax@var{mode}}
 @itemx @samp{cond_ashl@var{mode}}
 @itemx @samp{cond_ashr@var{mode}}
 @itemx @samp{cond_lshr@var{mode}}
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 0cba95411a6..da7d8355214 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -3840,6 +3840,8 @@ first_commutative_argument (internal_fn fn)
     case IFN_COND_MUL:
     case IFN_COND_MIN:
     case IFN_COND_MAX:
+    case IFN_COND_FMIN:
+    case IFN_COND_FMAX:
     case IFN_COND_AND:
     case IFN_COND_IOR:
     case IFN_COND_XOR:
@@ -3959,6 +3961,8 @@ conditional_internal_fn_code (internal_fn ifn)
 /* Invoke T(IFN) for each internal function IFN that also has an
    IFN_COND_* form.  */
 #define FOR_EACH_COND_FN_PAIR(T) \
+  T (FMAX) \
+  T (FMIN) \
   T (FMA) \
   T (FMS) \
   T (FNMA) \
diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def
index bb13c6cce1b..bb4d8ab8096 100644
--- a/gcc/internal-fn.def
+++ b/gcc/internal-fn.def
@@ -188,6 +188,8 @@ DEF_INTERNAL_SIGNED_OPTAB_FN (COND_MIN, ECF_CONST, first,
			      cond_smin, cond_umin, cond_binary)
 DEF_INTERNAL_SIGNED_OPTAB_FN (COND_MAX, ECF_CONST, first,
			      cond_smax, cond_umax, cond_binary)
+DEF_INTERNAL_OPTAB_FN (COND_FMIN, ECF_CONST, cond_fmin, cond_binary)
+DEF_INTERNAL_OPTAB_FN (COND_FMAX, ECF_CONST, cond_fmax, cond_binary)
 DEF_INTERNAL_OPTAB_FN (COND_AND, ECF_CONST | ECF_NOTHROW,
		       cond_and, cond_binary)
 DEF_INTERNAL_OPTAB_FN (COND_IOR, ECF_CONST | ECF_NOTHROW,
diff --git a/gcc/match.pd b/gcc/match.pd
index a319aefa808..f7884944571 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -90,12 +90,14 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
   plus minus
   mult trunc_div trunc_mod rdiv
   min max
+  IFN_FMIN IFN_FMAX
   bit_and bit_ior bit_xor
   lshift rshift)
 (define_operator_list COND_BINARY
   IFN_COND_ADD IFN_COND_SUB IFN_COND_MUL IFN_COND_DIV
   IFN_COND_MOD IFN_COND_RDIV
   IFN_COND_MIN IFN_COND_MAX
+  IFN_COND_FMIN IFN_COND_FMAX
   IFN_COND_AND IFN_COND_IOR IFN_COND_XOR
   IFN_COND_SHL IFN_COND_SHR)
diff --git a/gcc/optabs.def b/gcc/optabs.def
index b889ad2e5a0..e25f4c9a346 100644
--- a/gcc/optabs.def
+++ b/gcc/optabs.def
@@ -241,6 +241,8 @@ OPTAB_D (cond_smin_optab, "cond_smin$a")
 OPTAB_D (cond_smax_optab, "cond_smax$a")
 OPTAB_D (cond_umin_optab, "cond_umin$a")
 OPTAB_D (cond_umax_optab, "cond_umax$a")
+OPTAB_D (cond_fmin_optab, "cond_fmin$a")
+OPTAB_D (cond_fmax_optab, "cond_fmax$a")
 OPTAB_D (cond_fma_optab, "cond_fma$a")
 OPTAB_D (cond_fms_optab, "cond_fms$a")
 OPTAB_D (cond_fnma_optab, "cond_fnma$a")
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5.c
new file mode 100644
index 00000000000..4bae7e02de4
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_1.c"
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5_run.c
new file mode 100644
index 00000000000..1aa2eb4f537
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_5_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_1_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6.c
new file mode 100644
index 00000000000..912db00466e
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_2.c"
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6_run.c
new file mode 100644
index 00000000000..19f6eddb839
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_6_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_2_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7.c
new file mode 100644
index 00000000000..30f07f62ddb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_3.c"
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7_run.c
new file mode 100644
index 00000000000..3e647ed914f
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_7_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_3_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8.c
new file mode 100644
index 00000000000..a590d382b6a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8.c
@@ -0,0 +1,26 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_4.c"
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmaxnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.s, p[0-7]/z, z[0-9]+\.s\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.d, p[0-7]/z, z[0-9]+\.d\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8_run.c
new file mode 100644
index 00000000000..d421e54f996
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fmaxnm_8_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fmaxnm_4_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5.c
new file mode 100644
index 00000000000..290c4beac24
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5.c
@@ -0,0 +1,29 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#define FN(X) __builtin_fmin##X
+#include "cond_fmaxnm_1.c"
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5_run.c
new file mode 100644
index 00000000000..76baf6a96f5
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_5_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fminnm_1_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6.c
new file mode 100644
index 00000000000..cc9db999cbd
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6.c
@@ -0,0 +1,23 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#define FN(X) __builtin_fmin##X
+#include "cond_fmaxnm_2.c"
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6_run.c
new file mode 100644
index 00000000000..dbafea1ac6b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_6_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fminnm_2_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7.c
new file mode 100644
index 00000000000..347a1a3540b
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7.c
@@ -0,0 +1,28 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#define FN(X) __builtin_fmin##X
+#include "cond_fmaxnm_3.c"
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tsel\tz[0-9]+\.h, p[0-7], z[0-9]+\.h, z[0-9]+\.h\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmovprfx\t} } } */
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7_run.c
new file mode 100644
index 00000000000..6617095fea0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_7_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fminnm_3_run.c"
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8.c
new file mode 100644
index 00000000000..20d6cb505fe
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8.c
@@ -0,0 +1,27 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#define FN(X) __builtin_fmin##X
+#include "cond_fmaxnm_4.c"
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #0\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #0\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, #1\.0\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, #1\.0\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.h, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.s, #2\.0} 1 } } */
+/* { dg-final { scan-assembler-times {\tfmov\tz[0-9]+\.d, #2\.0} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.h, p[0-7]/m, z[0-9]+\.h, z[0-9]+\.h\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.s, p[0-7]/m, z[0-9]+\.s, z[0-9]+\.s\n} 1 } } */
+/* { dg-final { scan-assembler-times {\tfminnm\tz[0-9]+\.d, p[0-7]/m, z[0-9]+\.d, z[0-9]+\.d\n} 1 } } */
+
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.s, p[0-7]/z, z[0-9]+\.s\n} 3 } } */
+/* { dg-final { scan-assembler-times {\tmovprfx\tz[0-9]+\.d, p[0-7]/z, z[0-9]+\.d\n} 3 } } */
+
+/* { dg-final { scan-assembler-not {\tmov\tz} } } */
+/* { dg-final { scan-assembler-not {\tsel\t} } } */
diff --git a/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8_run.c b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8_run.c
new file mode 100644
index 00000000000..4fb649727d0
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/sve/cond_fminnm_8_run.c
@@ -0,0 +1,4 @@
+/* { dg-do run { target aarch64_sve_hw } } */
+/* { dg-options "-O2 -ftree-vectorize" } */
+
+#include "cond_fminnm_4_run.c"

From patchwork Wed Nov 10 12:44:47 2021
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 47400
From: Richard Sandiford <richard.sandiford@arm.com>
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 2/5] gimple-match: Add a gimple_extract_op function
Date: Wed, 10 Nov 2021 12:44:47 +0000

code_helper and gimple_match_op seem like generally useful ways of
summing up a gimple_assign or gimple_call (or gimple_cond).  This patch
adds a gimple_extract_op function that can be used for that.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard

gcc/
	* gimple-match.h (gimple_extract_op): Declare.
	* gimple-match.c (gimple_extract): New function, extracted from...
	(gimple_simplify): ...here.
	(gimple_extract_op): New function.
---
 gcc/gimple-match-head.c | 261 +++++++++++++++++++++++-----------------
 gcc/gimple-match.h      |   1 +
 2 files changed, 149 insertions(+), 113 deletions(-)

diff --git a/gcc/gimple-match-head.c b/gcc/gimple-match-head.c
index 9d88b2f8551..4c6e0883ba4 100644
--- a/gcc/gimple-match-head.c
+++ b/gcc/gimple-match-head.c
@@ -890,12 +890,29 @@ try_conditional_simplification (internal_fn ifn, gimple_match_op *res_op,
   return true;
 }
 
-/* The main STMT based simplification entry.  It is used by the fold_stmt
-   and the fold_stmt_to_constant APIs.  */
+/* Common subroutine of gimple_extract_op and gimple_simplify.  Try to
+   describe STMT in RES_OP.  Return:
 
-bool
-gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq,
-		 tree (*valueize)(tree), tree (*top_valueize)(tree))
+   - -1 if extraction failed
+   - otherwise, 0 if no simplification should take place
+   - otherwise, the number of operands for a GIMPLE_ASSIGN or GIMPLE_COND
+   - otherwise, -2 for a GIMPLE_CALL
+
+   Before recording an operand, call:
+
+   - VALUEIZE_CONDITION for a COND_EXPR condition
+   - VALUEIZE_NAME if the rhs of a GIMPLE_ASSIGN is an SSA_NAME
+   - VALUEIZE_OP for every other top-level operand
+
+   Each routine takes a tree argument and returns a tree.  */
+
+template<typename ValueizeOp, typename ValueizeCondition, typename ValueizeName>
+inline int
+gimple_extract (gimple *stmt, gimple_match_op *res_op,
+		ValueizeOp valueize_op,
+		ValueizeCondition valueize_condition,
+		ValueizeName valueize_name)
 {
   switch (gimple_code (stmt))
     {
@@ -911,100 +928,53 @@ gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq,
	      || code == VIEW_CONVERT_EXPR)
	    {
	      tree op0 = TREE_OPERAND (gimple_assign_rhs1 (stmt), 0);
-	      bool valueized = false;
-	      op0 = do_valueize (op0, top_valueize, valueized);
-	      res_op->set_op (code, type, op0);
-	      return (gimple_resimplify1 (seq, res_op, valueize)
-		      || valueized);
+	      res_op->set_op (code, type, valueize_op (op0));
+	      return 1;
	    }
	  else if (code == BIT_FIELD_REF)
	    {
	      tree rhs1 = gimple_assign_rhs1 (stmt);
-	      tree op0 = TREE_OPERAND (rhs1, 0);
-	      bool valueized = false;
-	      op0 = do_valueize (op0, top_valueize, valueized);
+	      tree op0 = valueize_op (TREE_OPERAND (rhs1, 0));
	      res_op->set_op (code, type, op0,
			      TREE_OPERAND (rhs1, 1),
			      TREE_OPERAND (rhs1, 2),
			      REF_REVERSE_STORAGE_ORDER (rhs1));
-	      if (res_op->reverse)
-		return valueized;
-	      return (gimple_resimplify3 (seq, res_op, valueize)
-		      || valueized);
+	      return res_op->reverse ? 0 : 3;
	    }
-	  else if (code == SSA_NAME
-		   && top_valueize)
+	  else if (code == SSA_NAME)
	    {
	      tree op0 = gimple_assign_rhs1 (stmt);
-	      tree valueized = top_valueize (op0);
+	      tree valueized = valueize_name (op0);
	      if (!valueized || op0 == valueized)
-		return false;
+		return -1;
	      res_op->set_op (TREE_CODE (op0), type, valueized);
-	      return true;
+	      return 0;
	    }
	  break;
	case GIMPLE_UNARY_RHS:
	  {
	    tree rhs1 = gimple_assign_rhs1 (stmt);
-	    bool valueized = false;
-	    rhs1 = do_valueize (rhs1, top_valueize, valueized);
-	    res_op->set_op (code, type, rhs1);
-	    return (gimple_resimplify1 (seq, res_op, valueize)
-		    || valueized);
+	    res_op->set_op (code, type, valueize_op (rhs1));
+	    return 1;
	  }
	case GIMPLE_BINARY_RHS:
	  {
-	    tree rhs1 = gimple_assign_rhs1 (stmt);
-	    tree rhs2 = gimple_assign_rhs2 (stmt);
-	    bool valueized = false;
-	    rhs1 = do_valueize (rhs1, top_valueize, valueized);
-	    rhs2 = do_valueize (rhs2, top_valueize, valueized);
+	    tree rhs1 = valueize_op (gimple_assign_rhs1 (stmt));
+	    tree rhs2 = valueize_op (gimple_assign_rhs2 (stmt));
	    res_op->set_op (code, type, rhs1, rhs2);
-	    return (gimple_resimplify2 (seq, res_op, valueize)
-		    || valueized);
+	    return 2;
	  }
	case GIMPLE_TERNARY_RHS:
	  {
-	    bool valueized = false;
	    tree rhs1 = gimple_assign_rhs1 (stmt);
	    /* If this is a COND_EXPR first try to simplify an
-	       embedded GENERIC condition.
*/ - if (code == COND_EXPR) - { - if (COMPARISON_CLASS_P (rhs1)) - { - tree lhs = TREE_OPERAND (rhs1, 0); - tree rhs = TREE_OPERAND (rhs1, 1); - lhs = do_valueize (lhs, top_valueize, valueized); - rhs = do_valueize (rhs, top_valueize, valueized); - gimple_match_op res_op2 (res_op->cond, TREE_CODE (rhs1), - TREE_TYPE (rhs1), lhs, rhs); - if ((gimple_resimplify2 (seq, &res_op2, valueize) - || valueized) - && res_op2.code.is_tree_code ()) - { - valueized = true; - if (TREE_CODE_CLASS ((enum tree_code) res_op2.code) - == tcc_comparison) - rhs1 = build2 (res_op2.code, TREE_TYPE (rhs1), - res_op2.ops[0], res_op2.ops[1]); - else if (res_op2.code == SSA_NAME - || res_op2.code == INTEGER_CST - || res_op2.code == VECTOR_CST) - rhs1 = res_op2.ops[0]; - else - valueized = false; - } - } - } - tree rhs2 = gimple_assign_rhs2 (stmt); - tree rhs3 = gimple_assign_rhs3 (stmt); - rhs1 = do_valueize (rhs1, top_valueize, valueized); - rhs2 = do_valueize (rhs2, top_valueize, valueized); - rhs3 = do_valueize (rhs3, top_valueize, valueized); + if (code == COND_EXPR && COMPARISON_CLASS_P (rhs1)) + rhs1 = valueize_condition (rhs1); + else + rhs1 = valueize_op (rhs1); + tree rhs2 = valueize_op (gimple_assign_rhs2 (stmt)); + tree rhs3 = valueize_op (gimple_assign_rhs3 (stmt)); res_op->set_op (code, type, rhs1, rhs2, rhs3); - return (gimple_resimplify3 (seq, res_op, valueize) - || valueized); + return 3; } default: gcc_unreachable (); @@ -1018,7 +988,6 @@ gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq, && gimple_call_num_args (stmt) >= 1 && gimple_call_num_args (stmt) <= 5) { - bool valueized = false; combined_fn cfn; if (gimple_call_internal_p (stmt)) cfn = as_combined_fn (gimple_call_internal_fn (stmt)); @@ -1026,17 +995,17 @@ gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq, { tree fn = gimple_call_fn (stmt); if (!fn) - return false; + return -1; - fn = do_valueize (fn, top_valueize, valueized); + fn = valueize_op (fn); if (TREE_CODE (fn) != 
ADDR_EXPR || TREE_CODE (TREE_OPERAND (fn, 0)) != FUNCTION_DECL) - return false; + return -1; tree decl = TREE_OPERAND (fn, 0); if (DECL_BUILT_IN_CLASS (decl) != BUILT_IN_NORMAL || !gimple_builtin_call_types_compatible_p (stmt, decl)) - return false; + return -1; cfn = as_combined_fn (DECL_FUNCTION_CODE (decl)); } @@ -1044,56 +1013,122 @@ gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq, unsigned int num_args = gimple_call_num_args (stmt); res_op->set_op (cfn, TREE_TYPE (gimple_call_lhs (stmt)), num_args); for (unsigned i = 0; i < num_args; ++i) - { - tree arg = gimple_call_arg (stmt, i); - res_op->ops[i] = do_valueize (arg, top_valueize, valueized); - } - if (internal_fn_p (cfn) - && try_conditional_simplification (as_internal_fn (cfn), - res_op, seq, valueize)) - return true; - switch (num_args) - { - case 1: - return (gimple_resimplify1 (seq, res_op, valueize) - || valueized); - case 2: - return (gimple_resimplify2 (seq, res_op, valueize) - || valueized); - case 3: - return (gimple_resimplify3 (seq, res_op, valueize) - || valueized); - case 4: - return (gimple_resimplify4 (seq, res_op, valueize) - || valueized); - case 5: - return (gimple_resimplify5 (seq, res_op, valueize) - || valueized); - default: - gcc_unreachable (); - } + res_op->ops[i] = valueize_op (gimple_call_arg (stmt, i)); + return -2; } break; case GIMPLE_COND: { - tree lhs = gimple_cond_lhs (stmt); - tree rhs = gimple_cond_rhs (stmt); - bool valueized = false; - lhs = do_valueize (lhs, top_valueize, valueized); - rhs = do_valueize (rhs, top_valueize, valueized); + tree lhs = valueize_op (gimple_cond_lhs (stmt)); + tree rhs = valueize_op (gimple_cond_rhs (stmt)); res_op->set_op (gimple_cond_code (stmt), boolean_type_node, lhs, rhs); - return (gimple_resimplify2 (seq, res_op, valueize) - || valueized); + return 2; } default: break; } - return false; + return -1; } +/* Try to describe STMT in RES_OP, returning true on success. 
+ For GIMPLE_CONDs, describe the condition that is being tested. + For GIMPLE_ASSIGNs, describe the rhs of the assignment. + For GIMPLE_CALLs, describe the call. */ + +bool +gimple_extract_op (gimple *stmt, gimple_match_op *res_op) +{ + auto nop = [](tree op) { return op; }; + return gimple_extract (stmt, res_op, nop, nop, nop) != -1; +} + +/* The main STMT based simplification entry. It is used by the fold_stmt + and the fold_stmt_to_constant APIs. */ + +bool +gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq, + tree (*valueize)(tree), tree (*top_valueize)(tree)) +{ + bool valueized = false; + auto valueize_op = [&](tree op) + { + return do_valueize (op, top_valueize, valueized); + }; + auto valueize_condition = [&](tree op) -> tree + { + bool cond_valueized = false; + tree lhs = do_valueize (TREE_OPERAND (op, 0), top_valueize, + cond_valueized); + tree rhs = do_valueize (TREE_OPERAND (op, 1), top_valueize, + cond_valueized); + gimple_match_op res_op2 (res_op->cond, TREE_CODE (op), + TREE_TYPE (op), lhs, rhs); + if ((gimple_resimplify2 (seq, &res_op2, valueize) + || cond_valueized) + && res_op2.code.is_tree_code ()) + { + if (TREE_CODE_CLASS ((tree_code) res_op2.code) == tcc_comparison) + { + valueized = true; + return build2 (res_op2.code, TREE_TYPE (op), + res_op2.ops[0], res_op2.ops[1]); + } + else if (res_op2.code == SSA_NAME + || res_op2.code == INTEGER_CST + || res_op2.code == VECTOR_CST) + { + valueized = true; + return res_op2.ops[0]; + } + } + return valueize_op (op); + }; + auto valueize_name = [&](tree op) + { + return top_valueize ? 
top_valueize (op) : op; + }; + + int res = gimple_extract (stmt, res_op, valueize_op, valueize_condition, + valueize_name); + if (res == -1) + return false; + + if (res == -2) + { + combined_fn cfn = combined_fn (res_op->code); + if (internal_fn_p (cfn) + && try_conditional_simplification (as_internal_fn (cfn), + res_op, seq, valueize)) + return true; + res = res_op->num_ops; + } + + switch (res) + { + case 0: + return valueized; + case 1: + return (gimple_resimplify1 (seq, res_op, valueize) + || valueized); + case 2: + return (gimple_resimplify2 (seq, res_op, valueize) + || valueized); + case 3: + return (gimple_resimplify3 (seq, res_op, valueize) + || valueized); + case 4: + return (gimple_resimplify4 (seq, res_op, valueize) + || valueized); + case 5: + return (gimple_resimplify5 (seq, res_op, valueize) + || valueized); + default: + gcc_unreachable (); + } +} /* Helper for the autogenerated code, valueize OP. */ diff --git a/gcc/gimple-match.h b/gcc/gimple-match.h index 2d4ea476076..15a0f584db7 100644 --- a/gcc/gimple-match.h +++ b/gcc/gimple-match.h @@ -333,6 +333,7 @@ gimple_simplified_result_is_gimple_val (const gimple_match_op *op) extern tree (*mprts_hook) (gimple_match_op *); +bool gimple_extract_op (gimple *, gimple_match_op *); bool gimple_simplify (gimple *, gimple_match_op *, gimple_seq *, tree (*)(tree), tree (*)(tree)); tree maybe_push_res_to_seq (gimple_match_op *, gimple_seq *, From patchwork Wed Nov 10 12:45:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Sandiford X-Patchwork-Id: 47401 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id BB63E3857C76 for ; Wed, 10 Nov 2021 12:46:54 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org BB63E3857C76 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; 
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com
Subject: [PATCH 3/5] gimple-match: Make code_helper conversions explicit
Date: Wed, 10 Nov 2021 12:45:34 +0000
From: Richard Sandiford

code_helper provides conversions to tree_code
and combined_fn. Now that the codebase is C++11, we can mark these conversions as explicit. This avoids accidentally using code_helpers with functions that take tree_codes, which would previously entail a hidden unchecked conversion. Tested on aarch64-linux-gnu and x86_64-linux-gnu. OK to install? Richard gcc/ * gimple-match.h (code_helper): Provide == and != overloads. (code_helper::operator tree_code): Make explicit. (code_helper::operator combined_fn): Likewise. * gimple-match-head.c (convert_conditional_op): Use explicit conversions where necessary. (gimple_resimplify1, gimple_resimplify2, gimple_resimplify3): Likewise. (maybe_push_res_to_seq, gimple_simplify): Likewise. * gimple-fold.c (replace_stmt_with_simplification): Likewise. --- gcc/gimple-fold.c | 18 ++++++++------- gcc/gimple-match-head.c | 51 ++++++++++++++++++++++------------------- gcc/gimple-match.h | 9 ++++++-- 3 files changed, 45 insertions(+), 33 deletions(-) diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c index 6e25a7c05db..9daf2cc590c 100644 --- a/gcc/gimple-fold.c +++ b/gcc/gimple-fold.c @@ -5828,18 +5828,19 @@ replace_stmt_with_simplification (gimple_stmt_iterator *gsi, if (gcond *cond_stmt = dyn_cast (stmt)) { gcc_assert (res_op->code.is_tree_code ()); - if (TREE_CODE_CLASS ((enum tree_code) res_op->code) == tcc_comparison + auto code = tree_code (res_op->code); + if (TREE_CODE_CLASS (code) == tcc_comparison /* GIMPLE_CONDs condition may not throw. 
*/ && (!flag_exceptions || !cfun->can_throw_non_call_exceptions - || !operation_could_trap_p (res_op->code, + || !operation_could_trap_p (code, FLOAT_TYPE_P (TREE_TYPE (ops[0])), false, NULL_TREE))) - gimple_cond_set_condition (cond_stmt, res_op->code, ops[0], ops[1]); - else if (res_op->code == SSA_NAME) + gimple_cond_set_condition (cond_stmt, code, ops[0], ops[1]); + else if (code == SSA_NAME) gimple_cond_set_condition (cond_stmt, NE_EXPR, ops[0], build_zero_cst (TREE_TYPE (ops[0]))); - else if (res_op->code == INTEGER_CST) + else if (code == INTEGER_CST) { if (integer_zerop (ops[0])) gimple_cond_make_false (cond_stmt); @@ -5870,11 +5871,12 @@ replace_stmt_with_simplification (gimple_stmt_iterator *gsi, else if (is_gimple_assign (stmt) && res_op->code.is_tree_code ()) { + auto code = tree_code (res_op->code); if (!inplace - || gimple_num_ops (stmt) > get_gimple_rhs_num_ops (res_op->code)) + || gimple_num_ops (stmt) > get_gimple_rhs_num_ops (code)) { maybe_build_generic_op (res_op); - gimple_assign_set_rhs_with_ops (gsi, res_op->code, + gimple_assign_set_rhs_with_ops (gsi, code, res_op->op_or_null (0), res_op->op_or_null (1), res_op->op_or_null (2)); @@ -5891,7 +5893,7 @@ replace_stmt_with_simplification (gimple_stmt_iterator *gsi, } } else if (res_op->code.is_fn_code () - && gimple_call_combined_fn (stmt) == res_op->code) + && gimple_call_combined_fn (stmt) == combined_fn (res_op->code)) { gcc_assert (num_ops == gimple_call_num_args (stmt)); for (unsigned int i = 0; i < num_ops; ++i) diff --git a/gcc/gimple-match-head.c b/gcc/gimple-match-head.c index 4c6e0883ba4..d4d7d767075 100644 --- a/gcc/gimple-match-head.c +++ b/gcc/gimple-match-head.c @@ -96,7 +96,7 @@ convert_conditional_op (gimple_match_op *orig_op, ifn = get_conditional_internal_fn ((tree_code) orig_op->code); else { - combined_fn cfn = orig_op->code; + auto cfn = combined_fn (orig_op->code); if (!internal_fn_p (cfn)) return false; ifn = get_conditional_internal_fn (as_internal_fn (cfn)); @@ -206,10 
+206,10 @@ gimple_resimplify1 (gimple_seq *seq, gimple_match_op *res_op, tree tem = NULL_TREE; if (res_op->code.is_tree_code ()) { - tree_code code = res_op->code; + auto code = tree_code (res_op->code); if (IS_EXPR_CODE_CLASS (TREE_CODE_CLASS (code)) && TREE_CODE_LENGTH (code) == 1) - tem = const_unop (res_op->code, res_op->type, res_op->ops[0]); + tem = const_unop (code, res_op->type, res_op->ops[0]); } else tem = fold_const_call (combined_fn (res_op->code), res_op->type, @@ -272,10 +272,10 @@ gimple_resimplify2 (gimple_seq *seq, gimple_match_op *res_op, tree tem = NULL_TREE; if (res_op->code.is_tree_code ()) { - tree_code code = res_op->code; + auto code = tree_code (res_op->code); if (IS_EXPR_CODE_CLASS (TREE_CODE_CLASS (code)) && TREE_CODE_LENGTH (code) == 2) - tem = const_binop (res_op->code, res_op->type, + tem = const_binop (code, res_op->type, res_op->ops[0], res_op->ops[1]); } else @@ -294,15 +294,18 @@ gimple_resimplify2 (gimple_seq *seq, gimple_match_op *res_op, /* Canonicalize operand order. */ bool canonicalized = false; - if (res_op->code.is_tree_code () - && (TREE_CODE_CLASS ((enum tree_code) res_op->code) == tcc_comparison - || commutative_tree_code (res_op->code)) - && tree_swap_operands_p (res_op->ops[0], res_op->ops[1])) + if (res_op->code.is_tree_code ()) { - std::swap (res_op->ops[0], res_op->ops[1]); - if (TREE_CODE_CLASS ((enum tree_code) res_op->code) == tcc_comparison) - res_op->code = swap_tree_comparison (res_op->code); - canonicalized = true; + auto code = tree_code (res_op->code); + if ((TREE_CODE_CLASS (code) == tcc_comparison + || commutative_tree_code (code)) + && tree_swap_operands_p (res_op->ops[0], res_op->ops[1])) + { + std::swap (res_op->ops[0], res_op->ops[1]); + if (TREE_CODE_CLASS (code) == tcc_comparison) + res_op->code = swap_tree_comparison (code); + canonicalized = true; + } } /* Limit recursion, see gimple_resimplify1. 
*/ @@ -350,10 +353,10 @@ gimple_resimplify3 (gimple_seq *seq, gimple_match_op *res_op, tree tem = NULL_TREE; if (res_op->code.is_tree_code ()) { - tree_code code = res_op->code; + auto code = tree_code (res_op->code); if (IS_EXPR_CODE_CLASS (TREE_CODE_CLASS (code)) && TREE_CODE_LENGTH (code) == 3) - tem = fold_ternary/*_to_constant*/ (res_op->code, res_op->type, + tem = fold_ternary/*_to_constant*/ (code, res_op->type, res_op->ops[0], res_op->ops[1], res_op->ops[2]); } @@ -374,7 +377,7 @@ gimple_resimplify3 (gimple_seq *seq, gimple_match_op *res_op, /* Canonicalize operand order. */ bool canonicalized = false; if (res_op->code.is_tree_code () - && commutative_ternary_tree_code (res_op->code) + && commutative_ternary_tree_code (tree_code (res_op->code)) && tree_swap_operands_p (res_op->ops[0], res_op->ops[1])) { std::swap (res_op->ops[0], res_op->ops[1]); @@ -599,6 +602,7 @@ maybe_push_res_to_seq (gimple_match_op *res_op, gimple_seq *seq, tree res) if (res_op->code.is_tree_code ()) { + auto code = tree_code (res_op->code); if (!res) { if (gimple_in_ssa_p (cfun)) @@ -607,7 +611,7 @@ maybe_push_res_to_seq (gimple_match_op *res_op, gimple_seq *seq, tree res) res = create_tmp_reg (res_op->type); } maybe_build_generic_op (res_op); - gimple *new_stmt = gimple_build_assign (res, res_op->code, + gimple *new_stmt = gimple_build_assign (res, code, res_op->op_or_null (0), res_op->op_or_null (1), res_op->op_or_null (2)); @@ -617,7 +621,7 @@ maybe_push_res_to_seq (gimple_match_op *res_op, gimple_seq *seq, tree res) else { gcc_assert (num_ops != 0); - combined_fn fn = res_op->code; + auto fn = combined_fn (res_op->code); gcall *new_stmt = NULL; if (internal_fn_p (fn)) { @@ -1070,15 +1074,16 @@ gimple_simplify (gimple *stmt, gimple_match_op *res_op, gimple_seq *seq, || cond_valueized) && res_op2.code.is_tree_code ()) { - if (TREE_CODE_CLASS ((tree_code) res_op2.code) == tcc_comparison) + auto code = tree_code (res_op2.code); + if (TREE_CODE_CLASS (code) == tcc_comparison) { 
valueized = true; - return build2 (res_op2.code, TREE_TYPE (op), + return build2 (code, TREE_TYPE (op), res_op2.ops[0], res_op2.ops[1]); } - else if (res_op2.code == SSA_NAME - || res_op2.code == INTEGER_CST - || res_op2.code == VECTOR_CST) + else if (code == SSA_NAME + || code == INTEGER_CST + || code == VECTOR_CST) { valueized = true; return res_op2.ops[0]; diff --git a/gcc/gimple-match.h b/gcc/gimple-match.h index 15a0f584db7..1b9dc3851c2 100644 --- a/gcc/gimple-match.h +++ b/gcc/gimple-match.h @@ -31,11 +31,16 @@ public: code_helper () {} code_helper (tree_code code) : rep ((int) code) {} code_helper (combined_fn fn) : rep (-(int) fn) {} - operator tree_code () const { return (tree_code) rep; } - operator combined_fn () const { return (combined_fn) -rep; } + explicit operator tree_code () const { return (tree_code) rep; } + explicit operator combined_fn () const { return (combined_fn) -rep; } bool is_tree_code () const { return rep > 0; } bool is_fn_code () const { return rep < 0; } int get_rep () const { return rep; } + bool operator== (const code_helper &other) { return rep == other.rep; } + bool operator!= (const code_helper &other) { return rep != other.rep; } + bool operator== (tree_code c) { return rep == code_helper (c).rep; } + bool operator!= (tree_code c) { return rep != code_helper (c).rep; } + private: int rep; }; From patchwork Wed Nov 10 12:46:23 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Richard Sandiford X-Patchwork-Id: 47402 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 6FD8F3857817 for ; Wed, 10 Nov 2021 12:47:52 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 6FD8F3857817 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gcc.gnu.org; s=default; t=1636548472; 
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, richard.sandiford@arm.com
Subject: [PATCH 4/5] vect: Make reduction code handle calls
Date: Wed, 10 Nov 2021 12:46:23 +0000
From: Richard Sandiford

This patch extends the reduction code to handle calls.
So far it's a structural change only; a later patch adds support for
specific function reductions.  Most of the patch consists of using
code_helper and gimple_match_op to describe the reduction operations.
The other main change is that vectorizable_call now needs to handle
fully-predicated reductions.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard

gcc/
	* builtins.h (associated_internal_fn): Declare overload that takes
	a (combined_fn, return type) pair.
	* builtins.c (associated_internal_fn): Split new overload out of
	original fndecl version.  Also provide an overload that takes
	a (combined_fn, return type) pair.
	* internal-fn.h (commutative_binary_fn_p): Declare.
	(associative_binary_fn_p): Likewise.
	* internal-fn.c (commutative_binary_fn_p): New function, split out
	from...
	(first_commutative_argument): ...here.
	(associative_binary_fn_p): New function.
	* gimple-match.h (code_helper): Add a constructor that takes
	internal functions.
	(commutative_binary_op_p): Declare.
	(associative_binary_op_p): Likewise.
	(canonicalize_code): Likewise.
	(directly_supported_p): Likewise.
	(get_conditional_internal_fn): Likewise.
	(gimple_build): New overload that takes a code_helper.
	* gimple-fold.c (gimple_build): Likewise.
	* gimple-match-head.c (commutative_binary_op_p): New function.
	(associative_binary_op_p): Likewise.
	(canonicalize_code): Likewise.
	(directly_supported_p): Likewise.
	(get_conditional_internal_fn): Likewise.
	* tree-vectorizer.h: Include gimple-match.h.
	(neutral_op_for_reduction): Take a code_helper instead of
	a tree_code.
	(needs_fold_left_reduction_p): Likewise.
	(reduction_fn_for_scalar_code): Likewise.
	(vect_can_vectorize_without_simd_p): Declare a new overload that
	takes a code_helper.
	* tree-vect-loop.c: Include case-cfn-macros.h.
	(fold_left_reduction_fn): Take a code_helper instead of a tree_code.
	(reduction_fn_for_scalar_code): Likewise.
	(neutral_op_for_reduction): Likewise.
	(needs_fold_left_reduction_p): Likewise.
(use_mask_by_cond_expr_p): Likewise. (build_vect_cond_expr): Likewise. (vect_create_partial_epilog): Likewise. Use gimple_build rather than gimple_build_assign. (check_reduction_path): Handle calls and operate on code_helpers rather than tree_codes. (vect_is_simple_reduction): Likewise. (vect_model_reduction_cost): Likewise. (vect_find_reusable_accumulator): Likewise. (vect_create_epilog_for_reduction): Likewise. (vect_transform_cycle_phi): Likewise. (vectorizable_reduction): Likewise. Make more use of lane_reduc_code_p. (vect_transform_reduction): Use gimple_extract_op but expect a tree_code for now. (vect_can_vectorize_without_simd_p): New overload that takes a code_helper. * tree-vect-stmts.c (vectorizable_call): Handle reductions in fully-masked loops. * tree-vect-patterns.c (vect_mark_pattern_stmts): Use gimple_extract_op when updating STMT_VINFO_REDUC_IDX. --- gcc/builtins.c | 46 ++++- gcc/builtins.h | 1 + gcc/gimple-fold.c | 9 + gcc/gimple-match-head.c | 70 +++++++ gcc/gimple-match.h | 20 ++ gcc/internal-fn.c | 46 ++++- gcc/internal-fn.h | 2 + gcc/tree-vect-loop.c | 420 +++++++++++++++++++-------------------- gcc/tree-vect-patterns.c | 23 ++- gcc/tree-vect-stmts.c | 66 ++++-- gcc/tree-vectorizer.h | 10 +- 11 files changed, 455 insertions(+), 258 deletions(-) diff --git a/gcc/builtins.c b/gcc/builtins.c index 384864bfb3a..03829c03a5a 100644 --- a/gcc/builtins.c +++ b/gcc/builtins.c @@ -2139,17 +2139,17 @@ mathfn_built_in_type (combined_fn fn) #undef SEQ_OF_CASE_MATHFN } -/* If BUILT_IN_NORMAL function FNDECL has an associated internal function, - return its code, otherwise return IFN_LAST. Note that this function - only tests whether the function is defined in internals.def, not whether - it is actually available on the target. */ +/* Check whether there is an internal function associated with function FN + and return type RETURN_TYPE. Return the function if so, otherwise return + IFN_LAST. 
-internal_fn -associated_internal_fn (tree fndecl) + Note that this function only tests whether the function is defined in + internals.def, not whether it is actually available on the target. */ + +static internal_fn +associated_internal_fn (built_in_function fn, tree return_type) { - gcc_checking_assert (DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL); - tree return_type = TREE_TYPE (TREE_TYPE (fndecl)); - switch (DECL_FUNCTION_CODE (fndecl)) + switch (fn) { #define DEF_INTERNAL_FLT_FN(NAME, FLAGS, OPTAB, TYPE) \ CASE_FLT_FN (BUILT_IN_##NAME): return IFN_##NAME; @@ -2177,6 +2177,34 @@ associated_internal_fn (tree fndecl) } } +/* If BUILT_IN_NORMAL function FNDECL has an associated internal function, + return its code, otherwise return IFN_LAST. Note that this function + only tests whether the function is defined in internals.def, not whether + it is actually available on the target. */ + +internal_fn +associated_internal_fn (tree fndecl) +{ + gcc_checking_assert (DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL); + return associated_internal_fn (DECL_FUNCTION_CODE (fndecl), + TREE_TYPE (TREE_TYPE (fndecl))); +} + +/* Check whether there is an internal function associated with function CFN + and return type RETURN_TYPE. Return the function if so, otherwise return + IFN_LAST. + + Note that this function only tests whether the function is defined in + internals.def, not whether it is actually available on the target. */ + +internal_fn +associated_internal_fn (combined_fn cfn, tree return_type) +{ + if (internal_fn_p (cfn)) + return as_internal_fn (cfn); + return associated_internal_fn (as_builtin_fn (cfn), return_type); +} + /* If CALL is a call to a BUILT_IN_NORMAL function that could be replaced on the current target by a call to an internal function, return the code of that internal function, otherwise return IFN_LAST. 
The caller diff --git a/gcc/builtins.h b/gcc/builtins.h index 5e4d86e9c37..c99670b12f1 100644 --- a/gcc/builtins.h +++ b/gcc/builtins.h @@ -148,6 +148,7 @@ extern char target_percent_s_newline[4]; extern bool target_char_cst_p (tree t, char *p); extern rtx get_memory_rtx (tree exp, tree len); +extern internal_fn associated_internal_fn (combined_fn, tree); extern internal_fn associated_internal_fn (tree); extern internal_fn replacement_internal_fn (gcall *); diff --git a/gcc/gimple-fold.c b/gcc/gimple-fold.c index 9daf2cc590c..a937f130815 100644 --- a/gcc/gimple-fold.c +++ b/gcc/gimple-fold.c @@ -8808,6 +8808,15 @@ gimple_build (gimple_seq *seq, location_t loc, combined_fn fn, return res; } +tree +gimple_build (gimple_seq *seq, location_t loc, code_helper code, + tree type, tree op0, tree op1) +{ + if (code.is_tree_code ()) + return gimple_build (seq, loc, tree_code (code), type, op0, op1); + return gimple_build (seq, loc, combined_fn (code), type, op0, op1); +} + /* Build the conversion (TYPE) OP with a result of type TYPE with location LOC if such conversion is neccesary in GIMPLE, simplifying it first. diff --git a/gcc/gimple-match-head.c b/gcc/gimple-match-head.c index d4d7d767075..4558a3db5fc 100644 --- a/gcc/gimple-match-head.c +++ b/gcc/gimple-match-head.c @@ -1304,3 +1304,73 @@ optimize_successive_divisions_p (tree divisor, tree inner_div) } return true; } + +/* If CODE, operating on TYPE, represents a built-in function that has an + associated internal function, return the associated internal function, + otherwise return CODE. This function does not check whether the + internal function is supported, only that it exists. */ + +code_helper +canonicalize_code (code_helper code, tree type) +{ + if (code.is_fn_code ()) + return associated_internal_fn (combined_fn (code), type); + return code; +} + +/* Return true if CODE is a binary operation that is commutative when + operating on type TYPE. 
*/ + +bool +commutative_binary_op_p (code_helper code, tree type) +{ + if (code.is_tree_code ()) + return commutative_tree_code (tree_code (code)); + auto cfn = combined_fn (code); + return commutative_binary_fn_p (associated_internal_fn (cfn, type)); +} + +/* Return true if CODE is a binary operation that is associative when + operating on type TYPE. */ + +bool +associative_binary_op_p (code_helper code, tree type) +{ + if (code.is_tree_code ()) + return associative_tree_code (tree_code (code)); + auto cfn = combined_fn (code); + return associative_binary_fn_p (associated_internal_fn (cfn, type)); +} + +/* Return true if the target directly supports operation CODE on type TYPE. + QUERY_TYPE acts as for optab_for_tree_code. */ + +bool +directly_supported_p (code_helper code, tree type, optab_subtype query_type) +{ + if (code.is_tree_code ()) + { + direct_optab optab = optab_for_tree_code (tree_code (code), type, + query_type); + return (optab != unknown_optab + && optab_handler (optab, TYPE_MODE (type)) != CODE_FOR_nothing); + } + gcc_assert (query_type == optab_default + || (query_type == optab_vector && VECTOR_TYPE_P (type)) + || (query_type == optab_scalar && !VECTOR_TYPE_P (type))); + internal_fn ifn = associated_internal_fn (combined_fn (code), type); + return (direct_internal_fn_p (ifn) + && direct_internal_fn_supported_p (ifn, type, OPTIMIZE_FOR_SPEED)); +} + +/* A wrapper around the internal-fn.c versions of get_conditional_internal_fn + for a code_helper CODE operating on type TYPE. 
*/ + +internal_fn +get_conditional_internal_fn (code_helper code, tree type) +{ + if (code.is_tree_code ()) + return get_conditional_internal_fn (tree_code (code)); + auto cfn = combined_fn (code); + return get_conditional_internal_fn (associated_internal_fn (cfn, type)); +} diff --git a/gcc/gimple-match.h b/gcc/gimple-match.h index 1b9dc3851c2..6d24a8a2378 100644 --- a/gcc/gimple-match.h +++ b/gcc/gimple-match.h @@ -31,6 +31,7 @@ public: code_helper () {} code_helper (tree_code code) : rep ((int) code) {} code_helper (combined_fn fn) : rep (-(int) fn) {} + code_helper (internal_fn fn) : rep (-(int) as_combined_fn (fn)) {} explicit operator tree_code () const { return (tree_code) rep; } explicit operator combined_fn () const { return (combined_fn) -rep; } bool is_tree_code () const { return rep > 0; } @@ -346,4 +347,23 @@ tree maybe_push_res_to_seq (gimple_match_op *, gimple_seq *, void maybe_build_generic_op (gimple_match_op *); +bool commutative_binary_op_p (code_helper, tree); +bool associative_binary_op_p (code_helper, tree); +code_helper canonicalize_code (code_helper, tree); + +#ifdef GCC_OPTABS_TREE_H +bool directly_supported_p (code_helper, tree, optab_subtype = optab_default); +#endif + +internal_fn get_conditional_internal_fn (code_helper, tree); + +extern tree gimple_build (gimple_seq *, location_t, + code_helper, tree, tree, tree); +inline tree +gimple_build (gimple_seq *seq, code_helper code, tree type, tree op0, + tree op1) +{ + return gimple_build (seq, UNKNOWN_LOCATION, code, type, op0, op1); +} + #endif /* GCC_GIMPLE_MATCH_H */ diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c index da7d8355214..7b13db6dfe3 100644 --- a/gcc/internal-fn.c +++ b/gcc/internal-fn.c @@ -3815,6 +3815,43 @@ direct_internal_fn_supported_p (gcall *stmt, optimization_type opt_type) return direct_internal_fn_supported_p (fn, types, opt_type); } +/* Return true if FN is a commutative binary operation. 
*/ + +bool +commutative_binary_fn_p (internal_fn fn) +{ + switch (fn) + { + case IFN_AVG_FLOOR: + case IFN_AVG_CEIL: + case IFN_MULH: + case IFN_MULHS: + case IFN_MULHRS: + case IFN_FMIN: + case IFN_FMAX: + return true; + + default: + return false; + } +} + +/* Return true if FN is an associative binary operation. */ + +bool +associative_binary_fn_p (internal_fn fn) +{ + switch (fn) + { + case IFN_FMIN: + case IFN_FMAX: + return true; + + default: + return false; + } +} + /* If FN is commutative in two consecutive arguments, return the index of the first, otherwise return -1. */ @@ -3827,13 +3864,6 @@ first_commutative_argument (internal_fn fn) case IFN_FMS: case IFN_FNMA: case IFN_FNMS: - case IFN_AVG_FLOOR: - case IFN_AVG_CEIL: - case IFN_MULH: - case IFN_MULHS: - case IFN_MULHRS: - case IFN_FMIN: - case IFN_FMAX: return 0; case IFN_COND_ADD: @@ -3852,7 +3882,7 @@ first_commutative_argument (internal_fn fn) return 1; default: - return -1; + return commutative_binary_fn_p (fn) ? 0 : -1; } } diff --git a/gcc/internal-fn.h b/gcc/internal-fn.h index 19d0f849a5a..82ef4b0d792 100644 --- a/gcc/internal-fn.h +++ b/gcc/internal-fn.h @@ -206,6 +206,8 @@ direct_internal_fn_supported_p (internal_fn fn, tree type0, tree type1, opt_type); } +extern bool commutative_binary_fn_p (internal_fn); +extern bool associative_binary_fn_p (internal_fn); extern int first_commutative_argument (internal_fn); extern bool set_edom_supported_p (void); diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c index 1cd5dbcb6f7..cae895a88f2 100644 --- a/gcc/tree-vect-loop.c +++ b/gcc/tree-vect-loop.c @@ -54,6 +54,7 @@ along with GCC; see the file COPYING3. If not see #include "tree-vector-builder.h" #include "vec-perm-indices.h" #include "tree-eh.h" +#include "case-cfn-macros.h" /* Loop Vectorization Pass. @@ -3125,17 +3126,14 @@ vect_analyze_loop (class loop *loop, vec_info_shared *shared) it in *REDUC_FN if so. 
*/ static bool -fold_left_reduction_fn (tree_code code, internal_fn *reduc_fn) +fold_left_reduction_fn (code_helper code, internal_fn *reduc_fn) { - switch (code) + if (code == PLUS_EXPR) { - case PLUS_EXPR: *reduc_fn = IFN_FOLD_LEFT_PLUS; return true; - - default: - return false; } + return false; } /* Function reduction_fn_for_scalar_code @@ -3152,21 +3150,22 @@ fold_left_reduction_fn (tree_code code, internal_fn *reduc_fn) Return FALSE if CODE currently cannot be vectorized as reduction. */ bool -reduction_fn_for_scalar_code (enum tree_code code, internal_fn *reduc_fn) +reduction_fn_for_scalar_code (code_helper code, internal_fn *reduc_fn) { - switch (code) - { + if (code.is_tree_code ()) + switch (tree_code (code)) + { case MAX_EXPR: - *reduc_fn = IFN_REDUC_MAX; - return true; + *reduc_fn = IFN_REDUC_MAX; + return true; case MIN_EXPR: - *reduc_fn = IFN_REDUC_MIN; - return true; + *reduc_fn = IFN_REDUC_MIN; + return true; case PLUS_EXPR: - *reduc_fn = IFN_REDUC_PLUS; - return true; + *reduc_fn = IFN_REDUC_PLUS; + return true; case BIT_AND_EXPR: *reduc_fn = IFN_REDUC_AND; @@ -3182,12 +3181,13 @@ reduction_fn_for_scalar_code (enum tree_code code, internal_fn *reduc_fn) case MULT_EXPR: case MINUS_EXPR: - *reduc_fn = IFN_LAST; - return true; + *reduc_fn = IFN_LAST; + return true; default: - return false; + break; } + return false; } /* If there is a neutral value X such that a reduction would not be affected @@ -3197,32 +3197,35 @@ reduction_fn_for_scalar_code (enum tree_code code, internal_fn *reduc_fn) then INITIAL_VALUE is that value, otherwise it is null. 
*/ tree -neutral_op_for_reduction (tree scalar_type, tree_code code, tree initial_value) +neutral_op_for_reduction (tree scalar_type, code_helper code, + tree initial_value) { - switch (code) - { - case WIDEN_SUM_EXPR: - case DOT_PROD_EXPR: - case SAD_EXPR: - case PLUS_EXPR: - case MINUS_EXPR: - case BIT_IOR_EXPR: - case BIT_XOR_EXPR: - return build_zero_cst (scalar_type); + if (code.is_tree_code ()) + switch (tree_code (code)) + { + case WIDEN_SUM_EXPR: + case DOT_PROD_EXPR: + case SAD_EXPR: + case PLUS_EXPR: + case MINUS_EXPR: + case BIT_IOR_EXPR: + case BIT_XOR_EXPR: + return build_zero_cst (scalar_type); - case MULT_EXPR: - return build_one_cst (scalar_type); + case MULT_EXPR: + return build_one_cst (scalar_type); - case BIT_AND_EXPR: - return build_all_ones_cst (scalar_type); + case BIT_AND_EXPR: + return build_all_ones_cst (scalar_type); - case MAX_EXPR: - case MIN_EXPR: - return initial_value; + case MAX_EXPR: + case MIN_EXPR: + return initial_value; - default: - return NULL_TREE; - } + default: + break; + } + return NULL_TREE; } /* Error reporting helper for vect_is_simple_reduction below. GIMPLE statement @@ -3239,26 +3242,27 @@ report_vect_op (dump_flags_t msg_type, gimple *stmt, const char *msg) overflow must wrap. */ bool -needs_fold_left_reduction_p (tree type, tree_code code) +needs_fold_left_reduction_p (tree type, code_helper code) { /* CHECKME: check for !flag_finite_math_only too? 
*/ if (SCALAR_FLOAT_TYPE_P (type)) - switch (code) - { - case MIN_EXPR: - case MAX_EXPR: - return false; + { + if (code.is_tree_code ()) + switch (tree_code (code)) + { + case MIN_EXPR: + case MAX_EXPR: + return false; - default: - return !flag_associative_math; - } + default: + break; + } + return !flag_associative_math; + } if (INTEGRAL_TYPE_P (type)) - { - if (!operation_no_trapping_overflow (type, code)) - return true; - return false; - } + return (!code.is_tree_code () + || !operation_no_trapping_overflow (type, tree_code (code))); if (SAT_FIXED_POINT_TYPE_P (type)) return true; @@ -3272,7 +3276,7 @@ needs_fold_left_reduction_p (tree type, tree_code code) static bool check_reduction_path (dump_user_location_t loc, loop_p loop, gphi *phi, - tree loop_arg, enum tree_code *code, + tree loop_arg, code_helper *code, vec > &path) { auto_bitmap visited; @@ -3347,45 +3351,57 @@ pop: for (unsigned i = 1; i < path.length (); ++i) { gimple *use_stmt = USE_STMT (path[i].second); - tree op = USE_FROM_PTR (path[i].second); - if (! is_gimple_assign (use_stmt) + gimple_match_op op; + if (!gimple_extract_op (use_stmt, &op)) + { + fail = true; + break; + } + unsigned int opi = op.num_ops; + if (gassign *assign = dyn_cast (use_stmt)) + { /* The following make sure we can compute the operand index easily plus it mostly disallows chaining via COND_EXPR condition operands. 
*/ - || (gimple_assign_rhs1_ptr (use_stmt) != path[i].second->use - && (gimple_num_ops (use_stmt) <= 2 - || gimple_assign_rhs2_ptr (use_stmt) != path[i].second->use) - && (gimple_num_ops (use_stmt) <= 3 - || gimple_assign_rhs3_ptr (use_stmt) != path[i].second->use))) + for (opi = 0; opi < op.num_ops; ++opi) + if (gimple_assign_rhs1_ptr (assign) + opi == path[i].second->use) + break; + } + else if (gcall *call = dyn_cast (use_stmt)) + { + for (opi = 0; opi < op.num_ops; ++opi) + if (gimple_call_arg_ptr (call, opi) == path[i].second->use) + break; + } + if (opi == op.num_ops) { fail = true; break; } - tree_code use_code = gimple_assign_rhs_code (use_stmt); - if (use_code == MINUS_EXPR) + op.code = canonicalize_code (op.code, op.type); + if (op.code == MINUS_EXPR) { - use_code = PLUS_EXPR; + op.code = PLUS_EXPR; /* Track whether we negate the reduction value each iteration. */ - if (gimple_assign_rhs2 (use_stmt) == op) + if (op.ops[1] == op.ops[opi]) neg = ! neg; } - if (CONVERT_EXPR_CODE_P (use_code) - && tree_nop_conversion_p (TREE_TYPE (gimple_assign_lhs (use_stmt)), - TREE_TYPE (gimple_assign_rhs1 (use_stmt)))) + if (CONVERT_EXPR_CODE_P (op.code) + && tree_nop_conversion_p (op.type, TREE_TYPE (op.ops[0]))) ; else if (*code == ERROR_MARK) { - *code = use_code; - sign = TYPE_SIGN (TREE_TYPE (gimple_assign_lhs (use_stmt))); + *code = op.code; + sign = TYPE_SIGN (op.type); } - else if (use_code != *code) + else if (op.code != *code) { fail = true; break; } - else if ((use_code == MIN_EXPR - || use_code == MAX_EXPR) - && sign != TYPE_SIGN (TREE_TYPE (gimple_assign_lhs (use_stmt)))) + else if ((op.code == MIN_EXPR + || op.code == MAX_EXPR) + && sign != TYPE_SIGN (op.type)) { fail = true; break; @@ -3397,7 +3413,7 @@ pop: imm_use_iterator imm_iter; gimple *op_use_stmt; unsigned cnt = 0; - FOR_EACH_IMM_USE_STMT (op_use_stmt, imm_iter, op) + FOR_EACH_IMM_USE_STMT (op_use_stmt, imm_iter, op.ops[opi]) if (!is_gimple_debug (op_use_stmt) && (*code != ERROR_MARK || 
flow_bb_inside_loop_p (loop, gimple_bb (op_use_stmt)))) @@ -3427,7 +3443,7 @@ check_reduction_path (dump_user_location_t loc, loop_p loop, gphi *phi, tree loop_arg, enum tree_code code) { auto_vec > path; - enum tree_code code_; + code_helper code_; return (check_reduction_path (loc, loop, phi, loop_arg, &code_, path) && code_ == code); } @@ -3596,9 +3612,9 @@ vect_is_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info, gimple *def1 = SSA_NAME_DEF_STMT (op1); if (gimple_bb (def1) && flow_bb_inside_loop_p (loop, gimple_bb (def_stmt)) - && loop->inner - && flow_bb_inside_loop_p (loop->inner, gimple_bb (def1)) - && is_gimple_assign (def1) + && loop->inner + && flow_bb_inside_loop_p (loop->inner, gimple_bb (def1)) + && (is_gimple_assign (def1) || is_gimple_call (def1)) && is_a (phi_use_stmt) && flow_bb_inside_loop_p (loop->inner, gimple_bb (phi_use_stmt))) { @@ -3615,7 +3631,7 @@ vect_is_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info, /* Look for the expression computing latch_def from then loop PHI result. 
*/ auto_vec > path; - enum tree_code code; + code_helper code; if (check_reduction_path (vect_location, loop, phi, latch_def, &code, path)) { @@ -3633,15 +3649,24 @@ vect_is_simple_reduction (loop_vec_info loop_info, stmt_vec_info phi_info, { gimple *stmt = USE_STMT (path[i].second); stmt_vec_info stmt_info = loop_info->lookup_stmt (stmt); - STMT_VINFO_REDUC_IDX (stmt_info) - = path[i].second->use - gimple_assign_rhs1_ptr (stmt); - enum tree_code stmt_code = gimple_assign_rhs_code (stmt); - bool leading_conversion = (CONVERT_EXPR_CODE_P (stmt_code) + gimple_match_op op; + if (!gimple_extract_op (stmt, &op)) + gcc_unreachable (); + if (gassign *assign = dyn_cast (stmt)) + STMT_VINFO_REDUC_IDX (stmt_info) + = path[i].second->use - gimple_assign_rhs1_ptr (assign); + else + { + gcall *call = as_a (stmt); + STMT_VINFO_REDUC_IDX (stmt_info) + = path[i].second->use - gimple_call_arg_ptr (call, 0); + } + bool leading_conversion = (CONVERT_EXPR_CODE_P (op.code) && (i == 1 || i == path.length () - 1)); - if ((stmt_code != code && !leading_conversion) + if ((op.code != code && !leading_conversion) /* We can only handle the final value in epilogue generation for reduction chains. */ - || (i != 1 && !has_single_use (gimple_assign_lhs (stmt)))) + || (i != 1 && !has_single_use (gimple_get_lhs (stmt)))) is_slp_reduc = false; /* For reduction chains we support a trailing/leading conversions. We do not store those in the actual chain. 
*/ @@ -4390,8 +4415,6 @@ vect_model_reduction_cost (loop_vec_info loop_vinfo, int ncopies, stmt_vector_for_cost *cost_vec) { int prologue_cost = 0, epilogue_cost = 0, inside_cost = 0; - enum tree_code code; - optab optab; tree vectype; machine_mode mode; class loop *loop = NULL; @@ -4407,7 +4430,9 @@ vect_model_reduction_cost (loop_vec_info loop_vinfo, mode = TYPE_MODE (vectype); stmt_vec_info orig_stmt_info = vect_orig_stmt (stmt_info); - code = gimple_assign_rhs_code (orig_stmt_info->stmt); + gimple_match_op op; + if (!gimple_extract_op (orig_stmt_info->stmt, &op)) + gcc_unreachable (); if (reduction_type == EXTRACT_LAST_REDUCTION) /* No extra instructions are needed in the prologue. The loop body @@ -4501,20 +4526,16 @@ vect_model_reduction_cost (loop_vec_info loop_vinfo, else { int vec_size_in_bits = tree_to_uhwi (TYPE_SIZE (vectype)); - tree bitsize = - TYPE_SIZE (TREE_TYPE (gimple_assign_lhs (orig_stmt_info->stmt))); + tree bitsize = TYPE_SIZE (op.type); int element_bitsize = tree_to_uhwi (bitsize); int nelements = vec_size_in_bits / element_bitsize; - if (code == COND_EXPR) - code = MAX_EXPR; - - optab = optab_for_tree_code (code, vectype, optab_default); + if (op.code == COND_EXPR) + op.code = MAX_EXPR; /* We have a whole vector shift available. */ - if (optab != unknown_optab - && VECTOR_MODE_P (mode) - && optab_handler (optab, mode) != CODE_FOR_nothing + if (VECTOR_MODE_P (mode) + && directly_supported_p (op.code, vectype) && have_whole_vector_shift (mode)) { /* Final reduction via vector shifts and the reduction operator. @@ -4855,7 +4876,7 @@ vect_find_reusable_accumulator (loop_vec_info loop_vinfo, initialize the accumulator with a neutral value instead. 
*/ if (!operand_equal_p (initial_value, main_adjustment)) return false; - tree_code code = STMT_VINFO_REDUC_CODE (reduc_info); + code_helper code = STMT_VINFO_REDUC_CODE (reduc_info); initial_values[0] = neutral_op_for_reduction (TREE_TYPE (initial_value), code, initial_value); } @@ -4870,7 +4891,7 @@ vect_find_reusable_accumulator (loop_vec_info loop_vinfo, CODE emitting stmts before GSI. Returns a vector def of VECTYPE. */ static tree -vect_create_partial_epilog (tree vec_def, tree vectype, enum tree_code code, +vect_create_partial_epilog (tree vec_def, tree vectype, code_helper code, gimple_seq *seq) { unsigned nunits = TYPE_VECTOR_SUBPARTS (TREE_TYPE (vec_def)).to_constant (); @@ -4953,9 +4974,7 @@ vect_create_partial_epilog (tree vec_def, tree vectype, enum tree_code code, gimple_seq_add_stmt_without_update (seq, epilog_stmt); } - new_temp = make_ssa_name (vectype1); - epilog_stmt = gimple_build_assign (new_temp, code, dst1, dst2); - gimple_seq_add_stmt_without_update (seq, epilog_stmt); + new_temp = gimple_build (seq, code, vectype1, dst1, dst2); } return new_temp; @@ -5032,7 +5051,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, } gphi *reduc_def_stmt = as_a (STMT_VINFO_REDUC_DEF (vect_orig_stmt (stmt_info))->stmt); - enum tree_code code = STMT_VINFO_REDUC_CODE (reduc_info); + code_helper code = STMT_VINFO_REDUC_CODE (reduc_info); internal_fn reduc_fn = STMT_VINFO_REDUC_FN (reduc_info); tree vectype; machine_mode mode; @@ -5699,14 +5718,9 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, tree vectype1 = get_related_vectype_for_scalar_type (TYPE_MODE (vectype), stype, nunits1); reduce_with_shift = have_whole_vector_shift (mode1); - if (!VECTOR_MODE_P (mode1)) + if (!VECTOR_MODE_P (mode1) + || !directly_supported_p (code, vectype1)) reduce_with_shift = false; - else - { - optab optab = optab_for_tree_code (code, vectype1, optab_default); - if (optab_handler (optab, mode1) == CODE_FOR_nothing) - reduce_with_shift = false; - } /* 
First reduce the vector to the desired vector size we should do shift reduction on by combining upper and lower halves. */ @@ -5944,7 +5958,7 @@ vect_create_epilog_for_reduction (loop_vec_info loop_vinfo, for (k = 0; k < live_out_stmts.size (); k++) { stmt_vec_info scalar_stmt_info = vect_orig_stmt (live_out_stmts[k]); - scalar_dest = gimple_assign_lhs (scalar_stmt_info->stmt); + scalar_dest = gimple_get_lhs (scalar_stmt_info->stmt); phis.create (3); /* Find the loop-closed-use at the loop exit of the original scalar @@ -6277,7 +6291,7 @@ is_nonwrapping_integer_induction (stmt_vec_info stmt_vinfo, class loop *loop) CODE is the code for the operation. COND_FN is the conditional internal function, if it exists. VECTYPE_IN is the type of the vector input. */ static bool -use_mask_by_cond_expr_p (enum tree_code code, internal_fn cond_fn, +use_mask_by_cond_expr_p (code_helper code, internal_fn cond_fn, tree vectype_in) { if (cond_fn != IFN_LAST @@ -6285,15 +6299,17 @@ use_mask_by_cond_expr_p (enum tree_code code, internal_fn cond_fn, OPTIMIZE_FOR_SPEED)) return false; - switch (code) - { - case DOT_PROD_EXPR: - case SAD_EXPR: - return true; + if (code.is_tree_code ()) + switch (tree_code (code)) + { + case DOT_PROD_EXPR: + case SAD_EXPR: + return true; - default: - return false; - } + default: + break; + } + return false; } /* Insert a conditional expression to enable masked vectorization. CODE is the @@ -6301,10 +6317,10 @@ use_mask_by_cond_expr_p (enum tree_code code, internal_fn cond_fn, mask. GSI is a statement iterator used to place the new conditional expression. 
*/ static void -build_vect_cond_expr (enum tree_code code, tree vop[3], tree mask, +build_vect_cond_expr (code_helper code, tree vop[3], tree mask, gimple_stmt_iterator *gsi) { - switch (code) + switch (tree_code (code)) { case DOT_PROD_EXPR: { @@ -6390,12 +6406,10 @@ vectorizable_reduction (loop_vec_info loop_vinfo, slp_instance slp_node_instance, stmt_vector_for_cost *cost_vec) { - tree scalar_dest; tree vectype_in = NULL_TREE; class loop *loop = LOOP_VINFO_LOOP (loop_vinfo); enum vect_def_type cond_reduc_dt = vect_unknown_def_type; stmt_vec_info cond_stmt_vinfo = NULL; - tree scalar_type; int i; int ncopies; bool single_defuse_cycle = false; @@ -6508,18 +6522,18 @@ vectorizable_reduction (loop_vec_info loop_vinfo, info_for_reduction to work. */ if (STMT_VINFO_LIVE_P (vdef)) STMT_VINFO_REDUC_DEF (def) = phi_info; - gassign *assign = dyn_cast (vdef->stmt); - if (!assign) + gimple_match_op op; + if (!gimple_extract_op (vdef->stmt, &op)) { if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, - "reduction chain includes calls.\n"); + "reduction chain includes unsupported" + " statement type.\n"); return false; } - if (CONVERT_EXPR_CODE_P (gimple_assign_rhs_code (assign))) + if (CONVERT_EXPR_CODE_P (op.code)) { - if (!tree_nop_conversion_p (TREE_TYPE (gimple_assign_lhs (assign)), - TREE_TYPE (gimple_assign_rhs1 (assign)))) + if (!tree_nop_conversion_p (op.type, TREE_TYPE (op.ops[0]))) { if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -6530,7 +6544,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, else if (!stmt_info) /* First non-conversion stmt. 
*/ stmt_info = vdef; - reduc_def = gimple_op (vdef->stmt, 1 + STMT_VINFO_REDUC_IDX (vdef)); + reduc_def = op.ops[STMT_VINFO_REDUC_IDX (vdef)]; reduc_chain_length++; if (!stmt_info && slp_node) slp_for_stmt_info = SLP_TREE_CHILDREN (slp_for_stmt_info)[0]; @@ -6588,26 +6602,24 @@ vectorizable_reduction (loop_vec_info loop_vinfo, tree vectype_out = STMT_VINFO_VECTYPE (stmt_info); STMT_VINFO_REDUC_VECTYPE (reduc_info) = vectype_out; - gassign *stmt = as_a (stmt_info->stmt); - enum tree_code code = gimple_assign_rhs_code (stmt); - bool lane_reduc_code_p - = (code == DOT_PROD_EXPR || code == WIDEN_SUM_EXPR || code == SAD_EXPR); - int op_type = TREE_CODE_LENGTH (code); + gimple_match_op op; + if (!gimple_extract_op (stmt_info->stmt, &op)) + gcc_unreachable (); + bool lane_reduc_code_p = (op.code == DOT_PROD_EXPR + || op.code == WIDEN_SUM_EXPR + || op.code == SAD_EXPR); enum optab_subtype optab_query_kind = optab_vector; - if (code == DOT_PROD_EXPR - && TYPE_SIGN (TREE_TYPE (gimple_assign_rhs1 (stmt))) - != TYPE_SIGN (TREE_TYPE (gimple_assign_rhs2 (stmt)))) + if (op.code == DOT_PROD_EXPR + && (TYPE_SIGN (TREE_TYPE (op.ops[0])) + != TYPE_SIGN (TREE_TYPE (op.ops[1])))) optab_query_kind = optab_vector_mixed_sign; - - scalar_dest = gimple_assign_lhs (stmt); - scalar_type = TREE_TYPE (scalar_dest); - if (!POINTER_TYPE_P (scalar_type) && !INTEGRAL_TYPE_P (scalar_type) - && !SCALAR_FLOAT_TYPE_P (scalar_type)) + if (!POINTER_TYPE_P (op.type) && !INTEGRAL_TYPE_P (op.type) + && !SCALAR_FLOAT_TYPE_P (op.type)) return false; /* Do not try to vectorize bit-precision reductions. */ - if (!type_has_mode_precision_p (scalar_type)) + if (!type_has_mode_precision_p (op.type)) return false; /* For lane-reducing ops we're reducing the number of reduction PHIs @@ -6626,25 +6638,23 @@ vectorizable_reduction (loop_vec_info loop_vinfo, The last use is the reduction variable. In case of nested cycle this assumption is not true: we use reduc_index to record the index of the reduction variable. 
*/ - slp_tree *slp_op = XALLOCAVEC (slp_tree, op_type); + slp_tree *slp_op = XALLOCAVEC (slp_tree, op.num_ops); /* We need to skip an extra operand for COND_EXPRs with embedded comparison. */ unsigned opno_adjust = 0; - if (code == COND_EXPR - && COMPARISON_CLASS_P (gimple_assign_rhs1 (stmt))) + if (op.code == COND_EXPR && COMPARISON_CLASS_P (op.ops[0])) opno_adjust = 1; - for (i = 0; i < op_type; i++) + for (i = 0; i < (int) op.num_ops; i++) { /* The condition of COND_EXPR is checked in vectorizable_condition(). */ - if (i == 0 && code == COND_EXPR) + if (i == 0 && op.code == COND_EXPR) continue; stmt_vec_info def_stmt_info; enum vect_def_type dt; - tree op; if (!vect_is_simple_use (loop_vinfo, stmt_info, slp_for_stmt_info, - i + opno_adjust, &op, &slp_op[i], &dt, &tem, - &def_stmt_info)) + i + opno_adjust, &op.ops[i], &slp_op[i], &dt, + &tem, &def_stmt_info)) { if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -6669,13 +6679,13 @@ vectorizable_reduction (loop_vec_info loop_vinfo, < GET_MODE_SIZE (SCALAR_TYPE_MODE (TREE_TYPE (tem)))))) vectype_in = tem; - if (code == COND_EXPR) + if (op.code == COND_EXPR) { /* Record how the non-reduction-def value of COND_EXPR is defined. */ if (dt == vect_constant_def) { cond_reduc_dt = dt; - cond_reduc_val = op; + cond_reduc_val = op.ops[i]; } if (dt == vect_induction_def && def_stmt_info @@ -6845,7 +6855,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, (and also the same tree-code) when generating the epilog code and when generating the code inside the loop. 
*/ - enum tree_code orig_code = STMT_VINFO_REDUC_CODE (phi_info); + code_helper orig_code = STMT_VINFO_REDUC_CODE (phi_info); STMT_VINFO_REDUC_CODE (reduc_info) = orig_code; vect_reduction_type reduction_type = STMT_VINFO_REDUC_TYPE (reduc_info); @@ -6864,7 +6874,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, && !REDUC_GROUP_FIRST_ELEMENT (stmt_info) && known_eq (LOOP_VINFO_VECT_FACTOR (loop_vinfo), 1u)) ; - else if (needs_fold_left_reduction_p (scalar_type, orig_code)) + else if (needs_fold_left_reduction_p (op.type, orig_code)) { /* When vectorizing a reduction chain w/o SLP the reduction PHI is not directy used in stmt. */ @@ -6879,8 +6889,8 @@ vectorizable_reduction (loop_vec_info loop_vinfo, STMT_VINFO_REDUC_TYPE (reduc_info) = reduction_type = FOLD_LEFT_REDUCTION; } - else if (!commutative_tree_code (orig_code) - || !associative_tree_code (orig_code)) + else if (!commutative_binary_op_p (orig_code, op.type) + || !associative_binary_op_p (orig_code, op.type)) { if (dump_enabled_p ()) dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, @@ -6935,7 +6945,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, else if (reduction_type == COND_REDUCTION) { int scalar_precision - = GET_MODE_PRECISION (SCALAR_TYPE_MODE (scalar_type)); + = GET_MODE_PRECISION (SCALAR_TYPE_MODE (op.type)); cr_index_scalar_type = make_unsigned_type (scalar_precision); cr_index_vector_type = get_same_sized_vectype (cr_index_scalar_type, vectype_out); @@ -7121,28 +7131,19 @@ vectorizable_reduction (loop_vec_info loop_vinfo, if (single_defuse_cycle || lane_reduc_code_p) { - gcc_assert (code != COND_EXPR); + gcc_assert (op.code != COND_EXPR); /* 4. Supportable by target? */ bool ok = true; /* 4.1. 
check support for the operation in the loop */ - optab optab = optab_for_tree_code (code, vectype_in, optab_query_kind); - if (!optab) - { - if (dump_enabled_p ()) - dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, - "no optab.\n"); - ok = false; - } - machine_mode vec_mode = TYPE_MODE (vectype_in); - if (ok && optab_handler (optab, vec_mode) == CODE_FOR_nothing) + if (!directly_supported_p (op.code, vectype_in, optab_query_kind)) { if (dump_enabled_p ()) dump_printf (MSG_NOTE, "op not supported by target.\n"); if (maybe_ne (GET_MODE_SIZE (vec_mode), UNITS_PER_WORD) - || !vect_can_vectorize_without_simd_p (code)) + || !vect_can_vectorize_without_simd_p (op.code)) ok = false; else if (dump_enabled_p ()) @@ -7150,7 +7151,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, } if (vect_emulated_vector_p (vectype_in) - && !vect_can_vectorize_without_simd_p (code)) + && !vect_can_vectorize_without_simd_p (op.code)) { if (dump_enabled_p ()) dump_printf (MSG_NOTE, "using word mode not possible.\n"); @@ -7183,11 +7184,9 @@ vectorizable_reduction (loop_vec_info loop_vinfo, if (slp_node && !(!single_defuse_cycle - && code != DOT_PROD_EXPR - && code != WIDEN_SUM_EXPR - && code != SAD_EXPR + && !lane_reduc_code_p && reduction_type != FOLD_LEFT_REDUCTION)) - for (i = 0; i < op_type; i++) + for (i = 0; i < (int) op.num_ops; i++) if (!vect_maybe_update_slp_op_vectype (slp_op[i], vectype_in)) { if (dump_enabled_p ()) @@ -7206,10 +7205,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, /* Cost the reduction op inside the loop if transformed via vect_transform_reduction. Otherwise this is costed by the separate vectorizable_* routines. 
*/ - if (single_defuse_cycle - || code == DOT_PROD_EXPR - || code == WIDEN_SUM_EXPR - || code == SAD_EXPR) + if (single_defuse_cycle || lane_reduc_code_p) record_stmt_cost (cost_vec, ncopies, vector_stmt, stmt_info, 0, vect_body); if (dump_enabled_p () @@ -7220,9 +7216,7 @@ vectorizable_reduction (loop_vec_info loop_vinfo, /* All but single defuse-cycle optimized, lane-reducing and fold-left reductions go through their own vectorizable_* routines. */ if (!single_defuse_cycle - && code != DOT_PROD_EXPR - && code != WIDEN_SUM_EXPR - && code != SAD_EXPR + && !lane_reduc_code_p && reduction_type != FOLD_LEFT_REDUCTION) { stmt_vec_info tem @@ -7238,10 +7232,10 @@ vectorizable_reduction (loop_vec_info loop_vinfo, else if (loop_vinfo && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo)) { vec_loop_masks *masks = &LOOP_VINFO_MASKS (loop_vinfo); - internal_fn cond_fn = get_conditional_internal_fn (code); + internal_fn cond_fn = get_conditional_internal_fn (op.code, op.type); if (reduction_type != FOLD_LEFT_REDUCTION - && !use_mask_by_cond_expr_p (code, cond_fn, vectype_in) + && !use_mask_by_cond_expr_p (op.code, cond_fn, vectype_in) && (cond_fn == IFN_LAST || !direct_internal_fn_supported_p (cond_fn, vectype_in, OPTIMIZE_FOR_SPEED))) @@ -7294,24 +7288,11 @@ vect_transform_reduction (loop_vec_info loop_vinfo, gcc_assert (STMT_VINFO_DEF_TYPE (reduc_info) == vect_double_reduction_def); } - gassign *stmt = as_a (stmt_info->stmt); - enum tree_code code = gimple_assign_rhs_code (stmt); - int op_type = TREE_CODE_LENGTH (code); - - /* Flatten RHS. */ - tree ops[3]; - switch (get_gimple_rhs_class (code)) - { - case GIMPLE_TERNARY_RHS: - ops[2] = gimple_assign_rhs3 (stmt); - /* Fall thru. 
*/ - case GIMPLE_BINARY_RHS: - ops[0] = gimple_assign_rhs1 (stmt); - ops[1] = gimple_assign_rhs2 (stmt); - break; - default: - gcc_unreachable (); - } + gimple_match_op op; + if (!gimple_extract_op (stmt_info->stmt, &op)) + gcc_unreachable (); + gcc_assert (op.code.is_tree_code ()); + auto code = tree_code (op.code); /* All uses but the last are expected to be defined in the loop. The last use is the reduction variable. In case of nested cycle this @@ -7359,7 +7340,7 @@ vect_transform_reduction (loop_vec_info loop_vinfo, internal_fn reduc_fn = STMT_VINFO_REDUC_FN (reduc_info); return vectorize_fold_left_reduction (loop_vinfo, stmt_info, gsi, vec_stmt, slp_node, reduc_def_phi, code, - reduc_fn, ops, vectype_in, reduc_index, masks); + reduc_fn, op.ops, vectype_in, reduc_index, masks); } bool single_defuse_cycle = STMT_VINFO_FORCE_SINGLE_CYCLE (reduc_info); @@ -7369,22 +7350,22 @@ vect_transform_reduction (loop_vec_info loop_vinfo, || code == SAD_EXPR); /* Create the destination vector */ - tree scalar_dest = gimple_assign_lhs (stmt); + tree scalar_dest = gimple_assign_lhs (stmt_info->stmt); tree vec_dest = vect_create_destination_var (scalar_dest, vectype_out); vect_get_vec_defs (loop_vinfo, stmt_info, slp_node, ncopies, single_defuse_cycle && reduc_index == 0 - ? NULL_TREE : ops[0], &vec_oprnds0, + ? NULL_TREE : op.ops[0], &vec_oprnds0, single_defuse_cycle && reduc_index == 1 - ? NULL_TREE : ops[1], &vec_oprnds1, - op_type == ternary_op + ? NULL_TREE : op.ops[1], &vec_oprnds1, + op.num_ops == 3 && !(single_defuse_cycle && reduc_index == 2) - ? ops[2] : NULL_TREE, &vec_oprnds2); + ? op.ops[2] : NULL_TREE, &vec_oprnds2); if (single_defuse_cycle) { gcc_assert (!slp_node); vect_get_vec_defs_for_operand (loop_vinfo, stmt_info, 1, - ops[reduc_index], + op.ops[reduc_index], reduc_index == 0 ? &vec_oprnds0 : (reduc_index == 1 ? 
&vec_oprnds1 : &vec_oprnds2)); @@ -7414,7 +7395,7 @@ vect_transform_reduction (loop_vec_info loop_vinfo, } else { - if (op_type == ternary_op) + if (op.num_ops == 3) vop[2] = vec_oprnds2[i]; if (masked_loop_p && mask_by_cond_expr) @@ -7546,7 +7527,7 @@ vect_transform_cycle_phi (loop_vec_info loop_vinfo, { tree initial_value = (num_phis == 1 ? initial_values[0] : NULL_TREE); - tree_code code = STMT_VINFO_REDUC_CODE (reduc_info); + code_helper code = STMT_VINFO_REDUC_CODE (reduc_info); tree neutral_op = neutral_op_for_reduction (TREE_TYPE (vectype_out), code, initial_value); @@ -7603,7 +7584,7 @@ vect_transform_cycle_phi (loop_vec_info loop_vinfo, if (!reduc_info->reduc_initial_values.is_empty ()) { initial_def = reduc_info->reduc_initial_values[0]; - enum tree_code code = STMT_VINFO_REDUC_CODE (reduc_info); + code_helper code = STMT_VINFO_REDUC_CODE (reduc_info); tree neutral_op = neutral_op_for_reduction (TREE_TYPE (initial_def), code, initial_def); @@ -7901,6 +7882,15 @@ vect_can_vectorize_without_simd_p (tree_code code) } } +/* Likewise, but taking a code_helper. */ + +bool +vect_can_vectorize_without_simd_p (code_helper code) +{ + return (code.is_tree_code () + && vect_can_vectorize_without_simd_p (tree_code (code))); +} + /* Function vectorizable_induction Check if STMT_INFO performs an induction computation that can be vectorized. diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c index 854cbcff390..26421ee5511 100644 --- a/gcc/tree-vect-patterns.c +++ b/gcc/tree-vect-patterns.c @@ -5594,8 +5594,10 @@ vect_mark_pattern_stmts (vec_info *vinfo, /* Transfer reduction path info to the pattern. 
*/ if (STMT_VINFO_REDUC_IDX (orig_stmt_info_saved) != -1) { - tree lookfor = gimple_op (orig_stmt_info_saved->stmt, - 1 + STMT_VINFO_REDUC_IDX (orig_stmt_info)); + gimple_match_op op; + if (!gimple_extract_op (orig_stmt_info_saved->stmt, &op)) + gcc_unreachable (); + tree lookfor = op.ops[STMT_VINFO_REDUC_IDX (orig_stmt_info)]; /* Search the pattern def sequence and the main pattern stmt. Note we may have inserted all into a containing pattern def sequence so the following is a bit awkward. */ @@ -5615,14 +5617,15 @@ vect_mark_pattern_stmts (vec_info *vinfo, do { bool found = false; - for (unsigned i = 1; i < gimple_num_ops (s); ++i) - if (gimple_op (s, i) == lookfor) - { - STMT_VINFO_REDUC_IDX (vinfo->lookup_stmt (s)) = i - 1; - lookfor = gimple_get_lhs (s); - found = true; - break; - } + if (gimple_extract_op (s, &op)) + for (unsigned i = 0; i < op.num_ops; ++i) + if (op.ops[i] == lookfor) + { + STMT_VINFO_REDUC_IDX (vinfo->lookup_stmt (s)) = i; + lookfor = gimple_get_lhs (s); + found = true; + break; + } if (s == pattern_stmt) { if (!found && dump_enabled_p ()) diff --git a/gcc/tree-vect-stmts.c b/gcc/tree-vect-stmts.c index 03cc7267cf8..1e197023b98 100644 --- a/gcc/tree-vect-stmts.c +++ b/gcc/tree-vect-stmts.c @@ -3202,7 +3202,6 @@ vectorizable_call (vec_info *vinfo, int ndts = ARRAY_SIZE (dt); int ncopies, j; auto_vec vargs; - auto_vec orig_vargs; enum { NARROW, NONE, WIDEN } modifier; size_t i, nargs; tree lhs; @@ -3426,6 +3425,8 @@ vectorizable_call (vec_info *vinfo, needs to be generated. */ gcc_assert (ncopies >= 1); + int reduc_idx = STMT_VINFO_REDUC_IDX (stmt_info); + internal_fn cond_fn = get_conditional_internal_fn (ifn); vec_loop_masks *masks = (loop_vinfo ? &LOOP_VINFO_MASKS (loop_vinfo) : NULL); if (!vec_stmt) /* transformation not required. 
*/ { @@ -3446,14 +3447,33 @@ vectorizable_call (vec_info *vinfo, record_stmt_cost (cost_vec, ncopies / 2, vec_promote_demote, stmt_info, 0, vect_body); - if (loop_vinfo && mask_opno >= 0) + if (loop_vinfo + && LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) + && (reduc_idx >= 0 || mask_opno >= 0)) { - unsigned int nvectors = (slp_node - ? SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node) - : ncopies); - tree scalar_mask = gimple_call_arg (stmt_info->stmt, mask_opno); - vect_record_loop_mask (loop_vinfo, masks, nvectors, - vectype_out, scalar_mask); + if (reduc_idx >= 0 + && (cond_fn == IFN_LAST + || !direct_internal_fn_supported_p (cond_fn, vectype_out, + OPTIMIZE_FOR_SPEED))) + { + if (dump_enabled_p ()) + dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location, + "can't use a fully-masked loop because no" + " conditional operation is available.\n"); + LOOP_VINFO_CAN_USE_PARTIAL_VECTORS_P (loop_vinfo) = false; + } + else + { + unsigned int nvectors + = (slp_node + ? SLP_TREE_NUMBER_OF_VEC_STMTS (slp_node) + : ncopies); + tree scalar_mask = NULL_TREE; + if (mask_opno >= 0) + scalar_mask = gimple_call_arg (stmt_info->stmt, mask_opno); + vect_record_loop_mask (loop_vinfo, masks, nvectors, + vectype_out, scalar_mask); + } } return true; } @@ -3468,12 +3488,17 @@ vectorizable_call (vec_info *vinfo, vec_dest = vect_create_destination_var (scalar_dest, vectype_out); bool masked_loop_p = loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo); + unsigned int vect_nargs = nargs; + if (masked_loop_p && reduc_idx >= 0) + { + ifn = cond_fn; + vect_nargs += 2; + } if (modifier == NONE || ifn != IFN_LAST) { tree prev_res = NULL_TREE; - vargs.safe_grow (nargs, true); - orig_vargs.safe_grow (nargs, true); + vargs.safe_grow (vect_nargs, true); auto_vec > vec_defs (nargs); for (j = 0; j < ncopies; ++j) { @@ -3488,12 +3513,23 @@ vectorizable_call (vec_info *vinfo, /* Arguments are ready. Create the new vector stmt. 
*/ FOR_EACH_VEC_ELT (vec_oprnds0, i, vec_oprnd0) { + int varg = 0; + if (masked_loop_p && reduc_idx >= 0) + { + unsigned int vec_num = vec_oprnds0.length (); + /* Always true for SLP. */ + gcc_assert (ncopies == 1); + vargs[varg++] = vect_get_loop_mask (gsi, masks, vec_num, + vectype_out, i); + } size_t k; for (k = 0; k < nargs; k++) { vec vec_oprndsk = vec_defs[k]; - vargs[k] = vec_oprndsk[i]; + vargs[varg++] = vec_oprndsk[i]; } + if (masked_loop_p && reduc_idx >= 0) + vargs[varg++] = vargs[reduc_idx + 1]; gimple *new_stmt; if (modifier == NARROW) { @@ -3546,6 +3582,10 @@ vectorizable_call (vec_info *vinfo, continue; } + int varg = 0; + if (masked_loop_p && reduc_idx >= 0) + vargs[varg++] = vect_get_loop_mask (gsi, masks, ncopies, + vectype_out, j); for (i = 0; i < nargs; i++) { op = gimple_call_arg (stmt, i); @@ -3556,8 +3596,10 @@ vectorizable_call (vec_info *vinfo, op, &vec_defs[i], vectypes[i]); } - orig_vargs[i] = vargs[i] = vec_defs[i][j]; + vargs[varg++] = vec_defs[i][j]; } + if (masked_loop_p && reduc_idx >= 0) + vargs[varg++] = vargs[reduc_idx + 1]; if (mask_opno >= 0 && masked_loop_p) { diff --git a/gcc/tree-vectorizer.h b/gcc/tree-vectorizer.h index f8f30641512..8330cd897b8 100644 --- a/gcc/tree-vectorizer.h +++ b/gcc/tree-vectorizer.h @@ -28,6 +28,7 @@ typedef class _stmt_vec_info *stmt_vec_info; #include "target.h" #include "internal-fn.h" #include "tree-ssa-operands.h" +#include "gimple-match.h" /* Used for naming of new temporaries. */ enum vect_var_kind { @@ -1192,7 +1193,7 @@ public: enum vect_reduction_type reduc_type; /* The original reduction code, to be used in the epilogue. */ - enum tree_code reduc_code; + code_helper reduc_code; /* An internal function we should use in the epilogue. */ internal_fn reduc_fn; @@ -2151,7 +2152,7 @@ extern tree vect_create_addr_base_for_vector_ref (vec_info *, tree); /* In tree-vect-loop.c. 
*/ -extern tree neutral_op_for_reduction (tree, tree_code, tree); +extern tree neutral_op_for_reduction (tree, code_helper, tree); extern widest_int vect_iv_limit_for_partial_vectors (loop_vec_info loop_vinfo); bool vect_rgroup_iv_might_wrap_p (loop_vec_info, rgroup_controls *); /* Used in tree-vect-loop-manip.c */ @@ -2160,7 +2161,7 @@ extern opt_result vect_determine_partial_vectors_and_peeling (loop_vec_info, /* Used in gimple-loop-interchange.c and tree-parloops.c. */ extern bool check_reduction_path (dump_user_location_t, loop_p, gphi *, tree, enum tree_code); -extern bool needs_fold_left_reduction_p (tree, tree_code); +extern bool needs_fold_left_reduction_p (tree, code_helper); /* Drive for loop analysis stage. */ extern opt_loop_vec_info vect_analyze_loop (class loop *, vec_info_shared *); extern tree vect_build_loop_niters (loop_vec_info, bool * = NULL); @@ -2178,7 +2179,7 @@ extern tree vect_get_loop_len (loop_vec_info, vec_loop_lens *, unsigned int, unsigned int); extern gimple_seq vect_gen_len (tree, tree, tree, tree); extern stmt_vec_info info_for_reduction (vec_info *, stmt_vec_info); -extern bool reduction_fn_for_scalar_code (enum tree_code, internal_fn *); +extern bool reduction_fn_for_scalar_code (code_helper, internal_fn *); /* Drive for loop transformation stage. 
*/ extern class loop *vect_transform_loop (loop_vec_info, gimple *); @@ -2216,6 +2217,7 @@ extern bool vectorizable_phi (vec_info *, stmt_vec_info, gimple **, slp_tree, stmt_vector_for_cost *); extern bool vect_emulated_vector_p (tree); extern bool vect_can_vectorize_without_simd_p (tree_code); +extern bool vect_can_vectorize_without_simd_p (code_helper); extern int vect_get_known_peeling_cost (loop_vec_info, int, int *, stmt_vector_for_cost *, stmt_vector_for_cost *,

From patchwork Wed Nov 10 12:47:22 2021
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 47403
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 5/5] vect: Add support for fmax and fmin reductions
Date: Wed, 10 Nov 2021 12:47:22 +0000
From: Richard Sandiford

This patch adds support for reductions involving calls to fmax*() and
fmin*(), without the -ffast-math flags that allow them to be converted
to MAX_EXPR and MIN_EXPR.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard

gcc/
	* doc/md.texi (reduc_fmin_scal_@var{m}): Document.
	(reduc_fmax_scal_@var{m}): Likewise.
	* optabs.def (reduc_fmax_scal_optab): New optab.
	(reduc_fmin_scal_optab): Likewise.
	* internal-fn.def (REDUC_FMAX, REDUC_FMIN): New functions.
	* tree-vect-loop.c (reduction_fn_for_scalar_code): Handle
	CASE_CFN_FMAX and CASE_CFN_FMIN.
	(neutral_op_for_reduction): Likewise.
	(needs_fold_left_reduction_p): Likewise.
	* config/aarch64/iterators.md (FMAXMINNMV): New iterator.
	(fmaxmin): Handle UNSPEC_FMAXNMV and UNSPEC_FMINNMV.
	* config/aarch64/aarch64-simd.md (reduc__scal_): Fix
	unspec mode.
	(reduc__scal_): New pattern.
	* config/aarch64/aarch64-sve.md (reduc__scal_): Likewise.

gcc/testsuite/
	* gcc.dg/vect/vect-fmax-1.c: New test.
* gcc.dg/vect/vect-fmax-2.c: Likewise. * gcc.dg/vect/vect-fmax-3.c: Likewise. * gcc.dg/vect/vect-fmin-1.c: New test. * gcc.dg/vect/vect-fmin-2.c: Likewise. * gcc.dg/vect/vect-fmin-3.c: Likewise. * gcc.target/aarch64/fmaxnm_1.c: Likewise. * gcc.target/aarch64/fmaxnm_2.c: Likewise. * gcc.target/aarch64/fminnm_1.c: Likewise. * gcc.target/aarch64/fminnm_2.c: Likewise. * gcc.target/aarch64/sve/fmaxnm_1.c: Likewise. * gcc.target/aarch64/sve/fmaxnm_2.c: Likewise. * gcc.target/aarch64/sve/fminnm_1.c: Likewise. * gcc.target/aarch64/sve/fminnm_2.c: Likewise. --- gcc/config/aarch64/aarch64-simd.md | 15 +++- gcc/config/aarch64/aarch64-sve.md | 11 +++ gcc/config/aarch64/iterators.md | 4 + gcc/doc/md.texi | 8 ++ gcc/internal-fn.def | 4 + gcc/optabs.def | 2 + gcc/testsuite/gcc.dg/vect/vect-fmax-1.c | 83 ++++++++++++++++++ gcc/testsuite/gcc.dg/vect/vect-fmax-2.c | 7 ++ gcc/testsuite/gcc.dg/vect/vect-fmax-3.c | 83 ++++++++++++++++++ gcc/testsuite/gcc.dg/vect/vect-fmin-1.c | 86 +++++++++++++++++++ gcc/testsuite/gcc.dg/vect/vect-fmin-2.c | 9 ++ gcc/testsuite/gcc.dg/vect/vect-fmin-3.c | 83 ++++++++++++++++++ gcc/testsuite/gcc.target/aarch64/fmaxnm_1.c | 24 ++++++ gcc/testsuite/gcc.target/aarch64/fmaxnm_2.c | 20 +++++ gcc/testsuite/gcc.target/aarch64/fminnm_1.c | 24 ++++++ gcc/testsuite/gcc.target/aarch64/fminnm_2.c | 20 +++++ .../gcc.target/aarch64/sve/fmaxnm_2.c | 22 +++++ .../gcc.target/aarch64/sve/fmaxnm_3.c | 18 ++++ .../gcc.target/aarch64/sve/fminnm_2.c | 22 +++++ .../gcc.target/aarch64/sve/fminnm_3.c | 18 ++++ gcc/tree-vect-loop.c | 45 ++++++++-- 21 files changed, 599 insertions(+), 9 deletions(-) create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmax-1.c create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmax-2.c create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmax-3.c create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmin-1.c create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmin-2.c create mode 100644 gcc/testsuite/gcc.dg/vect/vect-fmin-3.c create mode 100644 
gcc/testsuite/gcc.target/aarch64/fmaxnm_1.c create mode 100644 gcc/testsuite/gcc.target/aarch64/fmaxnm_2.c create mode 100644 gcc/testsuite/gcc.target/aarch64/fminnm_1.c create mode 100644 gcc/testsuite/gcc.target/aarch64/fminnm_2.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_2.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_3.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/fminnm_2.c create mode 100644 gcc/testsuite/gcc.target/aarch64/sve/fminnm_3.c diff --git a/gcc/config/aarch64/aarch64-simd.md b/gcc/config/aarch64/aarch64-simd.md index 35d55a3e51e..8e7d783f7f3 100644 --- a/gcc/config/aarch64/aarch64-simd.md +++ b/gcc/config/aarch64/aarch64-simd.md @@ -3624,8 +3624,8 @@ (define_insn "popcount2" ;; gimple_fold'd to the IFN_REDUC_(MAX|MIN) function. (This is FP smax/smin). (define_expand "reduc__scal_" [(match_operand: 0 "register_operand") - (unspec:VHSDF [(match_operand:VHSDF 1 "register_operand")] - FMAXMINV)] + (unspec: [(match_operand:VHSDF 1 "register_operand")] + FMAXMINV)] "TARGET_SIMD" { rtx elt = aarch64_endian_lane_rtx (mode, 0); @@ -3637,6 +3637,17 @@ (define_expand "reduc__scal_" } ) +(define_expand "reduc__scal_" + [(match_operand: 0 "register_operand") + (unspec: [(match_operand:VHSDF 1 "register_operand")] + FMAXMINNMV)] + "TARGET_SIMD" + { + emit_insn (gen_reduc__scal_ (operands[0], operands[1])); + DONE; + } +) + ;; Likewise for integer cases, signed and unsigned. 
(define_expand "reduc__scal_" [(match_operand: 0 "register_operand") diff --git a/gcc/config/aarch64/aarch64-sve.md b/gcc/config/aarch64/aarch64-sve.md index 0f5bf5ea8cb..9ef968840c2 100644 --- a/gcc/config/aarch64/aarch64-sve.md +++ b/gcc/config/aarch64/aarch64-sve.md @@ -8566,6 +8566,17 @@ (define_expand "reduc__scal_" } ) +(define_expand "reduc__scal_" + [(match_operand: 0 "register_operand") + (unspec: [(match_operand:SVE_FULL_F 1 "register_operand")] + FMAXMINNMV)] + "TARGET_SVE" + { + emit_insn (gen_reduc__scal_ (operands[0], operands[1])); + DONE; + } +) + ;; Predicated floating-point tree reductions. (define_insn "@aarch64_pred_reduc__" [(set (match_operand: 0 "register_operand" "=w") diff --git a/gcc/config/aarch64/iterators.md b/gcc/config/aarch64/iterators.md index e8eebd863a6..fb568ddc4a0 100644 --- a/gcc/config/aarch64/iterators.md +++ b/gcc/config/aarch64/iterators.md @@ -2510,6 +2510,8 @@ (define_int_iterator MAXMINV [UNSPEC_UMAXV UNSPEC_UMINV (define_int_iterator FMAXMINV [UNSPEC_FMAXV UNSPEC_FMINV UNSPEC_FMAXNMV UNSPEC_FMINNMV]) +(define_int_iterator FMAXMINNMV [UNSPEC_FMAXNMV UNSPEC_FMINNMV]) + (define_int_iterator SVE_INT_ADDV [UNSPEC_SADDV UNSPEC_UADDV]) (define_int_iterator USADDLP [UNSPEC_SADDLP UNSPEC_UADDLP]) @@ -3216,8 +3218,10 @@ (define_int_attr optab [(UNSPEC_ANDF "and") (define_int_attr fmaxmin [(UNSPEC_FMAX "fmax_nan") (UNSPEC_FMAXNM "fmax") + (UNSPEC_FMAXNMV "fmax") (UNSPEC_FMIN "fmin_nan") (UNSPEC_FMINNM "fmin") + (UNSPEC_FMINNMV "fmin") (UNSPEC_COND_FMAXNM "fmax") (UNSPEC_COND_FMINNM "fmin")]) diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi index 589f841ea74..8fd0f8d2fe1 100644 --- a/gcc/doc/md.texi +++ b/gcc/doc/md.texi @@ -5400,6 +5400,14 @@ Find the unsigned minimum/maximum of the elements of a vector. The vector is operand 1, and operand 0 is the scalar result, with mode equal to the mode of the elements of the input vector. 
+@cindex @code{reduc_fmin_scal_@var{m}} instruction pattern +@cindex @code{reduc_fmax_scal_@var{m}} instruction pattern +@item @samp{reduc_fmin_scal_@var{m}}, @samp{reduc_fmax_scal_@var{m}} +Find the floating-point minimum/maximum of the elements of a vector, +using the same rules as @code{fmin@var{m}3} and @code{fmax@var{m}3}. +Operand 1 is a vector of mode @var{m} and operand 0 is the scalar +result, which has mode @code{GET_MODE_INNER (@var{m})}. + @cindex @code{reduc_plus_scal_@var{m}} instruction pattern @item @samp{reduc_plus_scal_@var{m}} Compute the sum of the elements of a vector. The vector is operand 1, and diff --git a/gcc/internal-fn.def b/gcc/internal-fn.def index bb4d8ab8096..acb0dbda556 100644 --- a/gcc/internal-fn.def +++ b/gcc/internal-fn.def @@ -216,6 +216,10 @@ DEF_INTERNAL_SIGNED_OPTAB_FN (REDUC_MAX, ECF_CONST | ECF_NOTHROW, first, reduc_smax_scal, reduc_umax_scal, unary) DEF_INTERNAL_SIGNED_OPTAB_FN (REDUC_MIN, ECF_CONST | ECF_NOTHROW, first, reduc_smin_scal, reduc_umin_scal, unary) +DEF_INTERNAL_OPTAB_FN (REDUC_FMAX, ECF_CONST | ECF_NOTHROW, + reduc_fmax_scal, unary) +DEF_INTERNAL_OPTAB_FN (REDUC_FMIN, ECF_CONST | ECF_NOTHROW, + reduc_fmin_scal, unary) DEF_INTERNAL_OPTAB_FN (REDUC_AND, ECF_CONST | ECF_NOTHROW, reduc_and_scal, unary) DEF_INTERNAL_OPTAB_FN (REDUC_IOR, ECF_CONST | ECF_NOTHROW, diff --git a/gcc/optabs.def b/gcc/optabs.def index e25f4c9a346..cef6054b378 100644 --- a/gcc/optabs.def +++ b/gcc/optabs.def @@ -335,6 +335,8 @@ OPTAB_D (fmax_optab, "fmax$a3") OPTAB_D (fmin_optab, "fmin$a3") /* Vector reduction to a scalar. 
*/ +OPTAB_D (reduc_fmax_scal_optab, "reduc_fmax_scal_$a") +OPTAB_D (reduc_fmin_scal_optab, "reduc_fmin_scal_$a") OPTAB_D (reduc_smax_scal_optab, "reduc_smax_scal_$a") OPTAB_D (reduc_smin_scal_optab, "reduc_smin_scal_$a") OPTAB_D (reduc_plus_scal_optab, "reduc_plus_scal_$a") diff --git a/gcc/testsuite/gcc.dg/vect/vect-fmax-1.c b/gcc/testsuite/gcc.dg/vect/vect-fmax-1.c new file mode 100644 index 00000000000..841ffab5666 --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmax-1.c @@ -0,0 +1,83 @@ +#include "tree-vect.h" + +#ifndef TYPE +#define TYPE float +#define FN __builtin_fmaxf +#endif + +TYPE __attribute__((noipa)) +test (TYPE x, TYPE *ptr, int n) +{ + for (int i = 0; i < n; ++i) + x = FN (x, ptr[i]); + return x; +} + +#define N 128 +#define HALF (N / 2) + +int +main (void) +{ + check_vect (); + + TYPE a[N]; + + for (int i = 0; i < N; ++i) + a[i] = i; + + if (test (-1, a, 1) != 0) + __builtin_abort (); + if (test (-1, a, 64) != 63) + __builtin_abort (); + if (test (-1, a, 65) != 64) + __builtin_abort (); + if (test (-1, a, 66) != 65) + __builtin_abort (); + if (test (-1, a, 67) != 66) + __builtin_abort (); + if (test (-1, a, 128) != 127) + __builtin_abort (); + if (test (127, a, 128) != 127) + __builtin_abort (); + if (test (128, a, 128) != 128) + __builtin_abort (); + + for (int i = 0; i < N; ++i) + a[i] = -i; + + if (test (-60, a, 4) != 0) + __builtin_abort (); + if (test (0, a, 4) != 0) + __builtin_abort (); + if (test (1, a, 4) != 1) + __builtin_abort (); + + for (int i = 0; i < HALF; ++i) + { + a[i] = i; + a[HALF + i] = HALF - i; + } + + if (test (0, a, HALF - 16) != HALF - 17) + __builtin_abort (); + if (test (0, a, HALF - 2) != HALF - 3) + __builtin_abort (); + if (test (0, a, HALF - 1) != HALF - 2) + __builtin_abort (); + if (test (0, a, HALF) != HALF - 1) + __builtin_abort (); + if (test (0, a, HALF + 1) != HALF) + __builtin_abort (); + if (test (0, a, HALF + 2) != HALF) + __builtin_abort (); + if (test (0, a, HALF + 3) != HALF) + __builtin_abort (); + 
if (test (0, a, HALF + 16) != HALF) + __builtin_abort (); + + return 0; +} + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ diff --git a/gcc/testsuite/gcc.dg/vect/vect-fmax-2.c b/gcc/testsuite/gcc.dg/vect/vect-fmax-2.c new file mode 100644 index 00000000000..3d1f64416d5 --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmax-2.c @@ -0,0 +1,7 @@ +#define TYPE double +#define FN __builtin_fmax + +#include "vect-fmax-1.c" + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ diff --git a/gcc/testsuite/gcc.dg/vect/vect-fmax-3.c b/gcc/testsuite/gcc.dg/vect/vect-fmax-3.c new file mode 100644 index 00000000000..f711ed0563e --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmax-3.c @@ -0,0 +1,83 @@ +#include "tree-vect.h" + +void __attribute__((noipa)) +test (double x0, double x1, double *restrict res, double *restrict ptr, int n) +{ + for (int i = 0; i < n; i += 2) + { + x0 = __builtin_fmax (x0, ptr[i + 0]); + x1 = __builtin_fmax (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +#define N 128 +#define HALF (N / 2) + +int +main (void) +{ + check_vect (); + + double res[2], a[N]; + + for (int i = 0; i < N; i += 2) + { + a[i] = i < HALF ? 
i : HALF; + a[i + 1] = i / 8; + } + + test (-1, -1, res, a, 2); + if (res[0] != 0 || res[1] != 0) + __builtin_abort (); + + test (-1, -1, res, a, 6); + if (res[0] != 4 || res[1] != 0) + __builtin_abort (); + + test (-1, -1, res, a, 8); + if (res[0] != 6 || res[1] != 0) + __builtin_abort (); + + test (-1, -1, res, a, 10); + if (res[0] != 8 || res[1] != 1) + __builtin_abort (); + + test (-1, -1, res, a, HALF - 2); + if (res[0] != HALF - 4 || res[1] != HALF / 8 - 1) + __builtin_abort (); + + test (-1, -1, res, a, HALF); + if (res[0] != HALF - 2 || res[1] != HALF / 8 - 1) + __builtin_abort (); + + test (-1, -1, res, a, HALF + 2); + if (res[0] != HALF || res[1] != HALF / 8) + __builtin_abort (); + + test (-1, -1, res, a, HALF + 8); + if (res[0] != HALF || res[1] != HALF / 8) + __builtin_abort (); + + test (-1, -1, res, a, HALF + 10); + if (res[0] != HALF || res[1] != HALF / 8 + 1) + __builtin_abort (); + + test (-1, -1, res, a, N); + if (res[0] != HALF || res[1] != N / 8 - 1) + __builtin_abort (); + + test (HALF + 1, -1, res, a, N); + if (res[0] != HALF + 1 || res[1] != N / 8 - 1) + __builtin_abort (); + + test (HALF + 1, N, res, a, N); + if (res[0] != HALF + 1 || res[1] != N) + __builtin_abort (); + + return 0; +} + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ diff --git a/gcc/testsuite/gcc.dg/vect/vect-fmin-1.c b/gcc/testsuite/gcc.dg/vect/vect-fmin-1.c new file mode 100644 index 00000000000..3d5f843a9db --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmin-1.c @@ -0,0 +1,86 @@ +#include "tree-vect.h" + +#ifndef TYPE +#define TYPE float +#define FN __builtin_fminf +#endif + +TYPE __attribute__((noipa)) +test (TYPE x, TYPE *ptr, int n) +{ + for (int i = 0; i < n; ++i) + x = FN (x, ptr[i]); + return x; +} + +#define N 128 +#define HALF (N / 2) + +int +main (void) +{ + check_vect (); + + TYPE a[N]; + + for (int i = 0; i < N; ++i) + a[i] = -i; + + if (test (1, 
a, 1) != 0) + __builtin_abort (); + if (test (1, a, 64) != -63) + __builtin_abort (); + if (test (1, a, 65) != -64) + __builtin_abort (); + if (test (1, a, 66) != -65) + __builtin_abort (); + if (test (1, a, 67) != -66) + __builtin_abort (); + if (test (1, a, 128) != -127) + __builtin_abort (); + if (test (-127, a, 128) != -127) + __builtin_abort (); + if (test (-128, a, 128) != -128) + __builtin_abort (); + + for (int i = 0; i < N; ++i) + a[i] = i; + + if (test (1, a, 4) != 0) + __builtin_abort (); + if (test (0, a, 4) != 0) + __builtin_abort (); + if (test (-1, a, 4) != -1) + __builtin_abort (); + + for (int i = 0; i < HALF; ++i) + { + a[i] = HALF - i; + a[HALF + i] = i; + } + + if (test (N, a, HALF - 16) != 17) + __builtin_abort (); + if (test (N, a, HALF - 2) != 3) + __builtin_abort (); + if (test (N, a, HALF - 1) != 2) + __builtin_abort (); + if (test (N, a, HALF) != 1) + __builtin_abort (); + if (test (N, a, HALF + 1) != 0) + __builtin_abort (); + if (test (N, a, HALF + 2) != 0) + __builtin_abort (); + if (test (N, a, HALF + 3) != 0) + __builtin_abort (); + if (test (N, a, HALF + 16) != 0) + __builtin_abort (); + + return 0; +} + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ diff --git a/gcc/testsuite/gcc.dg/vect/vect-fmin-2.c b/gcc/testsuite/gcc.dg/vect/vect-fmin-2.c new file mode 100644 index 00000000000..21e45cca55a --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmin-2.c @@ -0,0 +1,9 @@ +#ifndef TYPE +#define TYPE double +#define FN __builtin_fmin +#endif + +#include "vect-fmin-1.c" + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" { target vect_max_reduc } } } */ diff --git 
a/gcc/testsuite/gcc.dg/vect/vect-fmin-3.c b/gcc/testsuite/gcc.dg/vect/vect-fmin-3.c new file mode 100644 index 00000000000..cc38bf43909 --- /dev/null +++ b/gcc/testsuite/gcc.dg/vect/vect-fmin-3.c @@ -0,0 +1,83 @@ +#include "tree-vect.h" + +void __attribute__((noipa)) +test (double x0, double x1, double *restrict res, double *restrict ptr, int n) +{ + for (int i = 0; i < n; i += 2) + { + x0 = __builtin_fmin (x0, ptr[i + 0]); + x1 = __builtin_fmin (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +#define N 128 +#define HALF (N / 2) + +int +main (void) +{ + check_vect (); + + double res[2], a[N]; + + for (int i = 0; i < N; i += 2) + { + a[i] = i < HALF ? HALF - i : 0; + a[i + 1] = -i / 8; + } + + test (N, N, res, a, 2); + if (res[0] != HALF || res[1] != 0) + __builtin_abort (); + + test (N, N, res, a, 6); + if (res[0] != HALF - 4 || res[1] != 0) + __builtin_abort (); + + test (N, N, res, a, 8); + if (res[0] != HALF - 6 || res[1] != 0) + __builtin_abort (); + + test (N, N, res, a, 10); + if (res[0] != HALF - 8 || res[1] != -1) + __builtin_abort (); + + test (N, N, res, a, HALF - 2); + if (res[0] != 4 || res[1] != -HALF / 8 + 1) + __builtin_abort (); + + test (N, N, res, a, HALF); + if (res[0] != 2 || res[1] != -HALF / 8 + 1) + __builtin_abort (); + + test (N, N, res, a, HALF + 2); + if (res[0] != 0 || res[1] != -HALF / 8) + __builtin_abort (); + + test (N, N, res, a, HALF + 8); + if (res[0] != 0 || res[1] != -HALF / 8) + __builtin_abort (); + + test (N, N, res, a, HALF + 10); + if (res[0] != 0 || res[1] != -HALF / 8 - 1) + __builtin_abort (); + + test (N, N, res, a, N); + if (res[0] != 0 || res[1] != -N / 8 + 1) + __builtin_abort (); + + test (-1, N, res, a, N); + if (res[0] != -1 || res[1] != -N / 8 + 1) + __builtin_abort (); + + test (-1, -N / 8, res, a, N); + if (res[0] != -1 || res[1] != -N / 8) + __builtin_abort (); + + return 0; +} + +/* { dg-final { scan-tree-dump "Detected reduction" "vect" } } */ +/* { dg-final { scan-tree-dump "LOOP VECTORIZED" "vect" 
{ target vect_max_reduc } } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/fmaxnm_1.c b/gcc/testsuite/gcc.target/aarch64/fmaxnm_1.c new file mode 100644 index 00000000000..40c36c7a3dc --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/fmaxnm_1.c @@ -0,0 +1,24 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +#pragma GCC target "+nosve" + +float +f1 (float x, float *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmaxf (x, ptr[i]); + return x; +} + +double +f2 (double x, double *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmax (x, ptr[i]); + return x; +} + +/* { dg-final { scan-assembler-times {\tfmaxnm\tv[0-9]+\.4s, v[0-9]+\.4s, v[0-9]+\.4s\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfmaxnmv\ts[0-9]+, v[0-9]+\.4s\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfmaxnm\tv[0-9]+\.2d, v[0-9]+\.2d, v[0-9]+\.2d\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfmaxnmp\td[0-9]+, v[0-9]+\.2d\n} 1 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/fmaxnm_2.c b/gcc/testsuite/gcc.target/aarch64/fmaxnm_2.c new file mode 100644 index 00000000000..6e48ac8eeee --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/fmaxnm_2.c @@ -0,0 +1,20 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +#pragma GCC target "+nosve" + +void +f (double *restrict res, double *restrict ptr) +{ + double x0 = res[0]; + double x1 = res[1]; + for (int i = 0; i < 128; i += 2) + { + x0 = __builtin_fmax (x0, ptr[i + 0]); + x1 = __builtin_fmax (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +/* { dg-final { scan-assembler-times {\tfmaxnm\tv[0-9]+\.2d, v[0-9]+\.2d, v[0-9]+\.2d\n} 1 } } */ +/* { dg-final { scan-assembler {\tstr\tq[0-9]+, \[x0\]\n} } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/fminnm_1.c b/gcc/testsuite/gcc.target/aarch64/fminnm_1.c new file mode 100644 index 00000000000..1cf372b2a6b --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/fminnm_1.c @@ -0,0 +1,24 @@ +/* { dg-options "-O2 
-ftree-vectorize -fno-vect-cost-model" } */ + +#pragma GCC target "+nosve" + +float +f1 (float x, float *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fminf (x, ptr[i]); + return x; +} + +double +f2 (double x, double *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmin (x, ptr[i]); + return x; +} + +/* { dg-final { scan-assembler-times {\tfminnm\tv[0-9]+\.4s, v[0-9]+\.4s, v[0-9]+\.4s\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfminnmv\ts[0-9]+, v[0-9]+\.4s\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfminnm\tv[0-9]+\.2d, v[0-9]+\.2d, v[0-9]+\.2d\n} 1 } } */ +/* { dg-final { scan-assembler-times {\tfminnmp\td[0-9]+, v[0-9]+\.2d\n} 1 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/fminnm_2.c b/gcc/testsuite/gcc.target/aarch64/fminnm_2.c new file mode 100644 index 00000000000..543e1884051 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/fminnm_2.c @@ -0,0 +1,20 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +#pragma GCC target "+nosve" + +void +f (double *restrict res, double *restrict ptr) +{ + double x0 = res[0]; + double x1 = res[1]; + for (int i = 0; i < 128; i += 2) + { + x0 = __builtin_fmin (x0, ptr[i + 0]); + x1 = __builtin_fmin (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +/* { dg-final { scan-assembler-times {\tfminnm\tv[0-9]+\.2d, v[0-9]+\.2d, v[0-9]+\.2d\n} 1 } } */ +/* { dg-final { scan-assembler {\tstr\tq[0-9]+, \[x0\]\n} } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_2.c b/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_2.c new file mode 100644 index 00000000000..ee3cdc20f96 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_2.c @@ -0,0 +1,22 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +float +f1 (float x, float *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmaxf (x, ptr[i]); + return x; +} + +double +f2 (double x, double *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmax (x, ptr[i]); + return x; +} + 
+/* { dg-final { scan-assembler {\twhilelo\t(p[0-7])\.s,.*\tfmaxnm\tz[0-9]+\.s, \1/m, z[0-9]+\.s, z[0-9]+\.s\n} } } */ +/* { dg-final { scan-assembler-times {\tfmaxnmv\ts[0-9]+, p[0-7], z[0-9]+\.s\n} 1 } } */ +/* { dg-final { scan-assembler {\twhilelo\t(p[0-7])\.d,.*\tfmaxnm\tz[0-9]+\.d, \1/m, z[0-9]+\.d, z[0-9]+\.d\n} } } */ +/* { dg-final { scan-assembler-times {\tfmaxnmv\td[0-9]+, p[0-7], z[0-9]+\.d\n} 1 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_3.c b/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_3.c new file mode 100644 index 00000000000..a8eee0f4b26 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/fmaxnm_3.c @@ -0,0 +1,18 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +void +f (double *restrict res, double *restrict ptr) +{ + double x0 = res[0]; + double x1 = res[1]; + for (int i = 0; i < 128; i += 2) + { + x0 = __builtin_fmax (x0, ptr[i + 0]); + x1 = __builtin_fmax (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +/* { dg-final { scan-assembler {\twhilelo\t(p[0-7])\.d,.*\tfmaxnm\tz[0-9]+\.d, \1/m, z[0-9]+\.d, z[0-9]+\.d\n} } } */ +/* { dg-final { scan-assembler-times {\tfmaxnmv\td[0-9]+, p[0-7], z[0-9]+\.d\n} 2 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fminnm_2.c b/gcc/testsuite/gcc.target/aarch64/sve/fminnm_2.c new file mode 100644 index 00000000000..10aced05f1a --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/fminnm_2.c @@ -0,0 +1,22 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +float +f1 (float x, float *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fminf (x, ptr[i]); + return x; +} + +double +f2 (double x, double *ptr) +{ + for (int i = 0; i < 128; ++i) + x = __builtin_fmin (x, ptr[i]); + return x; +} + +/* { dg-final { scan-assembler {\twhilelo\t(p[0-7])\.s,.*\tfminnm\tz[0-9]+\.s, \1/m, z[0-9]+\.s, z[0-9]+\.s\n} } } */ +/* { dg-final { scan-assembler-times {\tfminnmv\ts[0-9]+, p[0-7], z[0-9]+\.s\n} 1 } } */ +/* { dg-final { 
scan-assembler {\twhilelo\t(p[0-7])\.d,.*\tfminnm\tz[0-9]+\.d, \1/m, z[0-9]+\.d, z[0-9]+\.d\n} } } */ +/* { dg-final { scan-assembler-times {\tfminnmv\td[0-9]+, p[0-7], z[0-9]+\.d\n} 1 } } */ diff --git a/gcc/testsuite/gcc.target/aarch64/sve/fminnm_3.c b/gcc/testsuite/gcc.target/aarch64/sve/fminnm_3.c new file mode 100644 index 00000000000..80ad0160249 --- /dev/null +++ b/gcc/testsuite/gcc.target/aarch64/sve/fminnm_3.c @@ -0,0 +1,18 @@ +/* { dg-options "-O2 -ftree-vectorize -fno-vect-cost-model" } */ + +void +f (double *restrict res, double *restrict ptr) +{ + double x0 = res[0]; + double x1 = res[1]; + for (int i = 0; i < 128; i += 2) + { + x0 = __builtin_fmin (x0, ptr[i + 0]); + x1 = __builtin_fmin (x1, ptr[i + 1]); + } + res[0] = x0; + res[1] = x1; +} + +/* { dg-final { scan-assembler {\twhilelo\t(p[0-7])\.d,.*\tfminnm\tz[0-9]+\.d, \1/m, z[0-9]+\.d, z[0-9]+\.d\n} } } */ +/* { dg-final { scan-assembler-times {\tfminnmv\td[0-9]+, p[0-7], z[0-9]+\.d\n} 2 } } */ diff --git a/gcc/tree-vect-loop.c b/gcc/tree-vect-loop.c index cae895a88f2..726cda05e7a 100644 --- a/gcc/tree-vect-loop.c +++ b/gcc/tree-vect-loop.c @@ -3185,9 +3185,22 @@ reduction_fn_for_scalar_code (code_helper code, internal_fn *reduc_fn) return true; default: - break; - } - return false; + return false; + } + else + switch (combined_fn (code)) + { + CASE_CFN_FMAX: + *reduc_fn = IFN_REDUC_FMAX; + return true; + + CASE_CFN_FMIN: + *reduc_fn = IFN_REDUC_FMIN; + return true; + + default: + return false; + } } /* If there is a neutral value X such that a reduction would not be affected @@ -3223,9 +3236,18 @@ neutral_op_for_reduction (tree scalar_type, code_helper code, return initial_value; default: - break; + return NULL_TREE; + } + else + switch (combined_fn (code)) + { + CASE_CFN_FMIN: + CASE_CFN_FMAX: + return initial_value; + + default: + return NULL_TREE; } - return NULL_TREE; } /* Error reporting helper for vect_is_simple_reduction below. 
GIMPLE statement @@ -3255,9 +3277,18 @@ needs_fold_left_reduction_p (tree type, code_helper code) return false; default: - break; + return !flag_associative_math; + } + else + switch (combined_fn (code)) + { + CASE_CFN_FMIN: + CASE_CFN_FMAX: + return false; + + default: + return !flag_associative_math; } - return !flag_associative_math; } if (INTEGRAL_TYPE_P (type))