aarch64: Lower vcombine to GIMPLE

Message ID AS8PR08MB6678EAB7EC0876AC754FA1BBF4A59@AS8PR08MB6678.eurprd08.prod.outlook.com
State New
Series aarch64: Lower vcombine to GIMPLE

Commit Message

Andrew Carlotti June 7, 2022, 5:23 p.m. UTC
  Hi all,

This lowers vcombine intrinsics to a GIMPLE vector constructor, which enables better optimisation during GIMPLE passes.
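
For example, with the fold in place a function like the one below (essentially
the new test case) has its vcombine exposed as the GIMPLE vector constructor
c = {a, b}, which later passes can reduce to a single lane extract from b:

  #include <arm_neon.h>

  int32_t foo (int32x2_t a, int32x2_t b)
  {
    /* Folded to the GIMPLE constructor c = {a, b} by this patch.  */
    int32x4_t c = vcombine_s32 (a, b);
    /* Lane 3 of c is lane 1 of b, so this can become a single umov.  */
    return vgetq_lane_s32 (c, 3);
  }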

Bootstrapped and tested on aarch64-none-linux-gnu, and tested for aarch64_be-none-linux-gnu via cross-compilation.


gcc/

	* config/aarch64/aarch64-builtins.cc
	(aarch64_general_gimple_fold_builtin): Add combine.

gcc/testsuite/

	* gcc.target/aarch64/advsimd-intrinsics/combine.c:
	New test.

---
  

Comments

Richard Sandiford June 10, 2022, 7 a.m. UTC | #1
Andrew Carlotti via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> Hi all,
>
> This lowers vcombine intrinsics to a GIMPLE vector constructor, which enables better optimisation during GIMPLE passes.
>
> Bootstrapped and tested on aarch64-none-linux-gnu, and tested for aarch64_be-none-linux-gnu via cross-compilation.
>
>
> gcc/
>
> 	* config/aarch64/aarch64-builtins.cc
> 	(aarch64_general_gimple_fold_builtin): Add combine.
>
> gcc/testsuite/
>
> 	* gcc.target/aarch64/advsimd-intrinsics/combine.c:
> 	New test.
>
> ---
>
> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> index 5217dbdb2ac78bba0a669d22af6d769d1fe91a3d..9d52fb8c5a48c9b743defb340a85fb20a1c8f014 100644
> --- a/gcc/config/aarch64/aarch64-builtins.cc
> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> @@ -2827,6 +2827,18 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
>         gimple_call_set_lhs (new_stmt, gimple_call_lhs (stmt));
>         break;
>
> +     BUILTIN_VDC (BINOP, combine, 0, AUTO_FP)
> +     BUILTIN_VD_I (BINOPU, combine, 0, NONE)
> +     BUILTIN_VDC_P (BINOPP, combine, 0, NONE)
> +       {
> +         if (BYTES_BIG_ENDIAN)
> +           std::swap(args[0], args[1]);

We probably shouldn't do this swap in-place, since args refers directly
to the gimple statement.

> +         tree ret_type = TREE_TYPE (gimple_call_lhs (stmt));
> +         tree ctor = build_constructor_va (ret_type, 2, NULL_TREE, args[0], NULL_TREE, args[1]);

Minor formatting nit: lines should be under 80 chars.
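
Something along these lines (an untested sketch, using local copies arg0/arg1)
would address both points:

	  /* Copy the operands so the call statement itself isn't modified.  */
	  tree arg0 = args[0];
	  tree arg1 = args[1];
	  if (BYTES_BIG_ENDIAN)
	    std::swap (arg0, arg1);
	  tree ret_type = TREE_TYPE (gimple_call_lhs (stmt));
	  tree ctor = build_constructor_va (ret_type, 2, NULL_TREE, arg0,
					    NULL_TREE, arg1);
	  new_stmt = gimple_build_assign (gimple_call_lhs (stmt), ctor);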

Looks good otherwise, thanks, and sorry for the slow review.

Richard

> +         new_stmt = gimple_build_assign (gimple_call_lhs (stmt), ctor);
> +       }
> +       break;
> +
>       /*lower store and load neon builtins to gimple.  */
>       BUILTIN_VALL_F16 (LOAD1, ld1, 0, LOAD)
>       BUILTIN_VDQ_I (LOAD1_U, ld1, 0, LOAD)
> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..d08faf7a4a160a1e83428ed9b270731bbf7b8c8a
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
> @@ -0,0 +1,18 @@
> +/* { dg-do compile { target { aarch64*-*-* } } } */
> +/* { dg-final { check-function-bodies "**" "" {-O[^0]} } } */
> +/* { dg-skip-if "" { *-*-* } { "-fno-fat-lto-objects" } } */
> +
> +#include <arm_neon.h>
> +
> +/*
> +** foo:
> +**     umov    w0, v1\.s\[1\]
> +**     ret
> +*/
> +
> +int32_t foo (int32x2_t a, int32x2_t b)
> +{
> +  int32x4_t c = vcombine_s32(a, b);
> +  return vgetq_lane_s32(c, 3);
> +}
> +
  
Richard Biener June 13, 2022, 10:51 a.m. UTC | #2
On Tue, Jun 7, 2022 at 7:24 PM Andrew Carlotti via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> Hi all,
>
> This lowers vcombine intrinsics to a GIMPLE vector constructor, which enables better optimisation during GIMPLE passes.
>
> Bootstrapped and tested on aarch64-none-linux-gnu, and tested for aarch64_be-none-linux-gnu via cross-compilation.
>
>
> gcc/
>
>         * config/aarch64/aarch64-builtins.cc
>         (aarch64_general_gimple_fold_builtin): Add combine.
>
> gcc/testsuite/
>
>         * gcc.target/aarch64/advsimd-intrinsics/combine.c:
>         New test.
>
> ---
>
> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> index 5217dbdb2ac78bba0a669d22af6d769d1fe91a3d..9d52fb8c5a48c9b743defb340a85fb20a1c8f014 100644
> --- a/gcc/config/aarch64/aarch64-builtins.cc
> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> @@ -2827,6 +2827,18 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
>         gimple_call_set_lhs (new_stmt, gimple_call_lhs (stmt));
>         break;
>
> +     BUILTIN_VDC (BINOP, combine, 0, AUTO_FP)
> +     BUILTIN_VD_I (BINOPU, combine, 0, NONE)
> +     BUILTIN_VDC_P (BINOPP, combine, 0, NONE)
> +       {
> +         if (BYTES_BIG_ENDIAN)
> +           std::swap(args[0], args[1]);
> +         tree ret_type = TREE_TYPE (gimple_call_lhs (stmt));
> +         tree ctor = build_constructor_va (ret_type, 2, NULL_TREE, args[0], NULL_TREE, args[1]);
> +         new_stmt = gimple_build_assign (gimple_call_lhs (stmt), ctor);

the LHS might be NULL (that seems to be a general issue in this
function); x86 simply leaves the builtin alone in that case.
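
E.g. an (untested) guard along these lines, using a local "lhs", would leave
the call untouched when the result is unused:

	  tree lhs = gimple_call_lhs (stmt);
	  if (!lhs)
	    /* No result to replace; leave the builtin call alone.  */
	    break;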

> +       }
> +       break;
> +
>       /*lower store and load neon builtins to gimple.  */
>       BUILTIN_VALL_F16 (LOAD1, ld1, 0, LOAD)
>       BUILTIN_VDQ_I (LOAD1_U, ld1, 0, LOAD)
> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..d08faf7a4a160a1e83428ed9b270731bbf7b8c8a
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
> @@ -0,0 +1,18 @@
> +/* { dg-do compile { target { aarch64*-*-* } } } */
> +/* { dg-final { check-function-bodies "**" "" {-O[^0]} } } */
> +/* { dg-skip-if "" { *-*-* } { "-fno-fat-lto-objects" } } */
> +
> +#include <arm_neon.h>
> +
> +/*
> +** foo:
> +**     umov    w0, v1\.s\[1\]
> +**     ret
> +*/
> +
> +int32_t foo (int32x2_t a, int32x2_t b)
> +{
> +  int32x4_t c = vcombine_s32(a, b);
> +  return vgetq_lane_s32(c, 3);
> +}
> +
  

Patch

diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
index 5217dbdb2ac78bba0a669d22af6d769d1fe91a3d..9d52fb8c5a48c9b743defb340a85fb20a1c8f014 100644
--- a/gcc/config/aarch64/aarch64-builtins.cc
+++ b/gcc/config/aarch64/aarch64-builtins.cc
@@ -2827,6 +2827,18 @@  aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
        gimple_call_set_lhs (new_stmt, gimple_call_lhs (stmt));
        break;

+     BUILTIN_VDC (BINOP, combine, 0, AUTO_FP)
+     BUILTIN_VD_I (BINOPU, combine, 0, NONE)
+     BUILTIN_VDC_P (BINOPP, combine, 0, NONE)
+       {
+         if (BYTES_BIG_ENDIAN)
+           std::swap(args[0], args[1]);
+         tree ret_type = TREE_TYPE (gimple_call_lhs (stmt));
+         tree ctor = build_constructor_va (ret_type, 2, NULL_TREE, args[0], NULL_TREE, args[1]);
+         new_stmt = gimple_build_assign (gimple_call_lhs (stmt), ctor);
+       }
+       break;
+
      /*lower store and load neon builtins to gimple.  */
      BUILTIN_VALL_F16 (LOAD1, ld1, 0, LOAD)
      BUILTIN_VDQ_I (LOAD1_U, ld1, 0, LOAD)
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
new file mode 100644
index 0000000000000000000000000000000000000000..d08faf7a4a160a1e83428ed9b270731bbf7b8c8a
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/combine.c
@@ -0,0 +1,18 @@ 
+/* { dg-do compile { target { aarch64*-*-* } } } */
+/* { dg-final { check-function-bodies "**" "" {-O[^0]} } } */
+/* { dg-skip-if "" { *-*-* } { "-fno-fat-lto-objects" } } */
+
+#include <arm_neon.h>
+
+/*
+** foo:
+**     umov    w0, v1\.s\[1\]
+**     ret
+*/
+
+int32_t foo (int32x2_t a, int32x2_t b)
+{
+  int32x4_t c = vcombine_s32(a, b);
+  return vgetq_lane_s32(c, 3);
+}
+