[v2,1/2] aarch64: Don't return invalid GIMPLE assign statements

Message ID Ys2HV59rnPbTMhBZ@e124511.cambridge.arm.com
State Committed
Commit e9cad1e582950d129aba3465b65c2231f94bb6c0
Series [v2,1/2] aarch64: Don't return invalid GIMPLE assign statements

Commit Message

Andrew Carlotti July 12, 2022, 2:38 p.m. UTC
  aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
function call is null before converting it to an assign statement. To avoid
returning an invalid GIMPLE statement in this case, we instead assign the
expression result to a new (unused) variable.

This change only affects code that:
1) Calls an intrinsic function that has no side effects;
2) Does not use or store the value returned by the intrinsic;
3) Uses parameters that prevent the front-end eliminating the call prior to
gimplification.

The ICE is unlikely to have occurred in the wild, as it relies on the presence
of a redundant intrinsic call.

gcc/ChangeLog:

 * config/aarch64/aarch64-builtins.cc
 (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.

gcc/testsuite/ChangeLog:

 * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.

---
  

Comments

Richard Biener July 13, 2022, 7:41 a.m. UTC | #1
On Tue, Jul 12, 2022 at 4:38 PM Andrew Carlotti <andrew.carlotti@arm.com> wrote:
>
> aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
> function call is null before converting it to an assign statement. To avoid
> returning an invalid GIMPLE statement in this case, we instead assign the
> expression result to a new (unused) variable.
>
> This change only affects code that:
> 1) Calls an intrinsic function that has no side effects;
> 2) Does not use or store the value returned by the intrinsic;
> 3) Uses parameters that prevent the front-end eliminating the call prior to
> gimplification.
>
> The ICE is unlikely to have occurred in the wild, as it relies on the presence
> of a redundant intrinsic call.

Other targets usually simply refrain from folding intrinsic calls with no LHS.
Another option is to just drop it on the floor if it does not have any
side-effects which for the gimple_fold_builtin hook means folding it to
a GIMPLE_NOP (gimple_build_nop ()).

> gcc/ChangeLog:
>
>  * config/aarch64/aarch64-builtins.cc
>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
>
> gcc/testsuite/ChangeLog:
>
>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
>
> ---
>
> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
> --- a/gcc/config/aarch64/aarch64-builtins.cc
> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
>      default:
>        break;
>      }
> +
> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
> +     created an assign statement with a null lhs, then fix this by assigning
> +     to a new (and subsequently unused) variable. */
> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
> +    {
> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
> +      gimple_assign_set_lhs (new_stmt, new_lhs);
> +    }
> +
>    return new_stmt;
>  }
>
> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> new file mode 100644
> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
> --- /dev/null
> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> @@ -0,0 +1,9 @@
> +/* { dg-do compile { target { aarch64*-*-* } } } */
> +
> +#include <arm_neon.h>
> +
> +int8_t *bar();
> +
> +void foo() {
> +  __builtin_aarch64_ld1v16qi(bar());
> +}
  
Richard Sandiford July 13, 2022, 8:10 a.m. UTC | #2
Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> On Tue, Jul 12, 2022 at 4:38 PM Andrew Carlotti <andrew.carlotti@arm.com> wrote:
>>
>> aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
>> function call is null before converting it to an assign statement. To avoid
>> returning an invalid GIMPLE statement in this case, we instead assign the
>> expression result to a new (unused) variable.
>>
>> This change only affects code that:
>> 1) Calls an intrinsic function that has no side effects;
>> 2) Does not use or store the value returned by the intrinsic;
>> 3) Uses parameters that prevent the front-end eliminating the call prior to
>> gimplification.
>>
>> The ICE is unlikely to have occurred in the wild, as it relies on the presence
>> of a redundant intrinsic call.
>
> Other targets usually simply refrain from folding intrinsic calls with no LHS.
> Another option is to just drop it on the floor if it does not have any
> side-effects which for the gimple_fold_builtin hook means folding it to
> a GIMPLE_NOP (gimple_build_nop ()).

Sorry, I just pushed the patch before seeing this.

I guess the problem with refraining from folding calls with no lhs
is that it has to be done on a per-function basis.  (E.g. stores
should still be folded.)  It then becomes something that we need
to remember for each individual call.  E.g. ix86_gimple_fold_builtin
seems to have three different pieces of code for handling null lhses,
even with its heavy use of gotos.
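Roughly, each such case would need something along these lines (hypothetical
case label, just a sketch):

    case AARCH64_SIMD_BUILTIN_SOME_FOLDABLE_FN:	/* hypothetical */
      if (!gimple_call_lhs (stmt))
	break;	/* result unused: leave the call alone */
      /* ... build new_stmt as before ...  */
      break;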

So a nice thing about the current patch is that it handles all this
in one place only.

Thanks,
Richard

>> gcc/ChangeLog:
>>
>>  * config/aarch64/aarch64-builtins.cc
>>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
>>
>> gcc/testsuite/ChangeLog:
>>
>>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
>>
>> ---
>>
>> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
>> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
>> --- a/gcc/config/aarch64/aarch64-builtins.cc
>> +++ b/gcc/config/aarch64/aarch64-builtins.cc
>> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
>>      default:
>>        break;
>>      }
>> +
>> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
>> +     created an assign statement with a null lhs, then fix this by assigning
>> +     to a new (and subsequently unused) variable. */
>> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
>> +    {
>> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
>> +      gimple_assign_set_lhs (new_stmt, new_lhs);
>> +    }
>> +
>>    return new_stmt;
>>  }
>>
>> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
>> new file mode 100644
>> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
>> --- /dev/null
>> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
>> @@ -0,0 +1,9 @@
>> +/* { dg-do compile { target { aarch64*-*-* } } } */
>> +
>> +#include <arm_neon.h>
>> +
>> +int8_t *bar();
>> +
>> +void foo() {
>> +  __builtin_aarch64_ld1v16qi(bar());
>> +}
  
Andrew Carlotti July 13, 2022, 10:50 a.m. UTC | #3
On Wed, Jul 13, 2022 at 09:10:25AM +0100, Richard Sandiford wrote:
> Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > On Tue, Jul 12, 2022 at 4:38 PM Andrew Carlotti <andrew.carlotti@arm.com> wrote:
> >>
> >> aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
> >> function call is null before converting it to an assign statement. To avoid
> >> returning an invalid GIMPLE statement in this case, we instead assign the
> >> expression result to a new (unused) variable.
> >>
> >> This change only affects code that:
> >> 1) Calls an intrinsic function that has no side effects;
> >> 2) Does not use or store the value returned by the intrinsic;
> >> 3) Uses parameters that prevent the front-end eliminating the call prior to
> >> gimplification.
> >>
> >> The ICE is unlikely to have occurred in the wild, as it relies on the presence
> >> of a redundant intrinsic call.
> >
> > Other targets usually simply refrain from folding intrinsic calls with no LHS.
> > Another option is to just drop it on the floor if it does not have any
> > side-effects which for the gimple_fold_builtin hook means folding it to
> > a GIMPLE_NOP (gimple_build_nop ()).
> 
> Sorry, I just pushed the patch before seeing this.
> 
> I guess the problem with refraining from folding calls with no lhs
> is that it has to be done on a per-function basis.  (E.g. stores
> should still be folded.)  It then becomes something that we need
> to remember for each individual call.  E.g. ix86_gimple_fold_builtin
> seems to have three different pieces of code for handling null lhses,
> even with its heavy use of gotos.
> 
> So a nice thing about the current patch is that it handles all this
> in one place only.
> 
> Thanks,
> Richard

I specifically wanted to avoid not folding the call, because always
folding means that the builtin doesn't need to be implemented anywhere
else (which isn't relevant here, but may become relevant when folding
newly defined builtins in the future).

I considered dropping the statement, but I wasn't sure at the time that
I could do it safely. I could send a patch to instead replace new_stmt
with a GIMPLE_NOP.
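Something like this (untested sketch, reusing the check this patch added):

  if (new_stmt
      && is_gimple_assign (new_stmt)
      && !gimple_assign_lhs (new_stmt))
    new_stmt = gimple_build_nop ();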

> >> gcc/ChangeLog:
> >>
> >>  * config/aarch64/aarch64-builtins.cc
> >>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
> >>
> >> gcc/testsuite/ChangeLog:
> >>
> >>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
> >>
> >> ---
> >>
> >> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> >> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
> >> --- a/gcc/config/aarch64/aarch64-builtins.cc
> >> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> >> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
> >>      default:
> >>        break;
> >>      }
> >> +
> >> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
> >> +     created an assign statement with a null lhs, then fix this by assigning
> >> +     to a new (and subsequently unused) variable. */
> >> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
> >> +    {
> >> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
> >> +      gimple_assign_set_lhs (new_stmt, new_lhs);
> >> +    }
> >> +
> >>    return new_stmt;
> >>  }
> >>
> >> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> >> new file mode 100644
> >> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
> >> --- /dev/null
> >> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> >> @@ -0,0 +1,9 @@
> >> +/* { dg-do compile { target { aarch64*-*-* } } } */
> >> +
> >> +#include <arm_neon.h>
> >> +
> >> +int8_t *bar();
> >> +
> >> +void foo() {
> >> +  __builtin_aarch64_ld1v16qi(bar());
> >> +}
  
Richard Biener July 13, 2022, 12:32 p.m. UTC | #4
On Wed, Jul 13, 2022 at 12:50 PM Andrew Carlotti
<andrew.carlotti@arm.com> wrote:
>
> On Wed, Jul 13, 2022 at 09:10:25AM +0100, Richard Sandiford wrote:
> > Richard Biener via Gcc-patches <gcc-patches@gcc.gnu.org> writes:
> > > On Tue, Jul 12, 2022 at 4:38 PM Andrew Carlotti <andrew.carlotti@arm.com> wrote:
> > >>
> > >> aarch64_general_gimple_fold_builtin doesn't check whether the LHS of a
> > >> function call is null before converting it to an assign statement. To avoid
> > >> returning an invalid GIMPLE statement in this case, we instead assign the
> > >> expression result to a new (unused) variable.
> > >>
> > >> This change only affects code that:
> > >> 1) Calls an intrinsic function that has no side effects;
> > >> 2) Does not use or store the value returned by the intrinsic;
> > >> 3) Uses parameters that prevent the front-end eliminating the call prior to
> > >> gimplification.
> > >>
> > >> The ICE is unlikely to have occurred in the wild, as it relies on the presence
> > >> of a redundant intrinsic call.
> > >
> > > Other targets usually simply refrain from folding intrinsic calls with no LHS.
> > > Another option is to just drop it on the floor if it does not have any
> > > side-effects which for the gimple_fold_builtin hook means folding it to
> > > a GIMPLE_NOP (gimple_build_nop ()).
> >
> > Sorry, I just pushed the patch before seeing this.
> >
> > I guess the problem with refraining from folding calls with no lhs
> > is that it has to be done on a per-function basis.  (E.g. stores
> > should still be folded.)  It then becomes something that we need
> > to remember for each individual call.  E.g. ix86_gimple_fold_builtin
> > seems to have three different pieces of code for handling null lhses,
> > even with its heavy use of gotos.
> >
> > So a nice thing about the current patch is that it handles all this
> > in one place only.

True, I don't much like the x86 way but then who cares about
intrinsic uses without a LHS ...

> > Thanks,
> > Richard
>
> I specifically wanted to avoid not folding the call, because always
> folding means that the builtin doesn't need to be implemented anywhere
> else (which isn't relevant here, but may become relevant when folding
> newly defined builtins in the future).
>
> I considered dropping the statement, but I wasn't sure at the time that
> I could do it safely. I could send a patch to instead replace new_stmt
> with a GIMPLE_NOP.

If you can be sure there's no side-effect on the RHS then I think
I'd prefer that over allocating an SSA name for something that's
going to be DCEd anyway.

Richard.

> > >> gcc/ChangeLog:
> > >>
> > >>  * config/aarch64/aarch64-builtins.cc
> > >>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
> > >>
> > >> gcc/testsuite/ChangeLog:
> > >>
> > >>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
> > >>
> > >> ---
> > >>
> > >> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> > >> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
> > >> --- a/gcc/config/aarch64/aarch64-builtins.cc
> > >> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> > >> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
> > >>      default:
> > >>        break;
> > >>      }
> > >> +
> > >> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
> > >> +     created an assign statement with a null lhs, then fix this by assigning
> > >> +     to a new (and subsequently unused) variable. */
> > >> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
> > >> +    {
> > >> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
> > >> +      gimple_assign_set_lhs (new_stmt, new_lhs);
> > >> +    }
> > >> +
> > >>    return new_stmt;
> > >>  }
> > >>
> > >> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> > >> new file mode 100644
> > >> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
> > >> --- /dev/null
> > >> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> > >> @@ -0,0 +1,9 @@
> > >> +/* { dg-do compile { target { aarch64*-*-* } } } */
> > >> +
> > >> +#include <arm_neon.h>
> > >> +
> > >> +int8_t *bar();
> > >> +
> > >> +void foo() {
> > >> +  __builtin_aarch64_ld1v16qi(bar());
> > >> +}
  
Andrew Carlotti July 15, 2022, 2:18 p.m. UTC | #5
On Wed, Jul 13, 2022 at 02:32:16PM +0200, Richard Biener wrote:
> On Wed, Jul 13, 2022 at 12:50 PM Andrew Carlotti
> <andrew.carlotti@arm.com> wrote:
> > I specifically wanted to avoid not folding the call, because always
> > folding means that the builtin doesn't need to be implemented anywhere
> > else (which isn't relevant here, but may become relevant when folding
> > newly defined builtins in the future).
> >
> > I considered dropping the statement, but I wasn't sure at the time that
> > I could do it safely. I could send a patch to instead replace new_stmt
> > with a GIMPLE_NOP.
> 
> If you can be sure there's no side-effect on the RHS then I think
> I'd prefer that over allocating an SSA name for something that's
> going to be DCEd anyway.
> 
> Richard.

I discussed this off-list with Richard Sandiford, and we agreed that it
would be better to leave this code as it is.

The only time this form is likely to arise is if the statement has
side-effects (e.g. reading from volatile memory or triggering
floating-point exceptions), in which case we can't just replace it with
a nop. On the other hand, in the event that someone has written an
entirely redundant statement, then it will quickly get eliminated
anyway.
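As a purely hypothetical illustration of the side-effect case (not a reduced
test case):

  #include <arm_neon.h>

  void
  poll_status (volatile int8_t *mmio)
  {
    /* The loaded value is unused, but the read from device memory is
       itself the point, so dropping the statement would not be safe.  */
    vld1q_s8 ((const int8_t *) mmio);
  }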

Adding code to distinguish between the two cases here, or to handle
the hard case, is unnecessary and wouldn't be worthwhile.

> > > >> gcc/ChangeLog:
> > > >>
> > > >>  * config/aarch64/aarch64-builtins.cc
> > > >>  (aarch64_general_gimple_fold_builtin): Add fixup for invalid GIMPLE.
> > > >>
> > > >> gcc/testsuite/ChangeLog:
> > > >>
> > > >>  * gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c: New test.
> > > >>
> > > >> ---
> > > >>
> > > >> diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
> > > >> index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
> > > >> --- a/gcc/config/aarch64/aarch64-builtins.cc
> > > >> +++ b/gcc/config/aarch64/aarch64-builtins.cc
> > > >> @@ -3022,6 +3022,16 @@ aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
> > > >>      default:
> > > >>        break;
> > > >>      }
> > > >> +
> > > >> +  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
> > > >> +     created an assign statement with a null lhs, then fix this by assigning
> > > >> +     to a new (and subsequently unused) variable. */
> > > >> +  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
> > > >> +    {
> > > >> +      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
> > > >> +      gimple_assign_set_lhs (new_stmt, new_lhs);
> > > >> +    }
> > > >> +
> > > >>    return new_stmt;
> > > >>  }
> > > >>
> > > >> diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> > > >> new file mode 100644
> > > >> index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
> > > >> --- /dev/null
> > > >> +++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
> > > >> @@ -0,0 +1,9 @@
> > > >> +/* { dg-do compile { target { aarch64*-*-* } } } */
> > > >> +
> > > >> +#include <arm_neon.h>
> > > >> +
> > > >> +int8_t *bar();
> > > >> +
> > > >> +void foo() {
> > > >> +  __builtin_aarch64_ld1v16qi(bar());
> > > >> +}
  

Patch

diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
index e0a741ac663188713e21f457affa57217d074783..5753988a9964967c27a03aca5fddb9025fd8ed6e 100644
--- a/gcc/config/aarch64/aarch64-builtins.cc
+++ b/gcc/config/aarch64/aarch64-builtins.cc
@@ -3022,6 +3022,16 @@  aarch64_general_gimple_fold_builtin (unsigned int fcode, gcall *stmt,
     default:
       break;
     }
+
+  /* GIMPLE assign statements (unlike calls) require a non-null lhs. If we
+     created an assign statement with a null lhs, then fix this by assigning
+     to a new (and subsequently unused) variable. */
+  if (new_stmt && is_gimple_assign (new_stmt) && !gimple_assign_lhs (new_stmt))
+    {
+      tree new_lhs = make_ssa_name (gimple_call_return_type (stmt));
+      gimple_assign_set_lhs (new_stmt, new_lhs);
+    }
+
   return new_stmt;
 }
 
diff --git a/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
new file mode 100644
index 0000000000000000000000000000000000000000..345307456b175307f5cb22de5e59cfc6254f2737
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/advsimd-intrinsics/ignored_return_1.c
@@ -0,0 +1,9 @@ 
+/* { dg-do compile { target { aarch64*-*-* } } } */
+
+#include <arm_neon.h>
+
+int8_t *bar();
+
+void foo() {
+  __builtin_aarch64_ld1v16qi(bar());
+}