sra: Partial fix for BITINT_TYPEs [PR113120]

Message ID: ZZ5m3bq5IlVoU9fK@tucnak
State: New
Series: sra: Partial fix for BITINT_TYPEs [PR113120]

Checks

Context Check Description
linaro-tcwg-bot/tcwg_gcc_build--master-aarch64 fail Patch failed to apply
linaro-tcwg-bot/tcwg_gcc_build--master-arm fail Patch failed to apply

Commit Message

Jakub Jelinek Jan. 10, 2024, 9:43 a.m. UTC
  Hi!

As has been done in other parts of the compiler, using
build_nonstandard_integer_type is not appropriate for arbitrary precisions,
especially if the precision comes from a BITINT_TYPE or something based on
one: build_nonstandard_integer_type relies on some integral mode being
supported that can hold the precision.

The following patch uses build_bitint_type instead for BITINT_TYPE
precisions.
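
As a concrete illustration (a hypothetical example, not part of the
patch): no target provides an integral mode wide enough for a 513-bit
scalar, so an INTEGER_TYPE of that precision cannot be built, while a
BITINT_TYPE of the same precision is fine:

  /* Hypothetical user-level shape: the member has TYPE_PRECISION 513,
     for which no integral mode exists, so any replacement type SRA
     builds for it has to be a BITINT_TYPE, not an INTEGER_TYPE from
     build_nonstandard_integer_type.  */
  struct S { _BitInt(513) b; };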

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

Note: it would be good if we were able to punt on the optimization
(but this code doesn't seem to be able to punt, so it would need to be done
somewhere earlier), at least in cases where building the type would be
invalid.  E.g. right now BITINT_TYPE can support precisions up to 65535
(inclusive), but 65536 will not work anymore (we can't have a TYPE_PRECISION
wider than 16 bits).  I've tried replacing 513 with 65532 in the testcase
and it didn't ICE, so maybe it ran into some other SRA limit.
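
A sketch of that boundary (hedged; exact behavior depends on the
target's __BITINT_MAXWIDTH__):

  _BitInt(65535) ok;      /* precision fits the 16-bit TYPE_PRECISION field */
  _BitInt(65536) too_big; /* rejected: exceeds __BITINT_MAXWIDTH__ */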

2024-01-10  Jakub Jelinek  <jakub@redhat.com>

	PR tree-optimization/113120
	* tree-sra.cc (analyze_access_subtree): For BITINT_TYPE
	with root->size TYPE_PRECISION don't build anything new.
	Otherwise, if root->type is a BITINT_TYPE, use build_bitint_type
	rather than build_nonstandard_integer_type.

	* gcc.dg/bitint-63.c: New test.


	Jakub
  

Comments

Richard Biener Jan. 10, 2024, 9:51 a.m. UTC | #1
On Wed, 10 Jan 2024, Jakub Jelinek wrote:

> Hi!
> 
> As has been done in other parts of the compiler, using
> build_nonstandard_integer_type is not appropriate for arbitrary precisions,
> especially if the precision comes from a BITINT_TYPE or something based on
> one: build_nonstandard_integer_type relies on some integral mode being
> supported that can hold the precision.
> 
> The following patch uses build_bitint_type instead for BITINT_TYPE
> precisions.
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

LGTM, see below for a question.

> Note: it would be good if we were able to punt on the optimization
> (but this code doesn't seem to be able to punt, so it would need to be done
> somewhere earlier), at least in cases where building the type would be
> invalid.  E.g. right now BITINT_TYPE can support precisions up to 65535
> (inclusive), but 65536 will not work anymore (we can't have a TYPE_PRECISION
> wider than 16 bits).  I've tried replacing 513 with 65532 in the testcase
> and it didn't ICE, so maybe it ran into some other SRA limit.

I think SRA has a size limit, --param
sra-max-scalarization-size-O{size,speed}; I'm not sure whether that is the
only relevant one, or whether it is the one that's hit here.
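
If it is that limit, raising it should make the 65532-bit variant
scalarize again; a hypothetical invocation (the param is documented to
take storage units, and a 65532-bit _BitInt occupies 8192 bytes):

  gcc -O -fno-tree-fre --param=sra-max-scalarization-size-Ospeed=8192 bitint-63.c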

> 2024-01-10  Jakub Jelinek  <jakub@redhat.com>
> 
> 	PR tree-optimization/113120
> 	* tree-sra.cc (analyze_access_subtree): For BITINT_TYPE
> 	with root->size TYPE_PRECISION don't build anything new.
> 	Otherwise, if root->type is a BITINT_TYPE, use build_bitint_type
> 	rather than build_nonstandard_integer_type.
> 
> 	* gcc.dg/bitint-63.c: New test.
> 
> --- gcc/tree-sra.cc.jj	2024-01-03 11:51:35.054682295 +0100
> +++ gcc/tree-sra.cc	2024-01-09 19:50:42.911500487 +0100
> @@ -2733,7 +2733,8 @@ analyze_access_subtree (struct access *r
>           For integral types this means the precision has to match.
>  	 Avoid assumptions based on the integral type kind, too.  */
>        if (INTEGRAL_TYPE_P (root->type)
> -	  && (TREE_CODE (root->type) != INTEGER_TYPE
> +	  && ((TREE_CODE (root->type) != INTEGER_TYPE
> +	       && TREE_CODE (root->type) != BITINT_TYPE)
>  	      || TYPE_PRECISION (root->type) != root->size)
>  	  /* But leave bitfield accesses alone.  */
>  	  && (TREE_CODE (root->expr) != COMPONENT_REF
> @@ -2742,8 +2743,11 @@ analyze_access_subtree (struct access *r
>  	  tree rt = root->type;
>  	  gcc_assert ((root->offset % BITS_PER_UNIT) == 0
>  		      && (root->size % BITS_PER_UNIT) == 0);
> -	  root->type = build_nonstandard_integer_type (root->size,
> -						       TYPE_UNSIGNED (rt));
> +	  if (TREE_CODE (root->type) == BITINT_TYPE)
> +	    root->type = build_bitint_type (root->size, TYPE_UNSIGNED (rt));

I suppose we don't exactly need to preserve BITINT-ness, say if
root->size fits the largest supported integer mode?  It's OK as-is
for now.

> +	  else
> +	    root->type = build_nonstandard_integer_type (root->size,
> +							 TYPE_UNSIGNED (rt));
>  	  root->expr = build_ref_for_offset (UNKNOWN_LOCATION, root->base,
>  					     root->offset, root->reverse,
>  					     root->type, NULL, false);
> --- gcc/testsuite/gcc.dg/bitint-63.c.jj	2024-01-09 20:08:04.831720434 +0100
> +++ gcc/testsuite/gcc.dg/bitint-63.c	2024-01-09 20:07:43.045029421 +0100
> @@ -0,0 +1,24 @@
> +/* PR tree-optimization/113120 */
> +/* { dg-do compile { target bitint } } */
> +/* { dg-require-stack-check "generic" } */
> +/* { dg-options "-std=c23 -O -fno-tree-fre --param=large-stack-frame=1024 -fstack-check=generic" } */
> +
> +#if __BITINT_MAXWIDTH__ >= 513
> +typedef _BitInt(513) B;
> +#else
> +typedef int B;
> +#endif
> +
> +static inline __attribute__((__always_inline__)) void
> +bar (B x)
> +{
> +  B y = x;
> +  if (y)
> +    __builtin_abort ();
> +}
> +
> +void
> +foo (void)
> +{
> +  bar (0);
> +}
> 
> 	Jakub
> 
>
  
Jakub Jelinek Jan. 10, 2024, 10:53 a.m. UTC | #2
On Wed, Jan 10, 2024 at 10:51:32AM +0100, Richard Biener wrote:
> > @@ -2742,8 +2743,11 @@ analyze_access_subtree (struct access *r
> >  	  tree rt = root->type;
> >  	  gcc_assert ((root->offset % BITS_PER_UNIT) == 0
> >  		      && (root->size % BITS_PER_UNIT) == 0);
> > -	  root->type = build_nonstandard_integer_type (root->size,
> > -						       TYPE_UNSIGNED (rt));
> > +	  if (TREE_CODE (root->type) == BITINT_TYPE)
> > +	    root->type = build_bitint_type (root->size, TYPE_UNSIGNED (rt));
> 
> I suppose we don't exactly need to preserve BITINT-ness, say if
> root->size fits the largest supported integer mode?  It's OK as-is

Sure, we could use INTEGER_TYPE in that case, but using BITINT_TYPE won't
do harm either; worst case it will be lowered to those INTEGER_TYPEs again
later.
What is IMHO important is not to introduce BITINT_TYPEs where they weren't
used before (we didn't need them there previously), and to use BITINT_TYPEs
for the large precisions which can't be expressed in INTEGER_TYPEs.
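
A minimal sketch of the alternative Richard raises (an illustration
assuming int_mode_for_size's documented behavior, not the committed
code): keep BITINT_TYPE only when no integral mode can hold root->size.

	  /* Hypothetical variant: fall back to a regular INTEGER_TYPE
	     whenever some integral mode can hold root->size, keeping
	     BITINT_TYPE only for the large precisions.  */
	  if (TREE_CODE (root->type) == BITINT_TYPE
	      && !int_mode_for_size (root->size, 1).exists ())
	    root->type = build_bitint_type (root->size, TYPE_UNSIGNED (rt));
	  else
	    root->type = build_nonstandard_integer_type (root->size,
							 TYPE_UNSIGNED (rt));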

	Jakub
  
Martin Jambor Jan. 17, 2024, 2:46 p.m. UTC | #3
Hi,
On Wed, Jan 10 2024, Jakub Jelinek wrote:
> Hi!
>
> As has been done in other parts of the compiler, using
> build_nonstandard_integer_type is not appropriate for arbitrary precisions,
> especially if the precision comes from a BITINT_TYPE or something based on
> one: build_nonstandard_integer_type relies on some integral mode being
> supported that can hold the precision.
>
> The following patch uses build_bitint_type instead for BITINT_TYPE
> precisions.
>
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?
>
> Note: it would be good if we were able to punt on the optimization
> (but this code doesn't seem to be able to punt, so it would need to be done
> somewhere earlier), at least in cases where building the type would be
> invalid.  E.g. right now BITINT_TYPE can support precisions up to 65535
> (inclusive), but 65536 will not work anymore (we can't have a TYPE_PRECISION
> wider than 16 bits).  I've tried replacing 513 with 65532 in the testcase
> and it didn't ICE, so maybe it ran into some other SRA limit.

Thank you very much for the patch.  Regarding punting, did you mean for
all BITINT_TYPEs, or just for the big ones, like you did when you fixed PR
113330 (thanks for that too), or something else entirely?

Martin

>
> 2024-01-10  Jakub Jelinek  <jakub@redhat.com>
>
> 	PR tree-optimization/113120
> 	* tree-sra.cc (analyze_access_subtree): For BITINT_TYPE
> 	with root->size TYPE_PRECISION don't build anything new.
> 	Otherwise, if root->type is a BITINT_TYPE, use build_bitint_type
> 	rather than build_nonstandard_integer_type.
>
> 	* gcc.dg/bitint-63.c: New test.
  
Jakub Jelinek Jan. 17, 2024, 2:52 p.m. UTC | #4
On Wed, Jan 17, 2024 at 03:46:44PM +0100, Martin Jambor wrote:
> > Note: it would be good if we were able to punt on the optimization
> > (but this code doesn't seem to be able to punt, so it would need to be done
> > somewhere earlier), at least in cases where building the type would be
> > invalid.  E.g. right now BITINT_TYPE can support precisions up to 65535
> > (inclusive), but 65536 will not work anymore (we can't have a TYPE_PRECISION
> > wider than 16 bits).  I've tried replacing 513 with 65532 in the testcase
> > and it didn't ICE, so maybe it ran into some other SRA limit.
> 
> Thank you very much for the patch.  Regarding punting, did you mean for
> all BITINT_TYPEs or just for big ones, like you did when you fixed PR
> 11333 (thanks for that too) or something entirely else?

I meant what I did in PR113330, but I still wonder if we really need to use
a root->size which is a multiple of BITS_PER_UNIT (or words or whatever it
actually is), at least on little endian if the _BitInt starts at the start
of the storage.  See
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113408#c1
for more details; I wonder if it just couldn't use _BitInt(713) in there
directly rather than _BitInt(768).
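
For context, the shape in question, sketched (hypothetical, assuming the
x86_64 psABI layout where a _BitInt(713) occupies twelve 64-bit limbs):

  struct S { _BitInt(713) b; };  /* 768 bits of storage; SRA's
				    byte-rounded root->size is 768 even
				    though only 713 bits are meaningful.  */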

	Jakub
  

Patch

--- gcc/tree-sra.cc.jj	2024-01-03 11:51:35.054682295 +0100
+++ gcc/tree-sra.cc	2024-01-09 19:50:42.911500487 +0100
@@ -2733,7 +2733,8 @@  analyze_access_subtree (struct access *r
          For integral types this means the precision has to match.
 	 Avoid assumptions based on the integral type kind, too.  */
       if (INTEGRAL_TYPE_P (root->type)
-	  && (TREE_CODE (root->type) != INTEGER_TYPE
+	  && ((TREE_CODE (root->type) != INTEGER_TYPE
+	       && TREE_CODE (root->type) != BITINT_TYPE)
 	      || TYPE_PRECISION (root->type) != root->size)
 	  /* But leave bitfield accesses alone.  */
 	  && (TREE_CODE (root->expr) != COMPONENT_REF
@@ -2742,8 +2743,11 @@  analyze_access_subtree (struct access *r
 	  tree rt = root->type;
 	  gcc_assert ((root->offset % BITS_PER_UNIT) == 0
 		      && (root->size % BITS_PER_UNIT) == 0);
-	  root->type = build_nonstandard_integer_type (root->size,
-						       TYPE_UNSIGNED (rt));
+	  if (TREE_CODE (root->type) == BITINT_TYPE)
+	    root->type = build_bitint_type (root->size, TYPE_UNSIGNED (rt));
+	  else
+	    root->type = build_nonstandard_integer_type (root->size,
+							 TYPE_UNSIGNED (rt));
 	  root->expr = build_ref_for_offset (UNKNOWN_LOCATION, root->base,
 					     root->offset, root->reverse,
 					     root->type, NULL, false);
--- gcc/testsuite/gcc.dg/bitint-63.c.jj	2024-01-09 20:08:04.831720434 +0100
+++ gcc/testsuite/gcc.dg/bitint-63.c	2024-01-09 20:07:43.045029421 +0100
@@ -0,0 +1,24 @@ 
+/* PR tree-optimization/113120 */
+/* { dg-do compile { target bitint } } */
+/* { dg-require-stack-check "generic" } */
+/* { dg-options "-std=c23 -O -fno-tree-fre --param=large-stack-frame=1024 -fstack-check=generic" } */
+
+#if __BITINT_MAXWIDTH__ >= 513
+typedef _BitInt(513) B;
+#else
+typedef int B;
+#endif
+
+static inline __attribute__((__always_inline__)) void
+bar (B x)
+{
+  B y = x;
+  if (y)
+    __builtin_abort ();
+}
+
+void
+foo (void)
+{
+  bar (0);
+}