[take,#2] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.

Message ID 009a01d8ae94$c7d92800$578b7800$@nextmovesoftware.com
State Committed
Commit 03acd8b6429e22068330dce5abf129291d3f26de
Series [take,#2] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as (X&Y)<<C.

Commit Message

Roger Sayle Aug. 12, 2022, 9:45 p.m. UTC
  Hi Richard,
Many thanks for the review and useful suggestions.  I (think I) agree that
handling non-canonical forms in value numbering makes more sense,
so this revised patch is just the first (non-controversial) part of the
original submission, incorporating your observation that it doesn't need
to be limited to (valid) constant shifts and can be generalized to any
shift, without introducing undefined behaviour that didn't exist before.

This revised patch has been tested on x86_64-pc-linux-gnu with
make bootstrap and make -k check, both with and without
--target_board=unix{-m32}, with no new failures.  Ok for mainline?


2022-08-12  Roger Sayle  <roger@nextmovesoftware.com>
            Richard Biener  <rguenther@suse.de>

gcc/ChangeLog
        PR tree-optimization/71343
        * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
        expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
        (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
        to (X^Y)>>C for binary logical operators, AND, IOR and XOR.

gcc/testsuite/ChangeLog
        PR tree-optimization/71343
        * gcc.dg/pr71343-1.c: New test case.


Thanks,
Roger
--

> -----Original Message-----
> From: Richard Biener <richard.guenther@gmail.com>
> Sent: 08 August 2022 12:42
> To: Roger Sayle <roger@nextmovesoftware.com>
> Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> (X&Y)<<C.
> 
> On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> <roger@nextmovesoftware.com> wrote:
> >
> >
> > This patch resolves PR tree-optimization/71343, a missed-optimization
> > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > This requires two related (sets of) optimizations to be added to match.pd.
> >
> > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > for many binary operators, including AND, IOR, XOR, and (if overflow
> > isn't an issue) PLUS and MINUS.  Likewise, the right shifts (both
> > logical and arithmetic) and bit-wise logical operators can be
> > simplified in a similar fashion.  These all reduce the number of
> > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> operation.
> >
> > The second optimization reflects that the middle-end doesn't impose a
> > canonical form on multiplications by powers of two, vs. left shifts,
> > instead leaving these operations as specified by the programmer unless
> > there's a good reason to change them.  Hence, GIMPLE code may contain
> > the expressions "X * 8" and "X << 3" even though these represent the
> > same value/computation.  The tweak to match.pd is that comparison
> > operations whose operands are equivalent non-canonical expressions can
> > be taught their equivalence.  Hence "(X * 8) == (X << 3)" will always
> > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> >
> > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > and make -k check, both with and without --target_board=unix{-m32},
> > with no new failures.  Ok for mainline?
> 
> +/* Shifts by constants distribute over several binary operations,
> +   hence (X << C) + (Y << C) can be simplified to (X + Y) << C.  */
> +(for op (plus minus)
> +  (simplify
> +    (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> +    (if (INTEGRAL_TYPE_P (type)
> +        && TYPE_OVERFLOW_WRAPS (type)
> +        && !TYPE_SATURATING (type)
> +        && tree_fits_shwi_p (@1)
> +        && tree_to_shwi (@1) > 0
> +        && tree_to_shwi (@1) < TYPE_PRECISION (type))
> 
> I do wonder why we need to restrict this to shifts by constants?
> Any out-of-bound shift was already there, no?
> 
> +/* Some tree expressions are intentionally non-canonical.
> +   We handle the comparison of the equivalent forms here.  */
> +(for cmp (eq le ge)
> +  (simplify
> +    (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> +        && tree_fits_shwi_p (@1)
> +        && tree_to_shwi (@1) > 0
> +        && tree_to_shwi (@1) < TYPE_PRECISION  (TREE_TYPE (@0))
> +        && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> +      { constant_boolean_node (true, type); })))
> +
> +(for cmp (ne lt gt)
> +  (simplify
> +    (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> +        && tree_fits_shwi_p (@1)
> +        && tree_to_shwi (@1) > 0
> +        && tree_to_shwi (@1) < TYPE_PRECISION  (TREE_TYPE (@0))
> +        && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> +      { constant_boolean_node (false, type); })))
> 
> hmm.  I wonder if it makes more sense to handle this in value-numbering.
> tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> canonicalization issues but the shift vs mult could be handled there by just
> performing the alternate lookup.  That would also enable CSE and by means of
> that of course the comparisons you do above.
> 
> Thanks,
> Richard.
> 
> >
> > 2022-08-08  Roger Sayle  <roger@nextmovesoftware.com>
> >
> > gcc/ChangeLog
> >         PR tree-optimization/71343
> >         * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> >         expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> >         (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> >         to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> >         (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> >         shifts by integer constants and multiplications by powers of 2.
> >
> > gcc/testsuite/ChangeLog
> >         PR tree-optimization/71343
> >         * gcc.dg/pr71343-1.c: New test case.
> >         * gcc.dg/pr71343-2.c: Likewise.
> >
> >
> > Thanks in advance,
> > Roger
> > --
> >
  
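As background for the value-numbering suggestion quoted above, a short C
sketch (not part of the patch; the function names are mine) of the
non-canonical forms in question.  GIMPLE may contain both "x * 8" and
"x << 3" for the same computation, so value numbering them to the same
value would let CSE eliminate one and fold the comparison to a constant:

/* Both functions compute the same value; the middle-end keeps each
   operation as written rather than canonicalizing one to the other.  */
unsigned int via_mult (unsigned int x)
{
  return x * 8;
}

unsigned int via_shift (unsigned int x)
{
  return x << 3;
}

/* Once the equivalence is known, this always returns 1.  */
int always_true (unsigned int x)
{
  return (x * 8) == (x << 3);
}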

Comments

Richard Biener Aug. 15, 2022, 7:48 a.m. UTC | #1
On Fri, Aug 12, 2022 at 11:45 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> Hi Richard,
> Many thanks for the review and useful suggestions.  I (think I) agree that
> handling non-canonical forms in value numbering makes more sense,
> so this revised patch is just the first (non-controversial) part of the
> original submission, incorporating your observation that it doesn't need
> to be limited to (valid) constant shifts and can be generalized to any
> shift, without introducing undefined behaviour that didn't exist before.
>
> This revised patch has been tested on x86_64-pc-linux-gnu with
> make bootstrap and make -k check, both with and without
> --target_board=unix{-m32}, with no new failures.  Ok for mainline?

OK.

Thanks,
Richard.

>
> 2022-08-12  Roger Sayle  <roger@nextmovesoftware.com>
>             Richard Biener  <rguenther@suse.de>
>
> gcc/ChangeLog
>         PR tree-optimization/71343
>         * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
>         expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
>         (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
>         to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
>
> gcc/testsuite/ChangeLog
>         PR tree-optimization/71343
>         * gcc.dg/pr71343-1.c: New test case.
>
>
> Thanks,
> Roger
> --
>
> > -----Original Message-----
> > From: Richard Biener <richard.guenther@gmail.com>
> > Sent: 08 August 2022 12:42
> > To: Roger Sayle <roger@nextmovesoftware.com>
> > Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> > Subject: Re: [PATCH] PR tree-optimization/71343: Optimize (X<<C)&(Y<<C) as
> > (X&Y)<<C.
> >
> > On Mon, Aug 8, 2022 at 10:07 AM Roger Sayle
> > <roger@nextmovesoftware.com> wrote:
> > >
> > >
> > > This patch resolves PR tree-optimization/71343, a missed-optimization
> > > enhancement request where GCC fails to see that (a<<2)+(b<<2) == a*4+b*4.
> > > This requires two related (sets of) optimizations to be added to match.pd.
> > >
> > > The first is that (X<<C) op (Y<<C) can be simplified to (X op Y) << C,
> > > for many binary operators, including AND, IOR, XOR, and (if overflow
> > > isn't an issue) PLUS and MINUS.  Likewise, the right shifts (both
> > > logical and arithmetic) and bit-wise logical operators can be
> > > simplified in a similar fashion.  These all reduce the number of
> > > GIMPLE binary operations from 3 to 2, by combining/eliminating a shift
> > operation.
> > >
> > > The second optimization reflects that the middle-end doesn't impose a
> > > canonical form on multiplications by powers of two, vs. left shifts,
> > > instead leaving these operations as specified by the programmer unless
> > > there's a good reason to change them.  Hence, GIMPLE code may contain
> > > the expressions "X * 8" and "X << 3" even though these represent the
> > > same value/computation.  The tweak to match.pd is that comparison
> > > operations whose operands are equivalent non-canonical expressions can
> > > be taught their equivalence.  Hence "(X * 8) == (X << 3)" will always
> > > evaluate to true, and "(X<<2) > 4*X" will always evaluate to false.
> > >
> > > This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
> > > and make -k check, both with and without --target_board=unix{-m32},
> > > with no new failures.  Ok for mainline?
> >
> > +/* Shifts by constants distribute over several binary operations,
> > +   hence (X << C) + (Y << C) can be simplified to (X + Y) << C.  */
> > +(for op (plus minus)
> > +  (simplify
> > +    (op (lshift:s @0 INTEGER_CST@1) (lshift:s @2 INTEGER_CST@1))
> > +    (if (INTEGRAL_TYPE_P (type)
> > +        && TYPE_OVERFLOW_WRAPS (type)
> > +        && !TYPE_SATURATING (type)
> > +        && tree_fits_shwi_p (@1)
> > +        && tree_to_shwi (@1) > 0
> > +        && tree_to_shwi (@1) < TYPE_PRECISION (type))
> >
> > I do wonder why we need to restrict this to shifts by constants?
> > Any out-of-bound shift was already there, no?
> >
> > +/* Some tree expressions are intentionally non-canonical.
> > +   We handle the comparison of the equivalent forms here.  */
> > +(for cmp (eq le ge)
> > +  (simplify
> > +    (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > +        && tree_fits_shwi_p (@1)
> > +        && tree_to_shwi (@1) > 0
> > +        && tree_to_shwi (@1) < TYPE_PRECISION  (TREE_TYPE (@0))
> > +        && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > +      { constant_boolean_node (true, type); })))
> > +
> > +(for cmp (ne lt gt)
> > +  (simplify
> > +    (cmp:c (lshift @0 INTEGER_CST@1) (mult @0 integer_pow2p@2))
> > +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> > +        && tree_fits_shwi_p (@1)
> > +        && tree_to_shwi (@1) > 0
> > +        && tree_to_shwi (@1) < TYPE_PRECISION  (TREE_TYPE (@0))
> > +        && wi::to_wide (@1) == wi::exact_log2 (wi::to_wide (@2)))
> > +      { constant_boolean_node (false, type); })))
> >
> > hmm.  I wonder if it makes more sense to handle this in value-numbering.
> > tree-ssa-sccvn.cc:visit_nary_op handles some cases that are not exactly
> > canonicalization issues but the shift vs mult could be handled there by just
> > performing the alternate lookup.  That would also enable CSE and by means of
> > that of course the comparisons you do above.
> >
> > Thanks,
> > Richard.
> >
> > >
> > > 2022-08-08  Roger Sayle  <roger@nextmovesoftware.com>
> > >
> > > gcc/ChangeLog
> > >         PR tree-optimization/71343
> > >         * match.pd (op (lshift @0 @1) (lshift @2 @1)): Optimize the
> > >         expression (X<<C) + (Y<<C) to (X+Y)<<C for multiple operators.
> > >         (op (rshift @0 @1) (rshift @2 @1)): Likewise, simplify (X>>C)^(Y>>C)
> > >         to (X^Y)>>C for binary logical operators, AND, IOR and XOR.
> > >         (cmp:c (lshift @0) (mult @1)): Optimize comparisons between
> > >         shifts by integer constants and multiplications by powers of 2.
> > >
> > > gcc/testsuite/ChangeLog
> > >         PR tree-optimization/71343
> > >         * gcc.dg/pr71343-1.c: New test case.
> > >         * gcc.dg/pr71343-2.c: Likewise.
> > >
> > >
> > > Thanks in advance,
> > > Roger
> > > --
> > >
  

Patch

diff --git a/gcc/match.pd b/gcc/match.pd
index f82f94a..7940f75 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -982,6 +982,26 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        && tree_nop_conversion_p (type, TREE_TYPE (@1)))
    (lshift @0 @2)))
 
+/* Shifts by constants distribute over several binary operations,
+   hence (X << C) + (Y << C) can be simplified to (X + Y) << C.  */
+(for op (plus minus)
+  (simplify
+    (op (lshift:s @0 @1) (lshift:s @2 @1))
+    (if (INTEGRAL_TYPE_P (type)
+	 && TYPE_OVERFLOW_WRAPS (type)
+	 && !TYPE_SATURATING (type))
+      (lshift (op @0 @2) @1))))
+
+(for op (bit_and bit_ior bit_xor)
+  (simplify
+    (op (lshift:s @0 @1) (lshift:s @2 @1))
+    (if (INTEGRAL_TYPE_P (type))
+      (lshift (op @0 @2) @1)))
+  (simplify
+    (op (rshift:s @0 @1) (rshift:s @2 @1))
+    (if (INTEGRAL_TYPE_P (type))
+      (rshift (op @0 @2) @1))))
+
 /* Fold (1 << (C - x)) where C = precision(type) - 1
    into ((1 << C) >> x). */
 (simplify
diff --git a/gcc/testsuite/gcc.dg/pr71343-1.c b/gcc/testsuite/gcc.dg/pr71343-1.c
new file mode 100644
index 0000000..146f5fc
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/pr71343-1.c
@@ -0,0 +1,56 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+unsigned int foo_plus(unsigned int a, unsigned int b)
+{
+  return (a << 2) + (b << 2);
+}
+
+unsigned int foo_and(unsigned int a, unsigned int b)
+{
+  return (a << 2) & (b << 2);
+}
+
+unsigned int foo_ior(unsigned int a, unsigned int b)
+{
+  return (a << 2) | (b << 2);
+}
+
+unsigned int foo_xor(unsigned int a, unsigned int b)
+{
+  return (a << 2) ^ (b << 2);
+}
+
+unsigned int bar_and(unsigned int a, unsigned int b)
+{
+  return (a >> 2) & (b >> 2);
+}
+
+unsigned int bar_ior(unsigned int a, unsigned int b)
+{
+  return (a >> 2) | (b >> 2);
+}
+
+unsigned int bar_xor(unsigned int a, unsigned int b)
+{
+  return (a >> 2) ^ (b >> 2);
+}
+
+int baz_and(int a, int b)
+{
+  return (a >> 2) & (b >> 2);
+}
+
+int baz_ior(int a, int b)
+{
+  return (a >> 2) | (b >> 2);
+}
+
+int baz_xor(int a, int b)
+{
+  return (a >> 2) ^ (b >> 2);
+}
+
+/* { dg-final { scan-tree-dump-times " << " 4 "optimized" } } */
+/* { dg-final { scan-tree-dump-times " >> " 6 "optimized" } } */
+