PR tree-optimization/94026: Simplify (X>>8)&6 != 0 as X&1536 != 0.
Commit Message
This patch implements the missed optimization described in PR 94026,
where the shift can be eliminated from a sequence consisting of a shift,
a bit-wise AND, and an equality/inequality test.
Specifically, ((X << C1) & C2) cmp C3 is folded into (X & (C2 >> C1)) cmp (C3 >> C1),
and likewise ((X >> C1) & C2) cmp C3 into (X & (C2 << C1)) cmp (C3 << C1),
where cmp is == or !=, and C1, C2 and C3 are integer constants.
The example in the subject line is taken from the hot function
self_atari in Leela, the Go-playing program in SPEC CPU 2017.
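As a concrete illustration (a minimal sketch; the function names are
mine, only the expression comes from the PR), the headline fold rewrites
the shift-and-mask test into a single mask test:

/* Before the fold: shift, then mask; tests bits 9 and 10 of x.  */
int before (int x)
{
  return ((x >> 8) & 6) != 0;
}

/* After the fold: the mask is shifted instead; 6 << 8 == 1536.  */
int after (int x)
{
  return (x & 1536) != 0;
}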
This patch has been tested on x86_64-pc-linux-gnu with make bootstrap
and make -k check, both with and without --target_board=unix{-m32}, with
no new failures. OK for mainline?
2022-06-24 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/94026
* match.pd (((X << C1) & C2) eq/ne C3): New simplification.
(((X >> C1) & C2) eq/ne C3): Likewise.
gcc/testsuite/ChangeLog
PR tree-optimization/94026
* gcc.dg/pr94026.c: New test case.
Thanks in advance,
Roger
--
Comments
On 6/24/2022 9:09 AM, Roger Sayle wrote:
> [full patch description and ChangeLog quoted above snipped]
OK. But please check if we still need this code from fold-const.c:
/* Fold ((X >> C1) & C2) == 0 and ((X >> C1) & C2) != 0 where
C1 is a valid shift constant, and C2 is a power of two, i.e.
a single bit. */
if (TREE_CODE (arg0) == BIT_AND_EXPR
&& integer_pow2p (TREE_OPERAND (arg0, 1))
&& integer_zerop (arg1))
[ ... ]
There's a whole series of transformations that are done for equality
comparisons where one side is a constant and the other is a combination of
logicals & shifting. Some (like the one noted above) are likely
redundant now. Others may fit better into the match.pd framework rather
than fold-const.
Anyway, the patch is fine, but please take a looksie at the referenced
cases in fold-const.c and see if there's any cleanup/refactoring we
ought to be doing there.
jeff
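For reference, the fold-const.c code quoted above handles the single-bit
case (C2 a power of two), which the new match.pd rule appears to subsume.
A hedged C sketch (the function name is mine):

/* ((x >> 3) & 1) != 0 is the C2 == 1 instance of the new rshift
   pattern and should now fold to (x & 8) != 0.  */
int bit3_set (unsigned x)
{
  return ((x >> 3) & 1) != 0;
}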
diff --git a/gcc/match.pd b/gcc/match.pd
@@ -3559,6 +3559,29 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
&& wi::lshift (wi::to_wide (@0), cand) == wi::to_wide (@2))
(cmp @1 { build_int_cst (TREE_TYPE (@1), cand); }))))))
+/* Fold ((X << C1) & C2) cmp C3 into (X & (C2 >> C1)) cmp (C3 >> C1)
+ ((X >> C1) & C2) cmp C3 into (X & (C2 << C1)) cmp (C3 << C1). */
+(for cmp (ne eq)
+ (simplify
+ (cmp (bit_and:s (lshift:s @0 INTEGER_CST@1) INTEGER_CST@2) INTEGER_CST@3)
+ (if (tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && tree_to_shwi (@1) <= wi::ctz (wi::to_wide (@3)))
+ (with { wide_int c1 = wi::to_wide (@1);
+ wide_int c2 = wi::lrshift (wi::to_wide (@2), c1);
+ wide_int c3 = wi::lrshift (wi::to_wide (@3), c1); }
+ (cmp (bit_and @0 { wide_int_to_tree (TREE_TYPE (@0), c2); })
+ { wide_int_to_tree (TREE_TYPE (@0), c3); }))))
+ (simplify
+ (cmp (bit_and:s (rshift:s @0 INTEGER_CST@1) INTEGER_CST@2) INTEGER_CST@3)
+ (if (tree_fits_shwi_p (@1)
+ && tree_to_shwi (@1) > 0
+ && tree_to_shwi (@1) < TYPE_PRECISION (TREE_TYPE (@0))
+ && tree_to_shwi (@1) <= wi::clz (wi::to_wide (@2))
+ && tree_to_shwi (@1) <= wi::clz (wi::to_wide (@3)))
+ (cmp (bit_and @0 (lshift @2 @1)) (lshift @3 @1)))))
+
/* Fold (X << C1) & C2 into (X << C1) & (C2 | ((1 << C1) - 1))
(X >> C1) & C2 into (X >> C1) & (C2 | ~((type) -1 >> C1))
if the new mask might be further optimized. */
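Two details of the guards above are worth spelling out. In the lshift
pattern, C1 <= ctz(C3) ensures that the low C1 bits of C3 are zero;
since (X << C1) & C2 always has its low C1 bits clear, the rule simply
does not fire when any of those bits are set in C3. In the rshift
pattern, the clz checks ensure that no set bits of C2 or C3 are lost
when they are shifted left. A minimal self-check of the lshift identity
(my own harness, not part of the patch):

/* Verify ((x << 2) & 24) != 0  <=>  (x & (24 >> 2)) != 0, i.e. the
   C1 = 2, C2 = 24, C3 = 0 instance, over a small range.  */
#include <assert.h>
int main (void)
{
  for (unsigned x = 0; x < (1u << 16); ++x)
    assert ((((x << 2) & 24) != 0) == ((x & (24 >> 2)) != 0));
  return 0;
}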
diff --git a/gcc/testsuite/gcc.dg/pr94026.c b/gcc/testsuite/gcc.dg/pr94026.c
new file mode 100644
@@ -0,0 +1,21 @@
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+
+int f1(int x) { return ((x >> 8) & 6) != 0; }
+int f2(int x) { return ((x << 2) & 24) != 0; }
+int f3(unsigned x) { return ((x << 2) & 15) != 0; }
+int f4(unsigned x) { return ((x >> 2) & 14) != 0; }
+
+int fifth (int c)
+{
+ int a = (c >> 8) & 7;
+
+ if (a >= 2) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
+/* { dg-final { scan-tree-dump-not " << " "optimized" } } */
+/* { dg-final { scan-tree-dump-not " >> " "optimized" } } */
+
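Note that fifth() does not contain the shift-and-mask pattern literally;
it relies on earlier folds. Since a = (c >> 8) & 7 lies in [0, 7], the
test a >= 2 is equivalent to (a & 6) != 0, which exposes the pattern
(the exact fold ordering inside GCC is my inference; the dump scans only
verify that no shift survives):

/* Illustrative restatement of the expected end result for fifth().  */
int fifth_folded (int c)
{
  return (c & (6 << 8)) != 0;   /* 6 << 8 == 1536 */
}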