[take,#2] Fold truncations of left shifts in match.pd

Message ID 007e01d878cd$2a32f100$7e98d300$@nextmovesoftware.com

Commit Message

Roger Sayle June 5, 2022, 11:12 a.m. UTC
  Hi Richard,
Many thanks for taking the time to explain how vectorization is supposed
to work.  I now see that vect_recog_rotate_pattern in tree-vect-patterns.cc
is supposed to handle lowering of rotations to (vector) shifts, and
completely agree that adding support for signed types (using appropriate
casts to unsigned_type_for and casting the result back to the original
signed type) is a better approach to avoid the regression of pr98674.c.

I've also implemented your suggestion of combining the proposed new
(convert (lshift @1 INTEGER_CST@2)) transformation with the existing one,
and at the same time added support for folding shifts that are valid for
the wider type but exceed the precision of the narrower type, such as
(short)(x << 20), to constant zero.  Although this optimization is already
performed during the tree-ssa passes, it's convenient to also catch it
here during constant folding.

This revised patch has been tested on x86_64-pc-linux-gnu with
make bootstrap and make -k check, both with and without
--target_board=unix{-m32}, with no new failures.  Ok for mainline?

2022-06-05  Roger Sayle  <roger@nextmovesoftware.com>
            Richard Biener  <rguenther@suse.de>

gcc/ChangeLog
        * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
        left shifts by a constant when the result is truncated, and the
        shift constant is well-defined.
        * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
        support for rotations of signed integer types, by lowering
        using unsigned vector shifts.

gcc/testsuite/ChangeLog
        * gcc.dg/fold-convlshift-4.c: New test case.
        * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
        * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
        * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
        * gcc.dg/vect/vect-over-widen-1.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-3.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
        * gcc.dg/vect/vect-over-widen-4.c: Likewise.


Thanks again,
Roger
--

> -----Original Message-----
> From: Richard Biener <richard.guenther@gmail.com>
> Sent: 02 June 2022 12:03
> To: Roger Sayle <roger@nextmovesoftware.com>
> Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> Subject: Re: [PATCH] Fold truncations of left shifts in match.pd
> 
> On Thu, Jun 2, 2022 at 12:55 PM Roger Sayle <roger@nextmovesoftware.com>
> wrote:
> >
> >
> > Hi Richard,
> > > +  /* RTL expansion knows how to expand rotates using shift/or.  */
> > > + if (icode == CODE_FOR_nothing
> > > +      && (code == LROTATE_EXPR || code == RROTATE_EXPR)
> > > +      && optab_handler (ior_optab, vec_mode) != CODE_FOR_nothing
> > > +      && optab_handler (ashl_optab, vec_mode) != CODE_FOR_nothing)
> > > +    icode = (int) optab_handler (lshr_optab, vec_mode);
> > >
> > > but we then get the vector costing wrong.
> >
> > The issue is that we currently get the (relative) vector costing wrong.
> > Currently for gcc.dg/vect/pr98674.c, the vectorizer thinks the scalar
> > code requires two shifts and an ior, so believes it's profitable to
> > vectorize this loop using two vector shifts and a vector ior.  But
> > once match.pd simplifies the truncate and recognizes the HImode rotate
> > we end up with:
> >
> > pr98674.c:6:16: note:   ==> examining statement: _6 = _1 r>> 8;
> > pr98674.c:6:16: note:   vect_is_simple_use: vectype vector(8) short int
> > pr98674.c:6:16: note:   vect_is_simple_use: operand 8, type of def: constant
> > pr98674.c:6:16: missed:   op not supported by target.
> > pr98674.c:8:33: missed:   not vectorized: relevant stmt not supported: _6 = _1 r>> 8;
> > pr98674.c:6:16: missed:  bad operation or unsupported loop bound.
> >
> >
> > Clearly, it's a win to vectorize HImode rotates, when the backend can
> > perform 8 (or 16) rotations at a time, but using 3 vector instructions,
> > even when a scalar rotate can be performed in a single instruction.
> > Fundamentally, vectorization may still be desirable/profitable even when
> > the backend doesn't provide an optab.
> 
> Yes, as said, it's tree-vect-patterns.cc's job to handle this natively
> unsupported rotate by re-writing it.  Can you check why vect_recog_rotate_pattern does not
> do this?  Ah, the code only handles !TYPE_UNSIGNED (type) - not sure why
> though (for rotates it should not matter and for the lowered sequence we can
> convert to desired signedness to get arithmetic/logical shifts)?
> 
> > The current situation, where the i386 backend provides expanders to
> > lower rotations (or vcond) into individual instruction sequences, also
> > interferes with vector costing.  It's the vector cost function that needs
> > to be fixed, not the generated code made worse (or the backend bloated by
> > performing its own RTL expansion workarounds).
> >
> > Is it instead ok to mark pr98674.c as XFAIL (a regression)?
> > The tweak to tree-vect-stmts.cc was based on the assumption that we
> > wished to continue vectorizing this loop.  Improving scalar code
> > generation really shouldn't disable vectorization like this.
> 
> Yes, see above where the fix needs to be.  The pattern will then expose the shift
> and ior to the vectorizer, where they are properly costed.
> 
> Richard.
> 
> >
> >
> > Cheers,
> > Roger
> > --
> >
> >
  

Comments

Richard Biener June 14, 2022, 1:41 p.m. UTC | #1
On Sun, Jun 5, 2022 at 1:12 PM Roger Sayle <roger@nextmovesoftware.com> wrote:
>
>
> Hi Richard,
> Many thanks for taking the time to explain how vectorization is supposed
> to work.  I now see that vect_recog_rotate_pattern in tree-vect-patterns.cc
> is supposed to handle lowering of rotations to (vector) shifts, and
> completely agree that adding support for signed types (using appropriate
> casts to unsigned_type_for and casting the result back to the original
> signed type) is a better approach to avoid the regression of pr98674.c.
>
> I've also implemented your suggestion of combining the proposed new
> (convert (lshift @1 INTEGER_CST@2)) transformation with the existing one,
> and at the same time added support for folding shifts that are valid for
> the wider type but exceed the precision of the narrower type, such as
> (short)(x << 20), to constant zero.  Although this optimization is already
> performed during the tree-ssa passes, it's convenient to also catch it
> here during constant folding.
>
> This revised patch has been tested on x86_64-pc-linux-gnu with
> make bootstrap and make -k check, both with and without
> --target_board=unix{-m32}, with no new failures.  Ok for mainline?

OK.

Thanks,
Richard.

> 2022-06-05  Roger Sayle  <roger@nextmovesoftware.com>
>             Richard Biener  <rguenther@suse.de>
>
> gcc/ChangeLog
>         * match.pd (convert (lshift @1 INTEGER_CST@2)): Narrow integer
>         left shifts by a constant when the result is truncated, and the
>         shift constant is well-defined.
>         * tree-vect-patterns.cc (vect_recog_rotate_pattern): Add
>         support for rotations of signed integer types, by lowering
>         using unsigned vector shifts.
>
> gcc/testsuite/ChangeLog
>         * gcc.dg/fold-convlshift-4.c: New test case.
>         * gcc.dg/optimize-bswaphi-1.c: Update found bswap count.
>         * gcc.dg/tree-ssa/pr61839_3.c: Shift is now optimized before VRP.
>         * gcc.dg/vect/vect-over-widen-1-big-array.c: Remove obsolete tests.
>         * gcc.dg/vect/vect-over-widen-1.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-3-big-array.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-3.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-4-big-array.c: Likewise.
>         * gcc.dg/vect/vect-over-widen-4.c: Likewise.
>
>
> Thanks again,
> Roger
> --
>
> > -----Original Message-----
> > From: Richard Biener <richard.guenther@gmail.com>
> > Sent: 02 June 2022 12:03
> > To: Roger Sayle <roger@nextmovesoftware.com>
> > Cc: GCC Patches <gcc-patches@gcc.gnu.org>
> > Subject: Re: [PATCH] Fold truncations of left shifts in match.pd
> >
> > On Thu, Jun 2, 2022 at 12:55 PM Roger Sayle <roger@nextmovesoftware.com>
> > wrote:
> > >
> > >
> > > Hi Richard,
> > > > +  /* RTL expansion knows how to expand rotates using shift/or.  */
> > > > + if (icode == CODE_FOR_nothing
> > > > +      && (code == LROTATE_EXPR || code == RROTATE_EXPR)
> > > > +      && optab_handler (ior_optab, vec_mode) != CODE_FOR_nothing
> > > > +      && optab_handler (ashl_optab, vec_mode) != CODE_FOR_nothing)
> > > > +    icode = (int) optab_handler (lshr_optab, vec_mode);
> > > >
> > > > but we then get the vector costing wrong.
> > >
> > > The issue is that we currently get the (relative) vector costing wrong.
> > > Currently for gcc.dg/vect/pr98674.c, the vectorizer thinks the scalar
> > > code requires two shifts and an ior, so believes its profitable to
> > > vectorize this loop using two vector shifts and an vector ior.  But
> > > once match.pd simplifies the truncate and recognizes the HImode rotate we
> > end up with:
> > >
> > > pr98674.c:6:16: note:   ==> examining statement: _6 = _1 r>> 8;
> > > pr98674.c:6:16: note:   vect_is_simple_use: vectype vector(8) short int
> > > pr98674.c:6:16: note:   vect_is_simple_use: operand 8, type of def: constant
> > > pr98674.c:6:16: missed:   op not supported by target.
> > > pr98674.c:8:33: missed:   not vectorized: relevant stmt not supported: _6 = _1
> > r>> 8;
> > > pr98674.c:6:16: missed:  bad operation or unsupported loop bound.
> > >
> > >
> > > Clearly, it's a win to vectorize HImode rotates, when the backend can
> > > perform
> > > 8 (or 16) rotations at a time, but using 3 vector instructions, even
> > > when a scalar rotate can performed in a single instruction.
> > > Fundamentally, vectorization may still be desirable/profitable even when the
> > backend doesn't provide an optab.
> >
> > Yes, as said it's tree-vect-patterns.cc job to handle this not natively supported
> > rotate by re-writing it.  Can you check why vect_recog_rotate_pattern does not
> > do this?  Ah, the code only handles !TYPE_UNSIGNED (type) - not sure why
> > though (for rotates it should not matter and for the lowered sequence we can
> > convert to desired signedness to get arithmetic/logical shifts)?
> >
> > > The current situation where the i386's backend provides expanders to
> > > lower rotations (or vcond) into individual instruction sequences, also interferes
> > with
> > > vector costing.   It's the vector cost function that needs to be fixed, not the
> > > generated code made worse (or the backend bloated performing its own
> > > RTL expansion workarounds).
> > >
> > > Is it instead ok to mark pr98674.c as XFAIL (a regression)?
> > > The tweak to tree-vect-stmts.cc was based on the assumption that we
> > > wished to continue vectorizing this loop.  Improving scalar code
> > > generation really shouldn't disable vectorization like this.
> >
> > Yes, see above where the fix needs to be.  The pattern will then expose the shift
> > and ior to the vectorizer which then are properly costed.
> >
> > Richard.
> >
> > >
> > >
> > > Cheers,
> > > Roger
> > > --
> > >
> > >
  

Patch

diff --git a/gcc/match.pd b/gcc/match.pd
index 2d3ffc4..bbcf9e2 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -3621,17 +3621,18 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
     (if (integer_zerop (@2) || integer_all_onesp (@2))
      (cmp @0 @2)))))
 
-/* Both signed and unsigned lshift produce the same result, so use
-   the form that minimizes the number of conversions.  Postpone this
-   transformation until after shifts by zero have been folded.  */
+/* Narrow a lshift by constant.  */
 (simplify
- (convert (lshift:s@0 (convert:s@1 @2) INTEGER_CST@3))
+ (convert (lshift:s@0 @1 INTEGER_CST@2))
  (if (INTEGRAL_TYPE_P (type)
-      && tree_nop_conversion_p (type, TREE_TYPE (@0))
-      && INTEGRAL_TYPE_P (TREE_TYPE (@2))
-      && TYPE_PRECISION (TREE_TYPE (@2)) <= TYPE_PRECISION (type)
-      && !integer_zerop (@3))
-  (lshift (convert @2) @3)))
+      && INTEGRAL_TYPE_P (TREE_TYPE (@0))
+      && !integer_zerop (@2)
+      && TYPE_PRECISION (type) <= TYPE_PRECISION (TREE_TYPE (@0)))
+  (if (TYPE_PRECISION (type) == TYPE_PRECISION (TREE_TYPE (@0))
+       || wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (type)))
+   (lshift (convert @1) @2)
+   (if (wi::ltu_p (wi::to_wide (@2), TYPE_PRECISION (TREE_TYPE (@0))))
+    { build_zero_cst (type); }))))
 
 /* Simplifications of conversions.  */
 
diff --git a/gcc/tree-vect-patterns.cc b/gcc/tree-vect-patterns.cc
index 0fad4db..8f62486 100644
--- a/gcc/tree-vect-patterns.cc
+++ b/gcc/tree-vect-patterns.cc
@@ -2614,8 +2614,7 @@  vect_recog_rotate_pattern (vec_info *vinfo,
 	  || TYPE_PRECISION (TREE_TYPE (lhs)) != 16
 	  || TYPE_PRECISION (type) <= 16
 	  || TREE_CODE (oprnd0) != SSA_NAME
-	  || BITS_PER_UNIT != 8
-	  || !TYPE_UNSIGNED (TREE_TYPE (lhs)))
+	  || BITS_PER_UNIT != 8)
 	return NULL;
 
       stmt_vec_info def_stmt_info;
@@ -2688,8 +2687,7 @@  vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TYPE_PRECISION (TREE_TYPE (lhs)) != TYPE_PRECISION (type)
-      || !INTEGRAL_TYPE_P (type)
-      || !TYPE_UNSIGNED (type))
+      || !INTEGRAL_TYPE_P (type))
     return NULL;
 
   stmt_vec_info def_stmt_info;
@@ -2745,31 +2743,36 @@  vect_recog_rotate_pattern (vec_info *vinfo,
 	goto use_rotate;
     }
 
+  tree utype = unsigned_type_for (type);
+  tree uvectype = get_vectype_for_scalar_type (vinfo, utype);
+  if (!uvectype)
+    return NULL;
+
   /* If vector/vector or vector/scalar shifts aren't supported by the target,
      don't do anything here either.  */
-  optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_vector);
-  optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_vector);
+  optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_vector);
+  optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_vector);
   if (!optab1
-      || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+      || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
       || !optab2
-      || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+      || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
     {
       if (! is_a <bb_vec_info> (vinfo) && dt == vect_internal_def)
 	return NULL;
-      optab1 = optab_for_tree_code (LSHIFT_EXPR, vectype, optab_scalar);
-      optab2 = optab_for_tree_code (RSHIFT_EXPR, vectype, optab_scalar);
+      optab1 = optab_for_tree_code (LSHIFT_EXPR, uvectype, optab_scalar);
+      optab2 = optab_for_tree_code (RSHIFT_EXPR, uvectype, optab_scalar);
       if (!optab1
-	  || optab_handler (optab1, TYPE_MODE (vectype)) == CODE_FOR_nothing
+	  || optab_handler (optab1, TYPE_MODE (uvectype)) == CODE_FOR_nothing
 	  || !optab2
-	  || optab_handler (optab2, TYPE_MODE (vectype)) == CODE_FOR_nothing)
+	  || optab_handler (optab2, TYPE_MODE (uvectype)) == CODE_FOR_nothing)
 	return NULL;
     }
 
   *type_out = vectype;
 
-  if (bswap16_p && !useless_type_conversion_p (type, TREE_TYPE (oprnd0)))
+  if (!useless_type_conversion_p (utype, TREE_TYPE (oprnd0)))
     {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
       def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd0);
       append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
       oprnd0 = def;
@@ -2779,7 +2782,7 @@  vect_recog_rotate_pattern (vec_info *vinfo,
     ext_def = vect_get_external_def_edge (vinfo, oprnd1);
 
   def = NULL_TREE;
-  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (type);
+  scalar_int_mode mode = SCALAR_INT_TYPE_MODE (utype);
   if (dt != vect_internal_def || TYPE_MODE (TREE_TYPE (oprnd1)) == mode)
     def = oprnd1;
   else if (def_stmt && gimple_assign_cast_p (def_stmt))
@@ -2793,7 +2796,7 @@  vect_recog_rotate_pattern (vec_info *vinfo,
 
   if (def == NULL_TREE)
     {
-      def = vect_recog_temp_ssa_var (type, NULL);
+      def = vect_recog_temp_ssa_var (utype, NULL);
       def_stmt = gimple_build_assign (def, NOP_EXPR, oprnd1);
       append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
     }
@@ -2839,13 +2842,13 @@  vect_recog_rotate_pattern (vec_info *vinfo,
 	append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt, vecstype);
     }
 
-  var1 = vect_recog_temp_ssa_var (type, NULL);
+  var1 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var1, rhs_code == LROTATE_EXPR
 					? LSHIFT_EXPR : RSHIFT_EXPR,
 				  oprnd0, def);
   append_pattern_def_seq (vinfo, stmt_vinfo, def_stmt);
 
-  var2 = vect_recog_temp_ssa_var (type, NULL);
+  var2 = vect_recog_temp_ssa_var (utype, NULL);
   def_stmt = gimple_build_assign (var2, rhs_code == LROTATE_EXPR
 					? RSHIFT_EXPR : LSHIFT_EXPR,
 				  oprnd0, def2);
@@ -2855,9 +2858,15 @@  vect_recog_rotate_pattern (vec_info *vinfo,
   vect_pattern_detected ("vect_recog_rotate_pattern", last_stmt);
 
   /* Pattern supported.  Create a stmt to be used to replace the pattern.  */
-  var = vect_recog_temp_ssa_var (type, NULL);
+  var = vect_recog_temp_ssa_var (utype, NULL);
   pattern_stmt = gimple_build_assign (var, BIT_IOR_EXPR, var1, var2);
 
+  if (!useless_type_conversion_p (type, utype))
+    {
+      append_pattern_def_seq (vinfo, stmt_vinfo, pattern_stmt);
+      tree result = vect_recog_temp_ssa_var (type, NULL);
+      pattern_stmt = gimple_build_assign (result, NOP_EXPR, var);
+    }
   return pattern_stmt;
 }
 
diff --git a/gcc/testsuite/gcc.dg/fold-convlshift-4.c b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
new file mode 100644
index 0000000..001627f
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/fold-convlshift-4.c
@@ -0,0 +1,9 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fdump-tree-optimized" } */
+short foo(short x)
+{
+  return x << 5;
+}
+
+/* { dg-final { scan-tree-dump-not "\\(int\\)" "optimized" } } */
+/* { dg-final { scan-tree-dump-not "\\(short int\\)" "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
index d045da9..a5d8bfd 100644
--- a/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
+++ b/gcc/testsuite/gcc.dg/optimize-bswaphi-1.c
@@ -68,4 +68,4 @@  get_unaligned_16_be (unsigned char *p)
 
 
 /* { dg-final { scan-tree-dump-times "16 bit load in target endianness found at" 4 "bswap" } } */
-/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 5 "bswap" } } */
+/* { dg-final { scan-tree-dump-times "16 bit bswap implementation found at" 4 "bswap" } } */
diff --git a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
index bc2126f..38cf792 100644
--- a/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
+++ b/gcc/testsuite/gcc.dg/tree-ssa/pr61839_3.c
@@ -1,6 +1,6 @@ 
 /* PR tree-optimization/61839.  */
 /* { dg-do run } */
-/* { dg-options "-O2 -fdump-tree-vrp -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
+/* { dg-options "-O2 -fdump-tree-optimized -fdisable-tree-ethread -fdisable-tree-threadfull1" } */
 
 __attribute__ ((noinline))
 int foo (int a, unsigned b)
@@ -21,6 +21,4 @@  int main ()
   foo (-1, b);
 }
 
-/* Scan for c [12, 13] << 8 in function foo.  */
-/* { dg-final { scan-tree-dump-times "3072 : 3328" 1  "vrp1" } } */
 /* { dg-final { scan-tree-dump-times "3072" 0  "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
index 9e5f464..9a5141ee 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1-big-array.c
@@ -58,9 +58,7 @@  int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
index c2d0797..f2d284c 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-1.c
@@ -62,9 +62,7 @@  int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
index 37da7c9..6f89aac 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3-big-array.c
@@ -59,9 +59,7 @@  int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
index 4138480..a1e1182 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-3.c
@@ -57,9 +57,7 @@  int main (void)
   return 0;
 }
 
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 8} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 9} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
index 514337c..03a6e67 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4-big-array.c
@@ -62,9 +62,7 @@  int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */
 
diff --git a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
index 3d536d5..0ef377f 100644
--- a/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
+++ b/gcc/testsuite/gcc.dg/vect/vect-over-widen-4.c
@@ -66,9 +66,7 @@  int main (void)
 }
 
 /* { dg-final { scan-tree-dump-times "vect_recog_widen_shift_pattern: detected" 2 "vect" { target vect_widen_shift } } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 3} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 3} "vect" } } */
-/* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* << 8} "vect" } } */
 /* { dg-final { scan-tree-dump {vect_recog_over_widening_pattern: detected:[^\n]* >> 5} "vect" } } */
 /* { dg-final { scan-tree-dump-times "vectorized 1 loops" 1 "vect" } } */