[COMMITTED,range-ops] Update known bitmasks using CCP for all operators.

Message ID 20221111135318.235387-3-aldyh@redhat.com
State Committed
Commit c16c40808331a02947b1ad962e85e1b40e30a707
Series [COMMITTED,range-ops] Update known bitmasks using CCP for all operators.

Commit Message

Aldy Hernandez Nov. 11, 2022, 1:53 p.m. UTC
Use bit-CCP to calculate bitmasks for all integer operators, instead
of the half-assed job we were doing with just a handful of operators.

This sets us up nicely for tracking known-one bitmasks in the next
release, as all we'll have to do is store them in the irange.
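
For illustration, here is a minimal standalone sketch of the bit-CCP
idea (toy types; not the GCC API).  Each value carries a (value, mask)
pair where a set mask bit means "unknown", and each operator combines
the pairs; the AND rule below follows the formula bit_value_binop uses
for BIT_AND_EXPR.

#include <cstdint>

/* Toy stand-in for CCP's lattice value: VALUE holds the known bit
   values; a 1 in MASK means that bit is unknown (varying).  */
struct ccp_bits
{
  uint64_t value;
  uint64_t mask;
};

/* Known bits of (a & b).  A result bit is known zero as soon as
   either operand has it known zero; it remains unknown only where
   some operand is unknown and neither operand pins it to zero.  */
static ccp_bits
ccp_bit_and (ccp_bits a, ccp_bits b)
{
  ccp_bits r;
  r.value = a.value & b.value;
  r.mask = (a.mask | b.mask) & (a.value | a.mask) & (b.value | b.mask);
  return r;
}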

All in all, this series of patches incurs a 1.9% penalty in VRP, with
no measurable difference in overall compile time.  The reason is
threefold:

(a) There's double dispatch going on.  First, the dispatch through the
range-ops virtuals, and now the switch in bit_value_binop (see the
sketch after this list).

(b) The maybe-nonzero mask is stored as a tree, and there is an endless
back and forth with wide-ints.  This will be a non-issue next release,
when we convert irange to wide-ints.

(c) New functionality has a cost.  We were handling 2 cases (plus
casts).  Now we handle 20.

I can play around with moving the bit_value_binop cases into inlined
methods in the different range-op entries and see if that improves
anything, but I doubt fixing (a) buys us that much.  It's certainly
something that can be done in stage3 if the overhead is measurable in
any significant way.

P.S. It would be nice in the future to teach the op[12]_range methods
about the masks.

gcc/ChangeLog:

	* range-op.cc (range_operator::fold_range): Call
	update_known_bitmask.
	(operator_bitwise_and::fold_range): Avoid setting nonzero bits
	when range is undefined.
---
 gcc/range-op.cc | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)
  

Patch

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 00a736e983d..9eec46441a3 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -245,6 +245,7 @@  range_operator::fold_range (irange &r, tree type,
       wi_fold_in_parts (r, type, lh.lower_bound (), lh.upper_bound (),
 			rh.lower_bound (), rh.upper_bound ());
       op1_op2_relation_effect (r, type, lh, rh, rel);
+      update_known_bitmask (r, m_code, lh, rh);
       return true;
     }
 
@@ -262,10 +263,12 @@  range_operator::fold_range (irange &r, tree type,
 	if (r.varying_p ())
 	  {
 	    op1_op2_relation_effect (r, type, lh, rh, rel);
+	    update_known_bitmask (r, m_code, lh, rh);
 	    return true;
 	  }
       }
   op1_op2_relation_effect (r, type, lh, rh, rel);
+  update_known_bitmask (r, m_code, lh, rh);
   return true;
 }
 
@@ -2873,7 +2876,7 @@  operator_bitwise_and::fold_range (irange &r, tree type,
 {
   if (range_operator::fold_range (r, type, lh, rh))
     {
-      if (!lh.undefined_p () && !rh.undefined_p ())
+      if (!r.undefined_p () && !lh.undefined_p () && !rh.undefined_p ())
 	r.set_nonzero_bits (wi::bit_and (lh.get_nonzero_bits (),
 					 rh.get_nonzero_bits ()));
       return true;
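
As a side note on the operator_bitwise_and hunk: an undefined range
carries no bit information, so the new !r.undefined_p () check keeps
fold_range from attaching a nonzero-bits mask to an undefined result.
A standalone sketch of the invariant (toy types; not the GCC API):

#include <cstdint>
#include <optional>

/* Toy stand-in for an irange: nullopt models an undefined range and
   the payload models the maybe-nonzero mask (1 = bit may be set).  */
typedef std::optional<uint64_t> toy_range;

static void
toy_and_nonzero_bits (toy_range &r, const toy_range &lh,
		      const toy_range &rh)
{
  /* Mirror the guarded call to set_nonzero_bits: only record a mask
     when the result and both operands are defined.  A bit can be set
     in X & Y only if it can be set in both operands.  */
  if (r && lh && rh)
    *r = *lh & *rh;
}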