[committed] range-cache: Fix ranger ICE if number of bbs increases [PR116899]
Hi!
The ranger cache during initialization reserves one slot per basic block
in the m_workback vector (in an inefficient way, by doing create (0),
then safe_grow_cleared (n) and truncate (0), rather than, say, create (n)
or reserve (n) after create).  The problem is that some passes split bbs
and/or create new basic blocks, so when one is unlucky, a quick_push into
that vector can ICE.
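For context, here is a minimal standalone sketch (not GCC's actual vec
implementation, the names merely mirror its API) of the difference between
the two push variants: quick_push assumes capacity was reserved up front,
while safe_push grows the buffer on demand.

  #include <cassert>
  #include <cstdlib>

  /* Toy model of the two push variants, for illustration only.  */
  struct toy_vec
  {
    int *data = nullptr;
    unsigned num = 0, alloc = 0;

    /* quick_push: the caller must have reserved space already; with
       checking enabled this asserts, otherwise the write runs off the
       end of the allocation.  */
    void quick_push (int x) { assert (num < alloc); data[num++] = x; }

    /* safe_push: grow the allocation first if needed, then push.  */
    void safe_push (int x)
    {
      if (num == alloc)
        {
          alloc = alloc ? 2 * alloc : 4;
          data = static_cast<int *> (std::realloc (data,
                                                   alloc * sizeof (int)));
        }
      data[num++] = x;
    }
  };

  int
  main ()
  {
    toy_vec v;
    for (int i = 0; i < 100; i++)
      v.safe_push (i);  /* Fine: reallocates as needed.  */
    /* A quick_push here would only be valid if capacity had been
       reserved for it in advance; m_workback was sized to the number
       of basic blocks at construction time, so blocks created later
       can exceed that capacity, which is the failure mode in the PR.  */
    std::free (v.data);
    return 0;
  }
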
The following patch replaces those 4 quick_push calls with safe_push.
I've also gathered some statistics from compiling 63 gcc sources (I picked
the ones that depend on gimple-range-cache.h just because I had to rebuild
them once for the instrumentation anyway), and that showed that in 81% of
cases nothing was pushed into the vector at all (and note, not everything
was small, there were even cases with 10000+ basic blocks), another 18.5%
of cases had at most 1-4 elements in the vector, 0.08% had 5-8 and 19 out
of 305386 cases had at most 9-11 elements, nothing more.  So, IMHO,
reserving one slot per basic block in the vector is a waste of compile
time memory, and with the safe_push calls the patch just initializes the
vector to vNULL.
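As an aside, the kind of instrumentation such a measurement needs can be
sketched as follows (a hypothetical, standalone illustration; the actual
counters used for the numbers above are not part of this patch): wrap the
worklist, track its high-water mark and report it when the worklist dies.

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  /* Hypothetical high-water-mark counter, for illustration only.  */
  struct counted_worklist
  {
    std::vector<int> v;
    std::size_t high_water = 0;

    void push (int x)
    {
      v.push_back (x);
      high_water = std::max (high_water, v.size ());
    }

    ~counted_worklist ()
    { std::fprintf (stderr, "max worklist length: %zu\n", high_water); }
  };

  int
  main ()
  {
    counted_worklist wl;
    for (int i = 0; i < 3; i++)
      wl.push (i);  /* Reports "max worklist length: 3" on exit.  */
    return 0;
  }
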
Bootstrapped/regtested on x86_64-linux and i686-linux, preapproved in the PR
by Andrew, committed to trunk.
2024-10-01 Jakub Jelinek <jakub@redhat.com>
PR middle-end/116899
* gimple-range-cache.cc (ranger_cache::ranger_cache): Set m_workback
to vNULL instead of creating it, growing and then truncating.
(ranger_cache::fill_block_cache): Use safe_push rather than quick_push
on m_workback.
(ranger_cache::range_from_dom): Likewise.
* gcc.dg/bitint-111.c: New test.
Jakub
@@ -997,9 +997,7 @@ update_list::pop ()
ranger_cache::ranger_cache (int not_executable_flag, bool use_imm_uses)
{
- m_workback.create (0);
- m_workback.safe_grow_cleared (last_basic_block_for_fn (cfun));
- m_workback.truncate (0);
+ m_workback = vNULL;
m_temporal = new temporal_cache;
// If DOM info is available, spawn an oracle as well.
@@ -1560,7 +1558,7 @@ ranger_cache::fill_block_cache (tree nam
// Visit each block back to the DEF. Initialize each one to UNDEFINED.
// m_visited at the end will contain all the blocks that we needed to set
// the range_on_entry cache for.
- m_workback.quick_push (bb);
+ m_workback.safe_push (bb);
undefined.set_undefined ();
m_on_entry.set_bb_range (name, bb, undefined);
gcc_checking_assert (m_update->empty_p ());
@@ -1634,7 +1632,7 @@ ranger_cache::fill_block_cache (tree nam
// the list.
gcc_checking_assert (!m_on_entry.bb_range_p (name, pred));
m_on_entry.set_bb_range (name, pred, undefined);
- m_workback.quick_push (pred);
+ m_workback.safe_push (pred);
}
}
@@ -1729,7 +1727,7 @@ ranger_cache::range_from_dom (vrange &r,
// This block has an outgoing range.
if (gori ().has_edge_range_p (name, bb))
- m_workback.quick_push (prev_bb);
+ m_workback.safe_push (prev_bb);
else
{
// Normally join blocks don't carry any new range information on
@@ -1753,7 +1751,7 @@ ranger_cache::range_from_dom (vrange &r,
break;
}
if (all_dom)
- m_workback.quick_push (prev_bb);
+ m_workback.safe_push (prev_bb);
}
}
@@ -0,0 +1,16 @@
+/* PR middle-end/116899 */
+/* { dg-do compile { target bitint575 } } */
+/* { dg-options "-O2" } */
+
+float f;
+_BitInt(255) b;
+
+void
+foo (signed char c)
+{
+ for (;;)
+ {
+ c %= (unsigned _BitInt(512)) 0; /* { dg-warning "division by zero" } */
+ f /= b >= c;
+ }
+}