From patchwork Fri May 13 13:48:11 2022
X-Patchwork-Submitter: Andrew MacLeod
X-Patchwork-Id: 53946
Message-ID: <5f41caee-47cd-2599-b5b7-63e9d459c730@redhat.com>
Date: Fri, 13 May 2022 09:48:11 -0400
To: gcc-patches
Subject: [COMMITTED] Make range_from_dom more effective.
From: Andrew MacLeod

This patch enhances how ranger's cache picks up ranges from dominators.

The previous version was fairly basic: it simply walked back up the DOM tree
until it ran into a "complicated" situation (for instance, a dominator with
multiple successors), then fell back to the non-dominator-based propagator.

This patch makes the walk re-entrant, lets it handle multiple successors, and
updates the cache more precisely for future queries.  It also handles other
situations so that a query can always be resolved, and adds query modes that
give the cache finer control over when updates happen, making it more usable
in read-only contexts.

Bootstraps on x86_64-pc-linux-gnu with no regressions.  Pushed.

Andrew

commit 6b156044c12bc4582511fe270b10450c943476dd
Author: Andrew MacLeod
Date:   Thu Mar 24 15:28:43 2022 -0400

    Make range_from_dom more effective.

    Add modes to range_from_dom such that we can simply query, or adjust
    the cache and deal with multiple predecessor blocks.

	* gimple-range-cache.cc (ranger_cache::ranger_cache): Start with
	worklist truncated.
	(ranger_cache::entry_range): Add rfd_mode parameter.
	(ranger_cache::exit_range): Ditto.
	(ranger_cache::edge_range): New.  Incorporate from range_on_edge.
	(ranger_cache::range_of_expr): Adjust call to entry_range.
	(ranger_cache::range_on_edge): Split to edge_range and call.
	(ranger_cache::fill_block_cache): Always invoke range_from_dom.
	(ranger_cache::range_from_dom): Make reentrant, add search mode,
	handle multiple predecessors.
	(ranger_cache::update_to_nonnull): Adjust call to exit_range.
	* gimple-range-cache.h (ranger_cache): Add enum rfd_mode.  Adjust
	prototypes.

diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 421ea1a20ef..bdb30460345 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -864,6 +864,7 @@ ranger_cache::ranger_cache (int not_executable_flag)
 {
   m_workback.create (0);
   m_workback.safe_grow_cleared (last_basic_block_for_fn (cfun));
+  m_workback.truncate (0);
   m_temporal = new temporal_cache;
   // If DOM info is available, spawn an oracle as well.
   if (dom_info_available_p (CDI_DOMINATORS))
@@ -1008,10 +1009,12 @@ ranger_cache::range_of_def (irange &r, tree name, basic_block bb)
     }
 }
 
-// Get the range of NAME as it occurs on entry to block BB.
+// Get the range of NAME as it occurs on entry to block BB.  Use MODE for
+// lookups.
 
 void
-ranger_cache::entry_range (irange &r, tree name, basic_block bb)
+ranger_cache::entry_range (irange &r, tree name, basic_block bb,
+			   enum rfd_mode mode)
 {
   if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     {
@@ -1022,13 +1025,16 @@ ranger_cache::entry_range (irange &r, tree name, basic_block bb)
   // Look for the on-entry value of name in BB from the cache.
   // Otherwise pick up the best available global value.
   if (!m_on_entry.get_bb_range (r, name, bb))
-    range_of_def (r, name);
+    if (!range_from_dom (r, name, bb, mode))
+      range_of_def (r, name);
 }
 
-// Get the range of NAME as it occurs on exit from block BB.
+// Get the range of NAME as it occurs on exit from block BB.  Use MODE for
+// lookups.
 
 void
-ranger_cache::exit_range (irange &r, tree name, basic_block bb)
+ranger_cache::exit_range (irange &r, tree name, basic_block bb,
+			  enum rfd_mode mode)
 {
   if (bb == ENTRY_BLOCK_PTR_FOR_FN (cfun))
     {
@@ -1041,8 +1047,25 @@ ranger_cache::exit_range (irange &r, tree name, basic_block bb)
   if (def_bb == bb)
     range_of_def (r, name, bb);
   else
-    entry_range (r, name, bb);
- }
+    entry_range (r, name, bb, mode);
+}
+
+// Get the range of NAME on edge E using MODE, return the result in R.
+// Always returns a range and true.
+
+bool
+ranger_cache::edge_range (irange &r, edge e, tree name, enum rfd_mode mode)
+{
+  exit_range (r, name, e->src, mode);
+  // If this is not an abnormal edge, check for a non-null exit.
+  if ((e->flags & (EDGE_EH | EDGE_ABNORMAL)) == 0)
+    m_non_null.adjust_range (r, name, e->src, false);
+  int_range_max er;
+  if (m_gori.outgoing_edge_range_p (er, e, name, *this))
+    r.intersect (er);
+  return true;
+}
+
 
 // Implement range_of_expr.
 
@@ -1063,32 +1086,22 @@ ranger_cache::range_of_expr (irange &r, tree name, gimple *stmt)
   if (bb == def_bb)
     range_of_def (r, name, bb);
   else
-    entry_range (r, name, bb);
+    entry_range (r, name, bb, RFD_NONE);
   return true;
 }
 
-// Implement range_on_edge.  Always return the best available range.
-
-bool
-ranger_cache::range_on_edge (irange &r, edge e, tree expr)
-{
-  if (gimple_range_ssa_p (expr))
-    {
-      exit_range (r, expr, e->src);
-      // If this is not an abnormal edge, check for a non-null exit.
-      if ((e->flags & (EDGE_EH | EDGE_ABNORMAL)) == 0)
-	m_non_null.adjust_range (r, expr, e->src, false);
-      int_range_max edge_range;
-      if (m_gori.outgoing_edge_range_p (edge_range, e, expr, *this))
-	r.intersect (edge_range);
-      return true;
-    }
+// Implement range_on_edge.  Always return the best available range using
+// the current cache values.
+bool
+ranger_cache::range_on_edge (irange &r, edge e, tree expr)
+{
+  if (gimple_range_ssa_p (expr))
+    return edge_range (r, e, expr, RFD_NONE);
   return get_tree_range (r, expr, NULL);
 }
 
-
 // Return a static range for NAME on entry to basic block BB in R.  If
 // calc is true, fill any cache entries required between BB and the
 // def block for NAME.  Otherwise, return false if the cache is empty.
@@ -1281,20 +1294,12 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
   // At this point we shouldn't be looking at the def, entry or exit block.
   gcc_checking_assert (bb != def_bb && bb != ENTRY_BLOCK_PTR_FOR_FN (cfun)
		       && bb != EXIT_BLOCK_PTR_FOR_FN (cfun));
+  gcc_checking_assert (m_workback.length () == 0);
 
   // If the block cache is set, then we've already visited this block.
   if (m_on_entry.bb_range_p (name, bb))
     return;
 
-  // Visit each block back to the DEF.  Initialize each one to UNDEFINED.
-  // m_visited at the end will contain all the blocks that we needed to set
-  // the range_on_entry cache for.
-  m_workback.truncate (0);
-  m_workback.quick_push (bb);
-  undefined.set_undefined ();
-  m_on_entry.set_bb_range (name, bb, undefined);
-  gcc_checking_assert (m_update->empty_p ());
-
   if (DEBUG_RANGE_CACHE)
     {
       fprintf (dump_file, "\n");
@@ -1302,9 +1307,8 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
       fprintf (dump_file, " : ");
     }
 
-  // If there are dominators, check if a dominators can supply the range.
-  if (dom_info_available_p (CDI_DOMINATORS)
-      && range_from_dom (block_result, name, bb))
+  // Check if a dominator can supply the range.
+  if (range_from_dom (block_result, name, bb, RFD_FILL))
     {
       m_on_entry.set_bb_range (name, bb, block_result);
       if (DEBUG_RANGE_CACHE)
@@ -1313,9 +1317,18 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
	  block_result.dump (dump_file);
	  fprintf (dump_file, "\n");
	}
+      gcc_checking_assert (m_workback.length () == 0);
       return;
     }
 
+  // Visit each block back to the DEF.  Initialize each one to UNDEFINED.
+  // m_visited at the end will contain all the blocks that we needed to set
+  // the range_on_entry cache for.
+  m_workback.quick_push (bb);
+  undefined.set_undefined ();
+  m_on_entry.set_bb_range (name, bb, undefined);
+  gcc_checking_assert (m_update->empty_p ());
+
   while (m_workback.length () > 0)
     {
       basic_block node = m_workback.pop ();
@@ -1399,12 +1412,14 @@ ranger_cache::fill_block_cache (tree name, basic_block bb, basic_block def_bb)
 }
 
-// Get the range of NAME from dominators of BB and return it in R.
+// Get the range of NAME from dominators of BB and return it in R.  Search the
+// dominator tree based on MODE.
 
 bool
-ranger_cache::range_from_dom (irange &r, tree name, basic_block start_bb)
+ranger_cache::range_from_dom (irange &r, tree name, basic_block start_bb,
+			      enum rfd_mode mode)
 {
-  if (!dom_info_available_p (CDI_DOMINATORS))
+  if (mode == RFD_NONE || !dom_info_available_p (CDI_DOMINATORS))
     return false;
 
   // Search back to the definition block or entry block.
@@ -1419,7 +1434,7 @@ ranger_cache::range_from_dom (irange &r, tree name, basic_block start_bb)
   // Range on entry to the DEF block should not be queried.
   gcc_checking_assert (start_bb != def_bb);
 
-  m_workback.truncate (0);
+  unsigned start_limit = m_workback.length ();
 
   // Default value is global range.
   get_global_range (r, name);
@@ -1436,10 +1451,36 @@ ranger_cache::range_from_dom (irange &r, tree name, basic_block start_bb)
       if (m_gori.has_edge_range_p (name, bb))
	{
	  // Only outgoing ranges to single_pred blocks are dominated by
-	  // outgoing edge ranges, so only those need to be considered.
+	  // outgoing edge ranges, so those can be simply adjusted on the fly.
	  edge e = find_edge (bb, prev_bb);
	  if (e && single_pred_p (prev_bb))
	    m_workback.quick_push (prev_bb);
+	  else if (mode == RFD_FILL)
+	    {
+	      // Multiple incoming edges, so recursively satisfy this block,
+	      // store the range, then calculate the incoming range for PREV_BB.
+	      if (def_bb != bb)
+		{
+		  range_from_dom (r, name, bb, RFD_FILL);
+		  // If the range can't be stored, don't try to accumulate
+		  // the range in PREV_BB due to excessive recalculations.
+		  if (!m_on_entry.set_bb_range (name, bb, r))
+		    break;
+		}
+	      // With the dominator set, we should be able to cheaply query
+	      // each incoming edge now and accumulate the results.
+	      r.set_undefined ();
+	      edge_iterator ei;
+	      int_range_max er;
+	      FOR_EACH_EDGE (e, ei, prev_bb->preds)
+		{
+		  edge_range (er, e, name, RFD_READ_ONLY);
+		  r.union_ (er);
+		}
+	      // Set the cache in PREV_BB so it is not calculated again.
+	      m_on_entry.set_bb_range (name, prev_bb, r);
+	      break;
+	    }
	}
 
       if (def_bb == bb)
@@ -1460,16 +1501,16 @@ ranger_cache::range_from_dom (irange &r, tree name, basic_block start_bb)
     }
 
   // Now process any outgoing edges that we seen along the way.
-  while (m_workback.length () > 0)
+  while (m_workback.length () > start_limit)
     {
-      int_range_max edge_range;
+      int_range_max er;
       prev_bb = m_workback.pop ();
       edge e = single_pred_edge (prev_bb);
       bb = e->src;
 
-      if (m_gori.outgoing_edge_range_p (edge_range, e, name, *this))
+      if (m_gori.outgoing_edge_range_p (er, e, name, *this))
	{
-	  r.intersect (edge_range);
+	  r.intersect (er);
	  if (r.varying_p () && ((e->flags & (EDGE_EH | EDGE_ABNORMAL)) == 0))
	    {
	      if (m_non_null.non_null_deref_p (name, bb, false))
@@ -1518,7 +1559,7 @@ ranger_cache::update_to_nonnull (basic_block bb, tree name)
       // Update the on-entry cache for BB to be non-zero.  Note this can set
       // the on entry value in the DEF block, which can override the def.
       int_range_max r;
-      exit_range (r, name, bb);
+      exit_range (r, name, bb, RFD_READ_ONLY);
       if (r.varying_p ())
	{
	  r.set_nonzero (type);
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index a0244e4f6a4..560403b534e 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -129,7 +129,6 @@ public:
   virtual bool range_of_expr (irange &r, tree name, gimple *stmt);
   virtual bool range_on_edge (irange &r, edge e, tree expr);
   bool block_range (irange &r, basic_block bb, tree name, bool calc = true);
-  bool range_from_dom (irange &r, tree name, basic_block bb);
   bool get_global_range (irange &r, tree name) const;
   bool get_global_range (irange &r, tree name, bool &current_p);
@@ -151,9 +150,17 @@ private:
   void fill_block_cache (tree name, basic_block bb, basic_block def_bb);
   void propagate_cache (tree name);
 
+  enum rfd_mode
+  {
+    RFD_NONE,		// Only look at current block cache.
+    RFD_READ_ONLY,	// Scan DOM tree, do not write to cache.
+    RFD_FILL		// Scan DOM tree, updating important nodes.
+  };
+  bool range_from_dom (irange &r, tree name, basic_block bb, enum rfd_mode);
   void range_of_def (irange &r, tree name, basic_block bb = NULL);
-  void entry_range (irange &r, tree expr, basic_block bb);
-  void exit_range (irange &r, tree expr, basic_block bb);
+  void entry_range (irange &r, tree expr, basic_block bb, enum rfd_mode);
+  void exit_range (irange &r, tree expr, basic_block bb, enum rfd_mode);
+  bool edge_range (irange &r, edge e, tree name, enum rfd_mode);
   vec<basic_block> m_workback;
   class update_list *m_update;