Message ID | 65928812-1ff1-7e69-3b1e-7ca62e09cc79@redhat.com |
---|---|
State | New |
Series | PR tree-optimization/103821 - Prevent exponential range calculations. |
Commit Message
Andrew MacLeod
Jan. 10, 2022, 11:27 p.m. UTC
This test case demonstrates a previously unnoticed exponential situation in range-ops. We end up unrolling the loop, and the pattern of code creates a set of cascading multiplies which we can evaluate precisely with sub-ranges.

For instance, we calculated:

_38 = int [8192, 8192][24576, 24576][40960, 40960][57344, 57344]

so _38 has 4 sub-ranges. We then calculate:

_39 = _38 * _38;

performing 16 sub-range multiplications and ending up with:

int [67108864, 67108864][201326592, 201326592][335544320, 335544320][469762048, 469762048][603979776, 603979776][1006632960, 1006632960][1409286144, 1409286144][1677721600, 1677721600][+INF, +INF]

This feeds other multiplies (_39 * _39) and rapidly blows up the number of sub-ranges in subsequent operations.

Folding sub-ranges is an O(n*m) process: we perform the operation on each pair of sub-ranges and union the results. Values like _38 * _38 that keep feeding each other quickly become exponential, and since union is an inherently linear operation over the number of sub-ranges, each step adds an additional quadratic cost on top of the exponential factor.

This patch adjusts the wi_fold routine to recognize when the calculation is moving in an exponential direction and simply produce a summary result instead of a precise one. The attached patch does this if (#LH sub-ranges * #RH sub-ranges > 12); it then just performs the operation with the overall lower and upper bounds instead. We could choose a different number, but that one seems to keep things under control, and it still allows us to process up to a 3x4 operation precisely (there is a testcase in the testsuite for this combination, gcc.dg/tree-ssa/pr61839_2.c).

Longer term, we might want to adjust this routine to be slightly smarter than that, but this is a virtually zero-risk solution this late in the release cycle.
This is also a general ~1% speedup in the VRP2 pass across 380 GCC source files, and I'm sure it has much more dramatic results at -O3, which this testcase exposes.

Bootstraps on x86_64-pc-linux-gnu with no regressions. OK for trunk?

Andrew
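The blow-up described above can be sketched outside GCC. The following stand-alone model (hypothetical `subrange`/`range` types, not GCC's irange API) multiplies every pair of sub-ranges and unions the results, exactly the O(n*m)-then-union pattern the message describes; small stand-in values replace the 8192/24576/... constants to avoid 64-bit overflow when squaring twice.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// A value range modeled as a sorted union of disjoint [lo, hi] sub-ranges.
using subrange = std::pair<int64_t, int64_t>;
using range = std::vector<subrange>;

// Union step: sort and merge overlapping or adjacent sub-ranges.
// This is the inherently linear pass done after every fold.
static range range_union (range r)
{
  std::sort (r.begin (), r.end ());
  range out;
  for (const subrange &s : r)
    if (!out.empty () && s.first <= out.back ().second + 1)
      out.back ().second = std::max (out.back ().second, s.second);
    else
      out.push_back (s);
  return out;
}

// Exact fold of a * b: one multiplication per pair of sub-ranges,
// i.e. O(n*m) work before the union even starts.
static range mul_exact (const range &a, const range &b)
{
  range out;
  for (const subrange &x : a)
    for (const subrange &y : b)
      {
        int64_t p[4] = { x.first * y.first, x.first * y.second,
                         x.second * y.first, x.second * y.second };
        out.push_back ({ *std::min_element (p, p + 4),
                         *std::max_element (p, p + 4) });
      }
  return range_union (out);
}
```

Starting from four singleton sub-ranges (a scaled-down stand-in for _38), squaring once yields 10 sub-ranges and squaring again yields 35, leaving 35 * 35 = 1225 pair multiplications due on the very next step; this is the growth the patch cuts off.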
Comments
On Tue, Jan 11, 2022 at 12:28 AM Andrew MacLeod via Gcc-patches <gcc-patches@gcc.gnu.org> wrote:
>
> This test case demonstrates an unnoticed exponential situation in range-ops.
>
> We end up unrolling the loop, and the pattern of code creates a set of
> cascading multiplies for which we can precisely evaluate them with
> sub-ranges.
>
> For instance, we calculated :
>
> _38 = int [8192, 8192][24576, 24576][40960, 40960][57344, 57344]
>
> so _38 has 4 sub-ranges, and then we calculate:
>
> _39 = _38 * _38;
>
> we do 16 sub-range multiplications and end up with: int [67108864,
> 67108864][201326592, 201326592][335544320, 335544320][469762048,
> 469762048][603979776, 603979776][1006632960, 1006632960][1409286144,
> 1409286144][1677721600, 1677721600][+INF, +INF]
>
> This feeds other multiplies (_39 * _39) and progresses rapidly to blow
> up the number of sub-ranges in subsequent operations.
>
> Folding of sub-ranges is an O(n*m) process. We perform the operation on
> each pair of sub-ranges and union them. Values like _38 * _38 that
> continue feeding each other quickly become exponential.
>
> Then combining that with union (an inherently linear operation over the
> number of sub-ranges) at each step of the way adds an additional
> quadratic operation on top of the exponential factor.
>
> This patch adjusts the wi_fold routine to recognize when the calculation
> is moving in an exponential direction, simply produce a summary result
> instead of a precise one. The attached patch does this if (#LH
> sub-ranges * #RH sub-ranges > 12)... then it just performs the operation
> with the lower and upper bound instead. We could choose a different
> number, but that one seems to keep things under control, and allows us
> to process up to a 3x4 operation for precision (there is a testcase in
> the testsuite for this combination gcc.dg/tree-ssa/pr61839_2.c).
>
> Longer term, we might want adjust this routine to be slightly smarter
> than that, but this is a virtually zero-risk solution this late in the
> release cycle.

I'm not sure we can do smarter in a good way, other than maybe having
a range helper that reduces an N component range to M components
while maintaining as much precision as possible. Like for [1, 1] u [3, 3]
u [100, 100] and requesting at most 2 elements, merge [1, 1] and [3, 3]
and not [100, 100]. That should eventually be doable in O(n log n).

> This also a generalize ~1% speedup in the VRP2 pass across 380 gcc
> source files, but I'm sure has much more dramatic results at -O3 that
> this testcase exposes.
>
> Bootstraps on x86_64-pc-linux-gnu with no regressions. OK for trunk?

OK.

Thanks,
Richard.

>
> Andrew
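The helper Richard sketches could look roughly like this (hypothetical names, not GCC code): reduce an N sub-range set to at most M by closing the narrowest gaps first, so the least possible precision is given away.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using subrange = std::pair<int64_t, int64_t>;
using range = std::vector<subrange>;   // sorted, disjoint sub-ranges

// Reduce r to at most m sub-ranges by closing the narrowest gaps first.
// Sorting the gaps keeps the whole reduction at O(n log n).
static range reduce_to (const range &r, size_t m)
{
  if (r.size () <= m || m == 0)
    return r;
  // gap i sits between sub-range i and sub-range i + 1.
  std::vector<std::pair<int64_t, size_t>> gaps;
  for (size_t i = 0; i + 1 < r.size (); ++i)
    gaps.push_back ({ r[i + 1].first - r[i].second, i });
  std::sort (gaps.begin (), gaps.end ());
  // Mark the n - m narrowest gaps for closing.
  std::vector<bool> close (r.size (), false);
  for (size_t i = 0; i < r.size () - m; ++i)
    close[gaps[i].second] = true;
  range out { r.front () };
  for (size_t i = 1; i < r.size (); ++i)
    if (close[i - 1])
      out.back ().second = r[i].second;   // absorb into previous sub-range
    else
      out.push_back (r[i]);
  return out;
}
```

On Richard's example, reducing [1, 1] u [3, 3] u [100, 100] to 2 elements closes the width-2 gap and yields [1, 3] u [100, 100], leaving [100, 100] untouched.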
On 1/11/22 02:01, Richard Biener wrote:
> On Tue, Jan 11, 2022 at 12:28 AM Andrew MacLeod via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
>> This test case demonstrates an unnoticed exponential situation in range-ops.
>>
>> We end up unrolling the loop, and the pattern of code creates a set of
>> cascading multiplies for which we can precisely evaluate them with
>> sub-ranges.
>>
>> For instance, we calculated :
>>
>> _38 = int [8192, 8192][24576, 24576][40960, 40960][57344, 57344]
>>
>> so _38 has 4 sub-ranges, and then we calculate:
>>
>> _39 = _38 * _38;
>>
>> we do 16 sub-range multiplications and end up with: int [67108864,
>> 67108864][201326592, 201326592][335544320, 335544320][469762048,
>> 469762048][603979776, 603979776][1006632960, 1006632960][1409286144,
>> 1409286144][1677721600, 1677721600][+INF, +INF]
>>
>> This feeds other multiplies (_39 * _39) and progresses rapidly to blow
>> up the number of sub-ranges in subsequent operations.
>>
>> Folding of sub-ranges is an O(n*m) process. We perform the operation on
>> each pair of sub-ranges and union them. Values like _38 * _38 that
>> continue feeding each other quickly become exponential.
>>
>> Then combining that with union (an inherently linear operation over the
>> number of sub-ranges) at each step of the way adds an additional
>> quadratic operation on top of the exponential factor.
>>
>> This patch adjusts the wi_fold routine to recognize when the calculation
>> is moving in an exponential direction, simply produce a summary result
>> instead of a precise one. The attached patch does this if (#LH
>> sub-ranges * #RH sub-ranges > 12)... then it just performs the operation
>> with the lower and upper bound instead. We could choose a different
>> number, but that one seems to keep things under control, and allows us
>> to process up to a 3x4 operation for precision (there is a testcase in
>> the testsuite for this combination gcc.dg/tree-ssa/pr61839_2.c).
>>
>> Longer term, we might want adjust this routine to be slightly smarter
>> than that, but this is a virtually zero-risk solution this late in the
>> release cycle.
> I'm not sure we can do smarter in a good way, other than maybe having
> a range helper that reduces an N component range to M components
> while maintaining as much precision as possible. Like for [1, 1] u [3, 3]
> u [100, 100] and requesting at most 2 elements, merge [1, 1] and [3, 3]
> and not [100, 100]. That should eventually be doable in O(n log n).

Yeah, that's similar to my line of thought. It may also be worth
considering something similar after we have calculated a range: if the
resulting range has more than N sub-ranges, look to see whether it is
worthwhile trying to compress it at that point too. Something for the
next stage 1 to consider.

Andrew
commit d8c5c37d5362bd876118949de76086daba756ace
Author: Andrew MacLeod <amacleod@redhat.com>
Date:   Mon Jan 10 13:33:44 2022 -0500

    Prevent exponential range calculations.

    Produce a summary result for any operation involving too many subranges.

            PR tree-optimization/103821
            * range-op.cc (range_operator::fold_range): Only do precise ranges
            when there are not too many subranges.

diff --git a/gcc/range-op.cc b/gcc/range-op.cc
index 1af42ebc376..a4f6e9eba29 100644
--- a/gcc/range-op.cc
+++ b/gcc/range-op.cc
@@ -209,10 +209,12 @@ range_operator::fold_range (irange &r, tree type,
   unsigned num_rh = rh.num_pairs ();

   // If both ranges are single pairs, fold directly into the result range.
-  if (num_lh == 1 && num_rh == 1)
+  // If the number of subranges grows too high, produce a summary result as the
+  // loop becomes exponential with little benefit.  See PR 103821.
+  if ((num_lh == 1 && num_rh == 1) || num_lh * num_rh > 12)
     {
-      wi_fold_in_parts (r, type, lh.lower_bound (0), lh.upper_bound (0),
-                        rh.lower_bound (0), rh.upper_bound (0));
+      wi_fold_in_parts (r, type, lh.lower_bound (), lh.upper_bound (),
+                        rh.lower_bound (), rh.upper_bound ());
       op1_op2_relation_effect (r, type, lh, rh, rel);
       return true;
     }
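For illustration, the guard the patch adds can be modeled in isolation. This sketch is simplified (non-negative ranges only, no union step) and uses hypothetical types rather than GCC's irange API, but it shows the decision the new condition makes: exact pairwise folding only while n * m stays at or below 12, a single summary fold over the outer bounds past that.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using subrange = std::pair<int64_t, int64_t>;
using range = std::vector<subrange>;   // sorted, disjoint sub-ranges

// Multiply two non-negative sub-ranges: bounds multiply monotonically.
static subrange mul1 (subrange a, subrange b)
{
  return { a.first * b.first, a.second * b.second };
}

// Simplified model of the patched guard: do the exact O(n*m) pairwise
// fold only while n * m <= 12; otherwise summarize each operand to its
// overall bounds and fold once.
static range mul_fold (const range &a, const range &b)
{
  size_t n = a.size (), m = b.size ();
  if ((n == 1 && m == 1) || n * m > 12)
    return { mul1 ({ a.front ().first, a.back ().second },
                   { b.front ().first, b.back ().second }) };
  range out;
  for (const subrange &x : a)
    for (const subrange &y : b)
      out.push_back (mul1 (x, y));
  return out;   // a real fold would union these into sorted form
}
```

With the 4-sub-range _38 from the PR, 4 * 4 = 16 exceeds 12, so the result collapses to the single summary range [8192*8192, 57344*57344]; a 3x4 operand pair, sitting exactly at the threshold, still takes the precise path, which is what keeps gcc.dg/tree-ssa/pr61839_2.c working.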