Message ID | 20211125105459.2084694-1-aldyh@redhat.com |
---|---|
State | Committed |
Commit | 8acbd7bef6edbf537e3037174907029b530212f6 |
Headers |
To: GCC patches <gcc-patches@gcc.gnu.org>
Subject: [COMMITTED] path solver: Compute ranges in path in gimple order.
Date: Thu, 25 Nov 2021 11:54:58 +0100
Message-Id: <20211125105459.2084694-1-aldyh@redhat.com>
From: Aldy Hernandez via Gcc-patches <gcc-patches@gcc.gnu.org>
Reply-To: Aldy Hernandez <aldyh@redhat.com>
List-Id: Gcc-patches mailing list <gcc-patches.gcc.gnu.org> |
Series |
[COMMITTED] path solver: Compute ranges in path in gimple order.
Commit Message
Aldy Hernandez
Nov. 25, 2021, 10:54 a.m. UTC
Andrew's patch for this PR103254 papered over some underlying
performance issues in the path solver that I'd like to address.

We are currently solving the SSA names defined in the current block in
bitmap order, which amounts to random order for all purposes.  This is
causing unnecessary recursion in gori.  This patch changes the order
to gimple order, thus solving dependencies before uses.

There is no change in threadable paths with this change.

Tested on x86-64 & ppc64le Linux.

gcc/ChangeLog:

	PR tree-optimization/103254
	* gimple-range-path.cc (path_range_query::compute_ranges_defined): New.
	(path_range_query::compute_ranges_in_block): Move to
	compute_ranges_defined.
	* gimple-range-path.h (compute_ranges_defined): New.
---
 gcc/gimple-range-path.cc | 33 ++++++++++++++++++++++-----------
 gcc/gimple-range-path.h  |  1 +
 2 files changed, 23 insertions(+), 11 deletions(-)
Comments
On Thu, Nov 25, 2021 at 11:55 AM Aldy Hernandez via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> Andrew's patch for this PR103254 papered over some underlying
> performance issues in the path solver that I'd like to address.
>
> We are currently solving the SSA's defined in the current block in
> bitmap order, which amounts to random order for all purposes.  This is
> causing unnecessary recursion in gori.  This patch changes the order
> to gimple order, thus solving dependencies before uses.
>
> There is no change in threadable paths with this change.
>
[...]
> +// Compute ranges defined in block.
> +
> +void
> +path_range_query::compute_ranges_defined (basic_block bb)
> +{
> +  int_range_max r;
> +
> +  compute_ranges_in_phis (bb);
> +
> +  // Iterate in gimple order to minimize recursion.
> +  for (auto gsi = gsi_start_nondebug_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))

gsi_next_nondebug (&gsi)?

Of course this all has the extra cost of iterating over a possibly very
large BB for just a few bits in m_imports?  How often does m_imports
have exactly one bit set?

> +    if (gimple_has_lhs (gsi_stmt (gsi)))
> +      {
> +        tree name = gimple_get_lhs (gsi_stmt (gsi));
> +        if (TREE_CODE (name) == SSA_NAME
> +            && bitmap_bit_p (m_imports, SSA_NAME_VERSION (name))
> +            && range_defined_in_block (r, name, bb))
> +          set_cache (r, name);
> +      }

So if you ever handle SSA DEFs in asms then this will not pick them up.
I think more generic would be to do

  FOR_EACH_SSA_DEF_OPERAND (..., SSA_OP_DEF)

[...]
On Thu, Nov 25, 2021 at 12:57 PM Richard Biener
<richard.guenther@gmail.com> wrote:
>
[...]
> > + // Iterate in gimple order to minimize recursion.
> > + for (auto gsi = gsi_start_nondebug_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
>
> gsi_next_nondebug (&gsi)?
>
> Of course this all has the extra cost of iterating over a possibly very large
> BB for just a few bits in m_imports?  How often does m_imports have
> exactly one bit set?

Hmmm, good point.

Perhaps this isn't worth it then.  I mean, the underlying bug I'm
tackling is an excess of outgoing edge ranges, not the excess
recursion this patch attacks.

If you think the cost would be high for large ILs, I can revert the patch.

Aldy
On Thu, Nov 25, 2021 at 1:10 PM Aldy Hernandez <aldyh@redhat.com> wrote:
>
[...]
> Hmmm, good point.
>
> Perhaps this isn't worth it then.  I mean, the underlying bug I'm
> tackling is an excess of outgoing edge ranges, not the excess
> recursion this patch attacks.
>
> If you think the cost would be high for large ILs, I can revert the patch.

I think so.  If ordering is important then that should be achieved in some
other ways (always a bit difficult for on-demand infrastructure).

Richard.
On Thu, Nov 25, 2021 at 1:38 PM Richard Biener
<richard.guenther@gmail.com> wrote:
>
[...]
> I think so.  If ordering is important then that should be achieved in some
> other ways (always a bit difficult for on-demand infrastructure).

Nah, this isn't a correctness issue.  It's not worth it.

I will revert the patch.

Thanks.
Aldy
Pushed.  Sorry for the noise.

On Thu, Nov 25, 2021 at 1:51 PM Aldy Hernandez <aldyh@redhat.com> wrote:
>
[...]
> Nah, this isn't a correctness issue.  It's not worth it.
>
> I will revert the patch.
>
> Thanks.
> Aldy
diff --git a/gcc/gimple-range-path.cc b/gcc/gimple-range-path.cc
index 4aa666d2c8b..e24086691c4 100644
--- a/gcc/gimple-range-path.cc
+++ b/gcc/gimple-range-path.cc
@@ -401,6 +401,27 @@ path_range_query::compute_ranges_in_phis (basic_block bb)
     }
 }
 
+// Compute ranges defined in block.
+
+void
+path_range_query::compute_ranges_defined (basic_block bb)
+{
+  int_range_max r;
+
+  compute_ranges_in_phis (bb);
+
+  // Iterate in gimple order to minimize recursion.
+  for (auto gsi = gsi_start_nondebug_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
+    if (gimple_has_lhs (gsi_stmt (gsi)))
+      {
+        tree name = gimple_get_lhs (gsi_stmt (gsi));
+        if (TREE_CODE (name) == SSA_NAME
+            && bitmap_bit_p (m_imports, SSA_NAME_VERSION (name))
+            && range_defined_in_block (r, name, bb))
+          set_cache (r, name);
+      }
+}
+
 // Compute ranges defined in the current block, or exported to the
 // next block.
 
@@ -423,17 +444,7 @@ path_range_query::compute_ranges_in_block (basic_block bb)
         clear_cache (name);
     }
 
-  // Solve imports defined in this block, starting with the PHIs...
-  compute_ranges_in_phis (bb);
-  // ...and then the rest of the imports.
-  EXECUTE_IF_SET_IN_BITMAP (m_imports, 0, i, bi)
-    {
-      tree name = ssa_name (i);
-
-      if (gimple_code (SSA_NAME_DEF_STMT (name)) != GIMPLE_PHI
-          && range_defined_in_block (r, name, bb))
-        set_cache (r, name);
-    }
+  compute_ranges_defined (bb);
 
   if (at_exit ())
     return;
diff --git a/gcc/gimple-range-path.h b/gcc/gimple-range-path.h
index 57a9ae9bdcd..81c87d475dd 100644
--- a/gcc/gimple-range-path.h
+++ b/gcc/gimple-range-path.h
@@ -58,6 +58,7 @@ private:
   // Methods to compute ranges for the given path.
   bool range_defined_in_block (irange &, tree name, basic_block bb);
   void compute_ranges_in_block (basic_block bb);
+  void compute_ranges_defined (basic_block bb);
   void compute_ranges_in_phis (basic_block bb);
   void adjust_for_non_null_uses (basic_block bb);
   void ssa_range_in_phi (irange &r, gphi *phi);