Message ID | f00d026e-a7e1-4d68-f57b-a0e657dd4b26@linux.ibm.com |
---|---|
State | New |
Headers |
From: "Kewen.Lin" <linkw@linux.ibm.com>
To: GCC Patches <gcc-patches@gcc.gnu.org>
Cc: Bill Schmidt <wschmidt@linux.ibm.com>, David Edelsohn <dje.gcc@gmail.com>, Segher Boessenkool <segher@kernel.crashing.org>
Subject: [PATCH v2] rs6000: Modify the way for extra penalized cost
Date: Tue, 28 Sep 2021 16:16:04 +0800 |
Series |
[v2] rs6000: Modify the way for extra penalized cost
Commit Message
Kewen.Lin
Sept. 28, 2021, 8:16 a.m. UTC
Hi,

This patch follows the discussions here[1][2], where Segher pointed out that the existing way of guarding the extra penalized cost for strided/elementwise loads with a magic bound does not scale.

The formula nunits * stmt_cost can produce a much exaggerated penalized cost; for example, for V16QI on P8 it is 16 * 20 = 320, which is why a bound was needed. To make it better and more readable, the penalized cost is simplified as:

    unsigned adjusted_cost = (nunits == 2) ? 2 : 1;
    unsigned extra_cost = nunits * adjusted_cost;

For V2DI/V2DF, it uses a penalized cost of 2 for each scalar load, while for the other modes it uses 1. This is mainly concluded from the performance evaluations. One possibly related point: the more units a vector is constructed from, the more instructions are used, and the more chances there are to schedule them well (even run them in parallel when enough units are available at that time), so it seems reasonable not to penalize them more.

The SPEC2017 evaluations on Power8/Power9/Power10 at option sets O2-vect and Ofast-unroll show this change is neutral.

Bootstrapped and regression-tested on powerpc64le-linux-gnu (Power9).

Is it ok for trunk?

[1] https://gcc.gnu.org/pipermail/gcc-patches/2021-September/579121.html
[2] https://gcc.gnu.org/pipermail/gcc-patches/2021-September/580099.html
v1: https://gcc.gnu.org/pipermail/gcc-patches/2021-September/579529.html

BR,
Kewen
-----
gcc/ChangeLog:

	* config/rs6000/rs6000.c (rs6000_update_target_cost_per_stmt): Adjust
	the way to compute extra penalized cost.  Remove useless parameter.
	(rs6000_add_stmt_cost): Adjust the call to function
	rs6000_update_target_cost_per_stmt.
---
 gcc/config/rs6000/rs6000.c | 31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)
--
2.27.0
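For concreteness, the old bounded formula and the new per-scalar-load one can be compared with a small standalone sketch. This is illustrative only, not GCC code: the function names `old_penalty` and `new_penalty` are invented for this example, and `stmt_cost` is the scalar load cost the patch stops using.

```c
#include <assert.h>

/* Old scheme: nunits * stmt_cost, capped by a magic bound of 12.  */
unsigned int
old_penalty (unsigned int nunits, unsigned int stmt_cost)
{
  unsigned int extra_cost = nunits * stmt_cost;
  const unsigned int MAX_PENALIZED_COST_FOR_CTOR = 12;
  if (extra_cost > MAX_PENALIZED_COST_FOR_CTOR)
    extra_cost = MAX_PENALIZED_COST_FOR_CTOR;
  return extra_cost;
}

/* New scheme: 2 per scalar load for two-unit vectors, 1 otherwise.  */
unsigned int
new_penalty (unsigned int nunits)
{
  unsigned int adjusted_cost = (nunits == 2) ? 2 : 1;
  return nunits * adjusted_cost;
}
```

With stmt_cost 20 for V16QI on Power8, the old formula saturates at the bound (320 clamped to 12), while the new one yields 16; two-unit modes (V2DI/V2DF) get 4.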
Comments
Hi,

Gentle ping this: https://gcc.gnu.org/pipermail/gcc-patches/2021-September/580358.html

BR,
Kewen
Hi,

Gentle ping this: https://gcc.gnu.org/pipermail/gcc-patches/2021-September/580358.html

BR,
Kewen
Hi,

Gentle ping this: https://gcc.gnu.org/pipermail/gcc-patches/2021-September/580358.html

BR,
Kewen
Hi!

On Tue, Sep 28, 2021 at 04:16:04PM +0800, Kewen.Lin wrote:
> This patch follows the discussions here[1][2], where Segher
> pointed out the existing way to guard the extra penalized
> cost for strided/elementwise loads with a magic bound does
> not scale.
>
> The way with nunits * stmt_cost can get one much
> exaggerated penalized cost, such as: for V16QI on P8, it's
> 16 * 20 = 320, that's why we need one bound.  To make it
> better and more readable, the penalized cost is simplified
> as:
>
>   unsigned adjusted_cost = (nunits == 2) ? 2 : 1;
>   unsigned extra_cost = nunits * adjusted_cost;

> For V2DI/V2DF, it uses 2 penalized cost for each scalar load
> while for the other modes, it uses 1.

So for V2D[IF] we get 4, for V4S[IF] we get 4, for V8HI it's 8, and
for V16QI it is 16?  Pretty terrible as well, heh (I would expect all
vector ops to be similar cost).

> It's mainly concluded
> from the performance evaluations.  One thing might be
> related is that: More units vector gets constructed, more
> instructions are used.

Yes, but how often does that happen, compared to actual vector ops?

This also suggests we should cost vector construction separately, which
would pretty obviously be a good thing anyway (it happens often, it has
a quite different cost structure).

> It has more chances to schedule them
> better (even run in parallelly when enough available units
> at that time), so it seems reasonable not to penalize more
> for them.

Yes.

> +	  /* Don't expect strided/elementwise loads for just 1 nunit.  */

"We don't expect" etc.

Okay for trunk.  Thanks!  This probably isn't the last word in this
story, but it is an improvement in any case :-)


Segher
Hi Segher,

on 2021/11/30 6:06 AM, Segher Boessenkool wrote:
>> unsigned adjusted_cost = (nunits == 2) ? 2 : 1;
>> unsigned extra_cost = nunits * adjusted_cost;
>>
>> For V2DI/V2DF, it uses 2 penalized cost for each scalar load
>> while for the other modes, it uses 1.
>
> So for V2D[IF] we get 4, for V4S[IF] we get 4, for V8HI it's 8, and
> for V16QI it is 16?  Pretty terrible as well, heh (I would expect all
> vector ops to be similar cost).

But different vector modes need different numbers of scalar loads, so it
seems reasonable to charge more cost when more loads have to be fed into
the limited number of load/store units.

>> It's mainly concluded
>> from the performance evaluations.  One thing might be
>> related is that: More units vector gets constructed, more
>> instructions are used.
>
> Yes, but how often does that happen, compared to actual vector ops?
>
> This also suggests we should cost vector construction separately, which
> would pretty obviously be a good thing anyway (it happens often, it has
> a quite different cost structure).

The vectorizer does model vector construction separately; there is an
enum vect_cost_for_stmt value *vec_construct*, and normally it works
well.  But for this bwaves hotspot we need some more penalization as
evaluated, so we put the penalized cost onto this special vector
construction when some heuristic thresholds are met.

>> It has more chances to schedule them
>> better (even run in parallelly when enough available units
>> at that time), so it seems reasonable not to penalize more
>> for them.
>
> Yes.
>
>> +	  /* Don't expect strided/elementwise loads for just 1 nunit.  */
>
> "We don't expect" etc.

Fixed.

> Okay for trunk.  Thanks!  This probably isn't the last word in this
> story, but it is an improvement in any case :-)

Thanks for the review!  Rebased, retested, and committed as r12-5589.

BR,
Kewen
Hi!

On Tue, Nov 30, 2021 at 01:05:48PM +0800, Kewen.Lin wrote:
> on 2021/11/30 6:06 AM, Segher Boessenkool wrote:
>> On Tue, Sep 28, 2021 at 04:16:04PM +0800, Kewen.Lin wrote:
>>>   unsigned adjusted_cost = (nunits == 2) ? 2 : 1;
>>>   unsigned extra_cost = nunits * adjusted_cost;
>>
>>> For V2DI/V2DF, it uses 2 penalized cost for each scalar load
>>> while for the other modes, it uses 1.
>>
>> So for V2D[IF] we get 4, for V4S[IF] we get 4, for V8HI it's 8, and
>> for V16QI it is 16?  Pretty terrible as well, heh (I would expect all
>> vector ops to be similar cost).
>
> But for different vector units it has different number of loads, it seems
> reasonable to have more costs when it has more loads to be fed into those
> limited number of load/store units.

More expensive, yes.  This expensive?  That doesn't look optimal :-)

>> This also suggests we should cost vector construction separately, which
>> would pretty obviously be a good thing anyway (it happens often, it has
>> a quite different cost structure).
>
> vectorizer does model vector construction separately, there is an enum
> vect_cost_for_stmt *vec_construct*, normally it works well.  But for this
> bwaves hotspot, it requires us to do some more penalization as evaluated,
> so we put the penalized cost onto this special vector construction when
> some heuristic thresholds are met.

Ah, heuristics.  We can adjust them forever :-)


Segher
diff --git a/gcc/config/rs6000/rs6000.c b/gcc/config/rs6000/rs6000.c
index dd42b0964f1..8200e1152c2 100644
--- a/gcc/config/rs6000/rs6000.c
+++ b/gcc/config/rs6000/rs6000.c
@@ -5422,7 +5422,6 @@ rs6000_update_target_cost_per_stmt (rs6000_cost_data *data,
 				    enum vect_cost_for_stmt kind,
 				    struct _stmt_vec_info *stmt_info,
 				    enum vect_cost_model_location where,
-				    int stmt_cost,
 				    unsigned int orig_count)
 {
 
@@ -5462,17 +5461,23 @@ rs6000_update_target_cost_per_stmt (rs6000_cost_data *data,
 	{
 	  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 	  unsigned int nunits = vect_nunits_for_cost (vectype);
-	  unsigned int extra_cost = nunits * stmt_cost;
-	  /* As function rs6000_builtin_vectorization_cost shows, we have
-	     priced much on V16QI/V8HI vector construction as their units,
-	     if we penalize them with nunits * stmt_cost, it can result in
-	     an unreliable body cost, eg: for V16QI on Power8, stmt_cost
-	     is 20 and nunits is 16, the extra cost is 320 which looks
-	     much exaggerated.  So let's use one maximum bound for the
-	     extra penalized cost for vector construction here.  */
-	  const unsigned int MAX_PENALIZED_COST_FOR_CTOR = 12;
-	  if (extra_cost > MAX_PENALIZED_COST_FOR_CTOR)
-	    extra_cost = MAX_PENALIZED_COST_FOR_CTOR;
+	  /* Don't expect strided/elementwise loads for just 1 nunit.  */
+	  gcc_assert (nunits > 1);
+	  /* i386 port adopts nunits * stmt_cost as the penalized cost
+	     for this kind of penalization, we used to follow it but
+	     found it could result in an unreliable body cost especially
+	     for V16QI/V8HI modes.  To make it better, we choose this
+	     new heuristic: for each scalar load, we use 2 as penalized
+	     cost for the case with 2 nunits and use 1 for the other
+	     cases.  It's without much supporting theory, mainly
+	     concluded from the broad performance evaluations on Power8,
+	     Power9 and Power10.  One possibly related point is that:
+	     vector construction for more units would use more insns,
+	     it has more chances to schedule them better (even run in
+	     parallelly when enough available units at that time), so
+	     it seems reasonable not to penalize that much for them.  */
+	  unsigned int adjusted_cost = (nunits == 2) ? 2 : 1;
+	  unsigned int extra_cost = nunits * adjusted_cost;
 	  data->extra_ctor_cost += extra_cost;
 	}
     }
@@ -5510,7 +5515,7 @@ rs6000_add_stmt_cost (class vec_info *vinfo, void *data, int count,
 	  cost_data->cost[where] += retval;
 
 	  rs6000_update_target_cost_per_stmt (cost_data, kind, stmt_info, where,
-					      stmt_cost, orig_count);
+					      orig_count);
 	}
 
       return retval;