Message ID: <20221113200553.4407281-aldyh@redhat.com>

Series 
[range-ops] Implement sqrt.


Commit Message
Aldy Hernandez
Nov. 13, 2022, 8:05 p.m. UTC
It seems SQRT is relatively straightforward, and it's something Jakub
wanted for this release.

Jakub, what do you think?

p.s. Too tired to think about op1_range.

gcc/ChangeLog:

	* gimple-range-op.cc (class cfn_sqrt): New.
	(gimple_range_op_handler::maybe_builtin_call): Add cases for sqrt.
---
 gcc/gimple-range-op.cc | 42 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
Comments
On Sun, Nov 13, 2022 at 09:05:53PM +0100, Aldy Hernandez wrote:
> It seems SQRT is relatively straightforward, and it's something Jakub
> wanted for this release.
>
> Jakub, what do you think?
>
> p.s. Too tired to think about op1_range.

That would be multiplication of the same value twice, i.e. fop_mult with
a trio that has op1_op2 () == VREL_EQ?
But see below: as sqrt won't always be precise, we need to account for
some errors.

> gcc/ChangeLog:
>
> 	* gimple-range-op.cc (class cfn_sqrt): New.
> 	(gimple_range_op_handler::maybe_builtin_call): Add cases for sqrt.

Yes, I'd like to see SQRT support in.
The only thing I'm worried about is that unlike {+,-,*,/}, negation etc.,
which are typically implemented in hardware or in precise soft-float, sqrt
is often implemented in a library using multiple floating point arithmetic
operations.  And different implementations have different accuracy.

So, I wonder if we don't need to add a target hook where targets will be
able to provide an upper bound on the error of floating point functions
for different floating point modes, and some way to signal unknown
accuracy/can't be trusted, in which case we would give up or just return
the range for VARYING.
Then, we could write some tests that, say, in a loop construct random
floating point values (perhaps sanitized to be non-NaN), call the libm
function and the same mpfr one, and return the maximum error in ulps.
And then record those, initially for glibc and the most common targets,
and gradually maintainers could supply more.

If we add an infrastructure for that within a few days, then we could
start filling in the details.  One would hope that sqrt has < 10ulps
accuracy if not already the 0.5ulp one, but for various other functions
I think it can be much more.  Oh, nanq in libquadmath has terrible
accuracy, but that one fortunately is not a builtin...
If we have some small integer for the ulps accuracy of calls (we could use
0 for 0.5ulps accuracy, aka precise), I wonder if we'd handle it just as a
loop doing frange_nextafter n times, or something smarter.

> --- a/gcc/gimple-range-op.cc
> +++ b/gcc/gimple-range-op.cc
> @@ -43,6 +43,7 @@ along with GCC; see the file COPYING3.  If not see
>  #include "range.h"
>  #include "value-query.h"
>  #include "gimple-range.h"
> +#include "fold-const-call.h"
>  
>  // Given stmt S, fill VEC, up to VEC_SIZE elements, with relevant ssa-names
>  // on the statement.  For efficiency, it is an error to not pass in enough
> @@ -301,6 +302,41 @@ public:
>    }
>  } op_cfn_constant_p;
>  
> +// Implement range operator for SQRT.
> +class cfn_sqrt : public range_operator_float
> +{
> +  using range_operator_float::fold_range;
> +private:
> +  REAL_VALUE_TYPE real_sqrt (const REAL_VALUE_TYPE &arg, tree type) const
> +  {
> +    tree targ = build_real (type, arg);
> +    tree res = fold_const_call (as_combined_fn (BUILT_IN_SQRT), type, targ);
> +    return *TREE_REAL_CST_PTR (res);
> +  }
> +  void rv_fold (REAL_VALUE_TYPE &lb, REAL_VALUE_TYPE &ub, bool &maybe_nan,
> +		tree type,
> +		const REAL_VALUE_TYPE &lh_lb,
> +		const REAL_VALUE_TYPE &lh_ub,
> +		const REAL_VALUE_TYPE &,
> +		const REAL_VALUE_TYPE &,
> +		relation_kind) const final override
> +  {
> +    if (real_compare (LT_EXPR, &lh_ub, &dconst0))
> +      {
> +	real_nan (&lb, "", 0, TYPE_MODE (type));
> +	ub = lb;
> +	maybe_nan = true;
> +	return;
> +      }
> +    lb = real_sqrt (lh_lb, type);
> +    ub = real_sqrt (lh_ub, type);
> +    if (real_compare (GE_EXPR, &lh_lb, &dconst0))
> +      maybe_nan = false;
> +    else
> +      maybe_nan = true;

Doesn't this, for say a VARYING range, result in a [NAN, +INF] range?  We
want [0.0, +INF].  So perhaps the real_compare should be done before doing
the real_sqrt calls, and for the maybe_nan case a hardcoded 0.0 should be
used as lb?
BTW, as for the ulps, another thing to test is whether, even when the
library has a certain worst-case error in ulps, it still obeys the basic
math properties of the function or not.  Say for sqrt, that the result
always fits into [0.0, +INF] (I guess because of flush-denormals-to-zero
we wouldn't have a problem here: for, say, a 30ulps sqrt,
[nextafter (0.0, 1.0) * 16.0, 64.0] wouldn't be considered
[nextafter (0.0, 1.0) * 16.0, 8.0 + 30ulps] but just [0.0, 8.0 + 30ulps]).
But later on, say for sin/cos, which mathematically should always have a
result in [-1.0, 1.0] +-NAN, it would be interesting to see if there
aren't some implementations that would happily return 1.0 + 15ulps or
-1.0 - 20ulps.

And an unrelated thought about the reverse: y = x * x; if we know y's
range, op1_range/op2_range in that case could be handled as sqrt without
the library ulps treatment (if we assume that multiplication is always
precise), but the question is whether op1_range or op2_range is called at
all in those cases, and whether we could similarly use trio to derive x's
range in that case.

	Jakub
On Sun, Nov 13, 2022 at 9:39 PM Jakub Jelinek <jakub@redhat.com> wrote:
>
> On Sun, Nov 13, 2022 at 09:05:53PM +0100, Aldy Hernandez wrote:
> > It seems SQRT is relatively straightforward, and it's something Jakub
> > wanted for this release.
> >
> > Jakub, what do you think?
> >
> > p.s. Too tired to think about op1_range.
>
> That would be multiplication of the same value twice, i.e.
> fop_mult with trio that has op1_op2 () == VREL_EQ?
> But see below, as sqrt won't be always precise, we need to account for
> some errors.
>
> > gcc/ChangeLog:
> >
> > 	* gimple-range-op.cc (class cfn_sqrt): New.
> > 	(gimple_range_op_handler::maybe_builtin_call): Add cases for sqrt.
>
> Yes, I'd like to see SQRT support in.
> The only thing I'm worried is that unlike {+,-,*,/}, negation etc. typically
> implemented in hardware or precise soft-float, sqrt is often implemented
> in library using multiple floating point arithmetic functions.  And different
> implementations have different accuracy.
>
> So, I wonder if we don't need to add a target hook where targets will be
> able to provide upper bound on error for floating point functions for
> different floating point modes and some way to signal unknown accuracy/can't
> be trusted, in which case we would give up or return just the range for
> VARYING.
> Then, we could write some tests that say in a loop constructs random
> floating point values (perhaps sanitized to be non-NaN), calls libm function
> and the same mpfr one and return maximum error in ulps.
> And then record those, initially for glibc and most common targets and
> gradually maintainers could supply more.
>
> If we add an infrastructure for that within a few days, then we could start
> filling the details.  One would hope that sqrt has < 10ulps accuracy if not
> already the 0.5ulp one, but for various other functions I think it can be

I don't know what would possess me to think that sqrt would be easy ;).

Sure, I can sink a few days into fleshing this out if you're willing to
review it.

Aldy
On 11/14/22 00:45, Aldy Hernandez via Gcc-patches wrote:
> On Sun, Nov 13, 2022 at 9:39 PM Jakub Jelinek <jakub@redhat.com> wrote:
>> On Sun, Nov 13, 2022 at 09:05:53PM +0100, Aldy Hernandez wrote:
>>> It seems SQRT is relatively straightforward, and it's something Jakub
>>> wanted for this release.
>>>
>>> Jakub, what do you think?
>>>
>>> p.s. Too tired to think about op1_range.
>> That would be multiplication of the same value twice, i.e.
>> fop_mult with trio that has op1_op2 () == VREL_EQ?
>> But see below, as sqrt won't be always precise, we need to account for
>> some errors.
>>
>>> gcc/ChangeLog:
>>>
>>> 	* gimple-range-op.cc (class cfn_sqrt): New.
>>> 	(gimple_range_op_handler::maybe_builtin_call): Add cases for sqrt.
>> Yes, I'd like to see SQRT support in.
>> The only thing I'm worried is that unlike {+,-,*,/}, negation etc. typically
>> implemented in hardware or precise soft-float, sqrt is often implemented
>> in library using multiple floating point arithmetic functions.  And different
>> implementations have different accuracy.
>>
>> So, I wonder if we don't need to add a target hook where targets will be
>> able to provide upper bound on error for floating point functions for
>> different floating point modes and some way to signal unknown accuracy/can't
>> be trusted, in which case we would give up or return just the range for
>> VARYING.
>> Then, we could write some tests that say in a loop constructs random
>> floating point values (perhaps sanitized to be non-NaN), calls libm function
>> and the same mpfr one and return maximum error in ulps.
>> And then record those, initially for glibc and most common targets and
>> gradually maintainers could supply more.
>>
>> If we add an infrastructure for that within a few days, then we could start
>> filling the details.  One would hope that sqrt has < 10ulps accuracy if not
>> already the 0.5ulp one, but for various other functions I think it can be
> I don't know what would possess me to think that sqrt would be easy
> ;).  Sure, I can sink a few days to flesh this out if you're willing
> to review it.

To Jakub's concern: I thought sqrt was treated like +/-* WRT accuracy
requirements by IEEE, ie, for any input there is a well defined answer for
a conforming IEEE implementation.  In fact, getting to that .5ulp bound is
a significant amount of the cost for a NR or Goldschmidt (or hybrid)
implementation if you've got a reasonable (say 12 or 14 bit) estimator and
high performance fmacs.

Jeff
On Mon, Nov 14, 2022 at 07:30:18AM -0700, Jeff Law via Gcc-patches wrote:
> To Jakub's concern.  I thought sqrt was treated like +/-* WRT accuracy
> requirements by IEEE.  ie, for any input there is a well defined answer for
> a conforming IEEE implementation.  In fact, getting to that .5ulp bound is
> a significant amount of the cost for a NR or Goldschmidt (or hybrid)
> implementation if you've got a reasonable (say 12 or 14 bit) estimator and
> high performance fmacs.

That might be the case (except for the known libquadmath sqrtq case,
PR105101, which fortunately is not a builtin).
But we'll need the ulps infrastructure for other functions anyway, and it
would be nice to first write a short testcase that will test
sqrt{,f,l,f32,f64,f128} and can be easily adjusted to test other functions.
I'll try to cook something up tomorrow.

	Jakub
On 11/14/22 07:35, Jakub Jelinek wrote:
> On Mon, Nov 14, 2022 at 07:30:18AM -0700, Jeff Law via Gcc-patches wrote:
>> To Jakub's concern.  I thought sqrt was treated like +/-* WRT accuracy
>> requirements by IEEE.  ie, for any input there is a well defined answer for
>> a conforming IEEE implementation.  In fact, getting to that .5ulp bound is
>> a significant amount of the cost for a NR or Goldschmidt (or hybrid)
>> implementation if you've got a reasonable (say 12 or 14 bit) estimator and
>> high performance fmacs.
> That might be the case (except for the known libquadmath sqrtq case
> PR105101 which fortunately is not a builtin).
> But we'll need the ulps infrastructure for other functions anyway and
> it would be nice to write a short testcase first that will test
> sqrt{,f,l,f32,f64,f128} and can be easily adjusted to test other functions.
> I'll try to cook something up tomorrow.

Agreed we'll need it elsewhere, so no objection to building it out if it's
not going to delay things for sqrt.

Jeff
Huh... no argument from me.

Thanks.
Aldy

On Mon, Nov 14, 2022, 15:35 Jakub Jelinek <jakub@redhat.com> wrote:
> On Mon, Nov 14, 2022 at 07:30:18AM -0700, Jeff Law via Gcc-patches wrote:
> > To Jakub's concern.  I thought sqrt was treated like +/-* WRT accuracy
> > requirements by IEEE.  ie, for any input there is a well defined answer for
> > a conforming IEEE implementation.  In fact, getting to that .5ulp bound is
> > a significant amount of the cost for a NR or Goldschmidt (or hybrid)
> > implementation if you've got a reasonable (say 12 or 14 bit) estimator and
> > high performance fmacs.
>
> That might be the case (except for the known libquadmath sqrtq case
> PR105101 which fortunately is not a builtin).
> But we'll need the ulps infrastructure for other functions anyway and
> it would be nice to write a short testcase first that will test
> sqrt{,f,l,f32,f64,f128} and can be easily adjusted to test other functions.
> I'll try to cook something up tomorrow.
>
> 	Jakub
On Sun, 13 Nov 2022, Jakub Jelinek via Gcc-patches wrote:

> So, I wonder if we don't need to add a target hook where targets will be
> able to provide upper bound on error for floating point functions for
> different floating point modes and some way to signal unknown accuracy/can't
> be trusted, in which case we would give up or return just the range for
> VARYING.

Note that the figures given in the glibc manual are purely empirical
(largest errors observed for inputs in the glibc testsuite on a system
that was then used to update the libm-test-ulps files); they don't
constitute any kind of guarantee about either the current implementation
or the API, nor are they formally verified, nor do they come from
exhaustive testing (though worst cases from exhaustive testing for float
may have been added to the glibc testsuite in some cases).  (I think the
only functions known to give huge errors for some inputs, outside of any
IBM long double issues, are the Bessel functions and cpow functions.  But
even if other functions don't have huge errors, and some
architecture-specific implementations might have issues, there are
certainly some cases where errors can exceed the 9ulp threshold on what
the libm tests will accept in libm-test-ulps files, which are thus
considered glibc bugs.  (That's 9ulp from the correctly rounded value,
computed in ulp of that value.  For IBM long double it's 16ulp instead,
treating the format as having a fixed 106 bits of precision.  Both figures
are empirical ones chosen based on what bounds sufficed for most libm
functions some years ago; ideally, with better implementations of some
functions we could probably bring those numbers down.))
On Mon, Nov 14, 2022 at 09:55:29PM +0000, Joseph Myers wrote:
> On Sun, 13 Nov 2022, Jakub Jelinek via Gcc-patches wrote:
>
> > So, I wonder if we don't need to add a target hook where targets will be
> > able to provide upper bound on error for floating point functions for
> > different floating point modes and some way to signal unknown accuracy/can't
> > be trusted, in which case we would give up or return just the range for
> > VARYING.
>
> Note that the figures given in the glibc manual are purely empirical
> (largest errors observed for inputs in the glibc testsuite on a system
> that was then used to update the libm-test-ulps files); they don't
> constitute any kind of guarantee about either the current implementation
> or the API, nor are they formally verified, nor do they come from
> exhaustive testing (though worst cases from exhaustive testing for float
> may have been added to the glibc testsuite in some cases).  (I think the
> only functions known to give huge errors for some inputs, outside of any
> IBM long double issues, are the Bessel functions and cpow functions.  But
> even if other functions don't have huge errors, and some
> architecture-specific implementations might have issues, there are
> certainly some cases where errors can exceed the 9ulp threshold on what
> the libm tests will accept in libm-test-ulps files, which are thus
> considered glibc bugs.  (That's 9ulp from the correctly rounded value,
> computed in ulp of that value.  For IBM long double it's 16ulp instead,
> treating the format as having a fixed 106 bits of precision.  Both figures
> are empirical ones chosen based on what bounds sufficed for most libm
> functions some years ago; ideally, with better implementations of some
> functions we could probably bring those numbers down.))

I know I can't get guarantees without formal proofs, and even ulps from
reported errors are better than randomized testing.
But I think at least for non-glibc we want to be able to get a rough idea
of the usual error range in ulps.

This is what I came up with so far (link with
gcc -o ulp-tester{,.c} -O2 -lmpfr -lm
); it still doesn't verify that functions are always within the
mathematical range of results ([-0.0, Inf] for sqrt, [-1.0, 1.0] for
sin/cos etc.), guess that would be useful and would verify the program
actually does what is intended.
One can supply just one argument (number of tests, the first 46 aren't
really random) or two; in the latter case the second should be upward,
downward or towardzero to use a non-default rounding mode.
The idea is that we'd collect ballpark estimates for round-to-nearest and
then estimates for the other 3 rounding modes; the former would be used
without -frounding-math, and the max over all 4 rounding modes for
-frounding-math, as gcc will compute using mpfr always in round to nearest.

	Jakub

#ifdef THIS_TYPE
static THIS_TYPE
THIS_FUNC (ulp) (THIS_TYPE val)
{
  if (__builtin_isnormal (val))
    return THIS_FUNC (ldexp) (THIS_LIT (1.0),
			      THIS_FUNC (ilogb) (val) - THIS_MANT_DIG + 1);
  else
    return THIS_FUNC (ldexp) (THIS_LIT (1.0), THIS_MIN_EXP - THIS_MANT_DIG);
}

static void
THIS_FUNC (test) (THIS_TYPE (*fn) (THIS_TYPE),
		  int (*mpfr_fn) (mpfr_ptr, mpfr_srcptr, mpfr_rnd_t),
		  const char *name, unsigned long count)
{
  char buf1[256], buf2[256], buf3[256];
  mpfr_set_default_prec (THIS_MANT_DIG);
  mpfr_t v;
  mpfr_init2 (v, THIS_MANT_DIG);
  THIS_TYPE max_ulp = THIS_LIT (0.0);
  volatile THIS_TYPE val = THIS_LIT (0.0);
  int m = 0;
  for (unsigned long i = 0; i < count; ++i)
    {
      if (m == 0)
	m = 1;
      else if (m <= 10)
	{
	  val = THIS_FUNC (nextafter) (val, THIS_LIT (1.0));
	  if ((m == 1 && val == THIS_MIN) || m > 1)
	    ++m;
	}
      else if (m == 11)
	{
	  val = THIS_LIT (1.0);
	  for (int j = 0; j < 10; j++)
	    val = THIS_FUNC (nextafter) (val, THIS_LIT (0.0));
	  ++m;
	}
      else if (m <= 32)
	{
	  val = THIS_FUNC (nextafter) (val, THIS_LIT (1.0));
	  ++m;
	}
      else if (m == 33)
	{
	  val = THIS_MAX;
	  for (int j = 0; j < 10; j++)
	    val = THIS_FUNC (nextafter) (val, THIS_LIT (0.0));
	  ++m;
	}
      else if (m <= 45)
	{
	  val = THIS_FUNC (nextafter) (val, THIS_LIT (1.0));
	  ++m;
	}
      else
	val = THIS_FUNC (get_rand) ();
      if (__builtin_isnan (val))
	continue;
      THIS_TYPE given = fn (val);
      THIS_MPFR_SET (v, val, MPFR_RNDN);
      mpfr_fn (v, v, MPFR_RNDN);
      THIS_TYPE expected = THIS_MPFR_GET (v, MPFR_RNDN);
      if ((!__builtin_isnan (given)) != (!__builtin_isnan (expected))
	  || __builtin_isinf (given) != __builtin_isinf (expected))
	{
	  THIS_SNPRINTF (buf1, val);
	  THIS_SNPRINTF (buf2, given);
	  THIS_SNPRINTF (buf3, expected);
	  printf ("%s (%s) = %s rather than %s\n", name, buf1, buf2, buf3);
	}
      else if (!__builtin_isnan (given) && !__builtin_isinf (given))
	{
	  THIS_TYPE this_ulp = THIS_FUNC (fabs) (given - expected)
			       / THIS_FUNC (ulp) (expected);
	  if (this_ulp > max_ulp)
	    max_ulp = this_ulp;
	}
    }
  printf ("%s max error %.1fulp\n", name, (float) max_ulp);
}

#undef THIS_TYPE
#undef THIS_LIT
#undef THIS_FUNC
#undef THIS_MIN_EXP
#undef THIS_MANT_DIG
#undef THIS_MIN
#undef THIS_MAX
#undef THIS_MPFR_SET
#undef THIS_MPFR_GET
#undef THIS_SNPRINTF
#else
#define _GNU_SOURCE 1
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <fenv.h>
#if defined(__FLT128_DIG__) && defined(__GLIBC_PREREQ)
#if __GLIBC_PREREQ (2, 26)
#define TEST_FLT128
#define MPFR_WANT_FLOAT128
#endif
#endif
#include <gmp.h>
#include <mpfr.h>

static long rand_n;
static int rand_c;

static uint32_t
get_rand32 (void)
{
  uint32_t ret = 0;
  if (rand_c == 0)
    {
      ret = random () & 0x7fffffff;
      rand_c = 31;
    }
  else
    ret = rand_n & (((uint32_t) 1 << rand_c) - 1);
  ret <<= (32 - rand_c);
  rand_n = random ();
  ret |= rand_n & (((uint32_t) 1 << (32 - rand_c)) - 1);
  rand_n >>= (32 - rand_c);
  rand_c = 31 - (32 - rand_c);
  return ret;
}

static uint64_t
get_rand64 (void)
{
  return (((uint64_t) get_rand32 ()) << 32) | get_rand32 ();
}

static float
get_randf (void)
{
  uint32_t i = get_rand32 ();
  float f;
  memcpy (&f, &i, sizeof (f));
  return f;
}

static double
get_rand (void)
{
  uint64_t i = get_rand64 ();
  double d;
  memcpy (&d, &i, sizeof (d));
  return d;
}

static long double
get_randl (void)
{
  long double ld;
  uint64_t i = get_rand64 ();
  memcpy (&ld, &i, sizeof (i));
  if (sizeof (long double) == 12)
    {
      uint32_t j = get_rand32 ();
      memcpy ((char *) &ld + 8, &j, sizeof (j));
    }
  else if (sizeof (long double) == 16)
    {
      i = get_rand64 ();
      memcpy ((char *) &ld + 8, &i, sizeof (i));
    }
  return ld;
}

#ifdef TEST_FLT128
static long double
get_randf128 (void)
{
  _Float128 f128;
  uint64_t i = get_rand64 ();
  memcpy (&f128, &i, sizeof (i));
  i = get_rand64 ();
  memcpy ((char *) &f128 + 8, &i, sizeof (i));
  return f128;
}
#endif

#define THIS_TYPE float
#define THIS_LIT(v) v##f
#define THIS_FUNC(v) v##f
#define THIS_MIN_EXP __FLT_MIN_EXP__
#define THIS_MANT_DIG __FLT_MANT_DIG__
#define THIS_MIN __FLT_MIN__
#define THIS_MAX __FLT_MAX__
#define THIS_MPFR_SET mpfr_set_flt
#define THIS_MPFR_GET mpfr_get_flt
#define THIS_SNPRINTF(buf, x) snprintf ((buf), sizeof (buf), "%a", (x));
#include "ulp-tester.c"

#define THIS_TYPE double
#define THIS_LIT(v) v
#define THIS_FUNC(v) v
#define THIS_MIN_EXP __DBL_MIN_EXP__
#define THIS_MANT_DIG __DBL_MANT_DIG__
#define THIS_MIN __DBL_MIN__
#define THIS_MAX __DBL_MAX__
#define THIS_MPFR_SET mpfr_set_d
#define THIS_MPFR_GET mpfr_get_d
#define THIS_SNPRINTF(buf, x) snprintf ((buf), sizeof (buf), "%a", (x));
#include "ulp-tester.c"

#define THIS_TYPE long double
#define THIS_LIT(v) v##L
#define THIS_FUNC(v) v##l
#define THIS_MIN_EXP __LDBL_MIN_EXP__
#define THIS_MANT_DIG __LDBL_MANT_DIG__
#define THIS_MIN __LDBL_MIN__
#define THIS_MAX __LDBL_MAX__
#define THIS_MPFR_SET mpfr_set_ld
#define THIS_MPFR_GET mpfr_get_ld
#define THIS_SNPRINTF(buf, x) snprintf ((buf), sizeof (buf), "%La", (x));
#include "ulp-tester.c"

#ifdef TEST_FLT128
#define THIS_TYPE _Float128
#define THIS_LIT(v) v##F128
#define THIS_FUNC(v) v##f128
#define THIS_MIN_EXP __FLT128_MIN_EXP__
#define THIS_MANT_DIG __FLT128_MANT_DIG__
#define THIS_MIN __FLT128_MIN__
#define THIS_MAX __FLT128_MAX__
#define THIS_MPFR_SET mpfr_set_float128
#define THIS_MPFR_GET mpfr_get_float128
#define THIS_SNPRINTF(buf, x) strfromf128 ((buf), sizeof (buf), "%a", (x));
#include "ulp-tester.c"
#else
#define testf128(fn, mpfr_fn, name, count) do { } while (0)
#endif

int
main (int argc, const char **argv)
{
  const char *arg;
  char *endptr;
  (void) argc;
  if (argc <= 1)
    arg = "";
  else
    arg = argv[1];
  unsigned long count = strtoul (arg, &endptr, 10);
  if (endptr == arg)
    {
      fprintf (stderr, "ulp-tester number_of_iterations rnd\n");
      return 1;
    }
  const char *rnd = "tonearest";
  if (argc >= 3)
    rnd = argv[2];
  if (strcmp (rnd, "upward") == 0)
    fesetround (FE_UPWARD);
  else if (strcmp (rnd, "downward") == 0)
    fesetround (FE_DOWNWARD);
  else if (strcmp (rnd, "towardzero") == 0)
    fesetround (FE_TOWARDZERO);
#define TESTS(fn) \
  testf (fn##f, mpfr_##fn, #fn "f", count); \
  test (fn, mpfr_##fn, #fn, count); \
  testl (fn##l, mpfr_##fn, #fn "l", count); \
  testf128 (fn##f128, mpfr_##fn, #fn "f128", count)
  TESTS (sqrt);
  TESTS (sin);
  TESTS (cos);
  TESTS (exp10);
}
#endif
To go along with whatever magic we're gonna tack along to the range-ops
sqrt implementation, here is another revision addressing the VARYING issue
you pointed out.

A few things...

Instead of going through trees, I decided to call do_mpfr_arg1 directly.
Let's not go down the wide int <-> tree rat hole in this one.

The function do_mpfr_arg1 bails on +INF, so I had to handle it manually.

There's a regression in gfortran.dg/ieee/ieee_6.f90, which I'm not sure
how to handle.  We are failing because we are calculating sqrt(-1) and
expecting certain IEEE flags to be set.  These flags aren't set,
presumably because we folded sqrt(-1) into a NAN directly:

  // All negatives.
  if (real_compare (LT_EXPR, &lh_ub, &dconst0))
    {
      real_nan (&lb, "", 0, TYPE_MODE (type));
      ub = lb;
      maybe_nan = true;
      return;
    }

The failing part of the test is:

  if (.not. (all(flags .eqv. [.false.,.false.,.true.,.true.,.false.]) &
      .or. all(flags .eqv. [.false.,.false.,.true.,.true.,.true.]) &
      .or. all(flags .eqv. [.false.,.false.,.true.,.false.,.false.]) &
      .or. all(flags .eqv. [.false.,.false.,.true.,.false.,.true.]))) STOP 5

But we are generating F F F F F.  Google has informed me that that 3rd
flag is IEEE_INVALID.

So... is the optimization wrong?  Are we not allowed to substitute that
NAN if we know it's gonna happen?  Should we also allow F F F F F in the
test?  Or something else?

Thanks.
Aldy

On Wed, Nov 16, 2022 at 9:33 PM Jakub Jelinek <jakub@redhat.com> wrote:
>
> On Mon, Nov 14, 2022 at 09:55:29PM +0000, Joseph Myers wrote:
> > On Sun, 13 Nov 2022, Jakub Jelinek via Gcc-patches wrote:
> >
> > > So, I wonder if we don't need to add a target hook where targets will be
> > > able to provide upper bound on error for floating point functions for
> > > different floating point modes and some way to signal unknown accuracy/can't
> > > be trusted, in which case we would give up or return just the range for
> > > VARYING.
> >
> > Note that the figures given in the glibc manual are purely empirical
> > (largest errors observed for inputs in the glibc testsuite on a system
> > that was then used to update the libm-test-ulps files); they don't
> > constitute any kind of guarantee about either the current implementation
> > or the API, nor are they formally verified, nor do they come from
> > exhaustive testing (though worst cases from exhaustive testing for float
> > may have been added to the glibc testsuite in some cases).  (I think the
> > only functions known to give huge errors for some inputs, outside of any
> > IBM long double issues, are the Bessel functions and cpow functions.  But
> > even if other functions don't have huge errors, and some
> > architecture-specific implementations might have issues, there are
> > certainly some cases where errors can exceed the 9ulp threshold on what
> > the libm tests will accept in libm-test-ulps files, which are thus
> > considered glibc bugs.  (That's 9ulp from the correctly rounded value,
> > computed in ulp of that value.  For IBM long double it's 16ulp instead,
> > treating the format as having a fixed 106 bits of precision.  Both figures
> > are empirical ones chosen based on what bounds sufficed for most libm
> > functions some years ago; ideally, with better implementations of some
> > functions we could probably bring those numbers down.))
>
> I know I can't get guarantees without formal proofs and even ulps from
> reported errors are better than randomized testing.
> But I think at least for non-glibc we want to be able to get a rough idea
> of the usual error range in ulps.
>
> This is what I came up with so far (link with
> gcc -o ulp-tester{,.c} -O2 -lmpfr -lm
> ), it still doesn't verify that functions are always within the mathematical
> range of results ([-0.0, Inf] for sqrt, [-1.0, 1.0] for sin/cos etc.), guess
> that would be useful and verify the program actually does what is intended.
> One can supply just one argument (number of tests, first 46 aren't really
> random) or two, in the latter case the second should be upward, downward or
> towardzero to use a non-default rounding mode.
> The idea is that we'd collect ballpark estimates for round-to-nearest and
> then estimates for the other 3 rounding modes, the former would be used
> without -frounding-math, max over all 4 rounding modes for -frounding-math
> as gcc will compute using mpfr always in round to nearest.
>
> 	Jakub
On 11/17/22 17:40, Aldy Hernandez wrote:
> To go along with whatever magic we're gonna tack along to the
> range-ops sqrt implementation, here is another revision addressing the
> VARYING issue you pointed out.
>
> A few things...
>
> Instead of going through trees, I decided to call do_mpfr_arg1
> directly.  Let's not go down the wide int <-> tree rat hole in this one.
>
> The function do_mpfr_arg1 bails on +INF, so I had to handle it manually.
>
> There's a regression in gfortran.dg/ieee/ieee_6.f90, which I'm not
> sure how to handle.  We are failing because we are calculating
> sqrt(-1) and expecting certain IEEE flags set.  These flags aren't
> set, presumably because we folded sqrt(-1) into a NAN directly:
>
>   // All negatives.
>   if (real_compare (LT_EXPR, &lh_ub, &dconst0))
>     {
>       real_nan (&lb, "", 0, TYPE_MODE (type));
>       ub = lb;
>       maybe_nan = true;
>       return;
>     }

FWIW, we could return [0.0, +INF] +-NAN, which would keep us from eliding
the sqrt, but it'd be a pity to keep the sqrt unless it's mandated by some
IEEE canon.

Aldy
This may be DCE.

DOM uses ranger through simplify_using_ranges::fold_cond() to fold the
following conditional to false, because we know x_185 is a NAN:

  x_185 = __builtin_sqrtf (-1.0e+0);
  if (x_185 ord x_185)

I believe we can do that, because there are no user observable effects.
But DCE removes the sqrt which could trap:

  Eliminating unnecessary statements:
  Deleting : x_185 = __builtin_sqrtf (-1.0e+0);

Is DCE allowed to remove that sqrtf call?

Thanks.
Aldy

On Thu, Nov 17, 2022 at 5:48 PM Aldy Hernandez <aldyh@redhat.com> wrote:
>
> On 11/17/22 17:40, Aldy Hernandez wrote:
> > To go along with whatever magic we're gonna tack along to the
> > range-ops sqrt implementation, here is another revision addressing the
> > VARYING issue you pointed out.
> >
> > A few things...
> >
> > Instead of going through trees, I decided to call do_mpfr_arg1
> > directly.  Let's not go down the wide int <-> tree rat hole in this one.
> >
> > The function do_mpfr_arg1 bails on +INF, so I had to handle it manually.
> >
> > There's a regression in gfortran.dg/ieee/ieee_6.f90, which I'm not
> > sure how to handle.  We are failing because we are calculating
> > sqrt(-1) and expecting certain IEEE flags set.  These flags aren't
> > set, presumably because we folded sqrt(-1) into a NAN directly:
> >
> >     // All negatives.
> >     if (real_compare (LT_EXPR, &lh_ub, &dconst0))
> >       {
> >         real_nan (&lb, "", 0, TYPE_MODE (type));
> >         ub = lb;
> >         maybe_nan = true;
> >         return;
> >       }
>
> FWIW, we could return [0.0, +INF] +NAN which would keep us from
> eliding the sqrt, but it'd be a pity to keep the sqrt unless it's
> mandated by some IEEE canon.
>
> Aldy
On Thu, 17 Nov 2022, Aldy Hernandez via Gcc-patches wrote:

> So... is the optimization wrong?  Are we not allowed to substitute
> that NAN if we know it's gonna happen?  Should we also allow F F F F F
> in the test?  Or something else?

This seems like the usual ambiguity about what transformations
-ftrapping-math (on by default) is meant to prevent.

Generally it's understood to prevent transformations that add *or remove*
exceptions, so folding a case that raises "invalid" to a NaN (with
"invalid" no longer raised) is invalid with -ftrapping-math.  But that
doesn't tend to be applied if the operation raising the exceptions has a
result that is otherwise unused - in such a case the operation may still
be removed completely (the exception isn't properly treated as a side
effect to avoid dead code elimination; cf. Marc Glisse's -ffenv-access
patches from August 2020).  And it may often also not be applied to
"inexact".

There have been various past discussions of possible ways to split up the
different effects of options such as -ftrapping-math into finer-grained
options allowing more control of what transformations are permitted - see
e.g.
<https://gcc.gnu.org/pipermail/gcc-patches/2021-September/thread.html#580252>
and bug 54192.  There is also the question in that context of which
suboptions should be enabled by default at all.
On Thu, Nov 17, 2022 at 06:59:45PM +0000, Joseph Myers wrote:
> On Thu, 17 Nov 2022, Aldy Hernandez via Gcc-patches wrote:
>
> > So... is the optimization wrong?  Are we not allowed to substitute
> > that NAN if we know it's gonna happen?  Should we also allow F F F F F
> > in the test?  Or something else?
>
> This seems like the usual ambiguity about what transformations
> -ftrapping-math (on by default) is meant to prevent.
>
> Generally it's understood to prevent transformations that add *or remove*
> exceptions, so folding a case that raises "invalid" to a NaN (with
> "invalid" no longer raised) is invalid with -ftrapping-math.  But that
> doesn't tend to be applied if the operation raising the exceptions has a
> result that is otherwise unused - in such a case the operation may still
> be removed completely (the exception isn't properly treated as a side
> effect to avoid dead code elimination; cf. Marc Glisse's -ffenv-access
> patches from August 2020).  And it may often also not be applied to
> "inexact".

The problem is that the above model I'm afraid is largely incompatible
with the optimizations ranger provides.
A strict model where no operations that could raise exceptions are
discarded is easy, we let frange optimize as much as it wants and just
tell DCE not to eliminate operations that can raise exceptions.
But in the model where some exceptions can be discarded if results are
unused but not others where they are used, there is no way to distinguish
between the result of the operation really isn't needed and ranger
figured out a result (or usable range of something) and therefore the
result of the operation isn't needed.
Making frange more limited with -ftrapping-math, making it punt for
operations that could raise an exception would be quite drastic
pessimization.  Perhaps for -ftrapping-math we could say no frange value
is singleton and so at least for most of operations we actually wouldn't
optimize out the whole computation when we know the result?
Still, we could also just have
  r = long_computation (x, y, z);
  if (r > 42.0)
and if frange figures out that r must be [256.0, 1024.0] and never NAN,
we'd still happily optimize away the comparison.

	Jakub
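The interval reasoning behind that folding can be pictured with a toy model (a hypothetical helper, not ranger code): once frange knows r is [256.0, 1024.0] and never NAN, the comparison against 42.0 has only one possible outcome, independent of whatever the computation would have raised.

```c
/* Fold X > VAL given only the interval [LB, UB] known for X and the
   knowledge that X is never a NaN: 1 = always true, 0 = always false,
   -1 = can't tell.  A sketch of frange-style relation folding.  */
static int
fold_gt (double lb, double ub, double val)
{
  if (lb > val)
    return 1;    /* every value in [lb, ub] passes the test */
  if (ub <= val)
    return 0;    /* no value in [lb, ub] passes it */
  return -1;     /* the range straddles val */
}
```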
On Thu, 17 Nov 2022, Jakub Jelinek via Gcc-patches wrote:

> On Thu, Nov 17, 2022 at 06:59:45PM +0000, Joseph Myers wrote:
> > On Thu, 17 Nov 2022, Aldy Hernandez via Gcc-patches wrote:
> >
> > > So... is the optimization wrong?  Are we not allowed to substitute
> > > that NAN if we know it's gonna happen?  Should we also allow F F F F F
> > > in the test?  Or something else?
> >
> > This seems like the usual ambiguity about what transformations
> > -ftrapping-math (on by default) is meant to prevent.
> >
> > Generally it's understood to prevent transformations that add *or remove*
> > exceptions, so folding a case that raises "invalid" to a NaN (with
> > "invalid" no longer raised) is invalid with -ftrapping-math.  But that
> > doesn't tend to be applied if the operation raising the exceptions has a
> > result that is otherwise unused - in such a case the operation may still
> > be removed completely (the exception isn't properly treated as a side
> > effect to avoid dead code elimination; cf. Marc Glisse's -ffenv-access
> > patches from August 2020).  And it may often also not be applied to
> > "inexact".
>
> The problem is that the above model I'm afraid is largely incompatible with
> the optimizations ranger provides.

That model is more an empirical description of when the nominal
-ftrapping-math semantics tend to be respected, than a coherent design
for any kind of API commitment to what the option does or what the
default trapping-math rules are.
On Thu, Nov 17, 2022 at 8:38 PM Jakub Jelinek via Gcc-patches
<gcc-patches@gcc.gnu.org> wrote:
>
> On Thu, Nov 17, 2022 at 06:59:45PM +0000, Joseph Myers wrote:
> > On Thu, 17 Nov 2022, Aldy Hernandez via Gcc-patches wrote:
> >
> > > So... is the optimization wrong?  Are we not allowed to substitute
> > > that NAN if we know it's gonna happen?  Should we also allow F F F F F
> > > in the test?  Or something else?
> >
> > This seems like the usual ambiguity about what transformations
> > -ftrapping-math (on by default) is meant to prevent.
> >
> > Generally it's understood to prevent transformations that add *or remove*
> > exceptions, so folding a case that raises "invalid" to a NaN (with
> > "invalid" no longer raised) is invalid with -ftrapping-math.  But that
> > doesn't tend to be applied if the operation raising the exceptions has a
> > result that is otherwise unused - in such a case the operation may still
> > be removed completely (the exception isn't properly treated as a side
> > effect to avoid dead code elimination; cf. Marc Glisse's -ffenv-access
> > patches from August 2020).  And it may often also not be applied to
> > "inexact".
>
> The problem is that the above model I'm afraid is largely incompatible with
> the optimizations ranger provides.
> A strict model where no operations that could raise exceptions are discarded
> is easy, we let frange optimize as much as it wants and just tell DCE not to
> eliminate operations that can raise exceptions.
> But in the model where some exceptions can be discarded if results are unused
> but not others where they are used, there is no way to distinguish between
> the result of the operation really isn't needed and ranger figured out a
> result (or usable range of something) and therefore the result of the
> operation isn't needed.
> Making frange more limited with -ftrapping-math, making it punt for
> operations that could raise an exception would be quite drastic
> pessimization.
> Perhaps for -ftrapping-math we could say no frange value is
> singleton and so at least for most of operations we actually wouldn't
> optimize out the whole computation when we know the result?  Still, we could
> also just have
>   r = long_computation (x, y, z);
>   if (r > 42.0)
> and if frange figures out that r must be [256.0, 1024.0] and never NAN, we'd
> still happily optimize away the comparison.

Yes, I don't think singling out the singleton case will help.
Practically strictly preserving IEEE exceptions is only important for a
very small audience, and for that even INEXACT will matter (but we still
have -ftrapping-math by default).  For that audience likely all
constant / range propagation is futile and thus the easiest thing might
be to simply cut that off completely?

I'd say what ranger does is reasonable with -ftrapping-math given the
current practice of handling this option.  There's no point in trying to
preserve the (by accident) "better" handling without ranger.  Instead as
Joseph says somebody would need to sit down, split -ftrapping-math,
adjust the default and thoroughly document things (also with
-fnon-call-exceptions which magically makes IEEE flag raising operations
possibly throw exceptions).  As there's currently no code motion
barriers for FP code with respect to exception flag inspection any dead
code we preserve is likely going to be unhelpful.

So for now simply amend the documentation as to what -ftrapping-math
currently means with respect to range/constant propagation?

Richard.

>
> 	Jakub
>
On 11/18/22 09:39, Richard Biener wrote:
> On Thu, Nov 17, 2022 at 8:38 PM Jakub Jelinek via Gcc-patches
> <gcc-patches@gcc.gnu.org> wrote:
>>
>> On Thu, Nov 17, 2022 at 06:59:45PM +0000, Joseph Myers wrote:
>>> On Thu, 17 Nov 2022, Aldy Hernandez via Gcc-patches wrote:
>>>
>>>> So... is the optimization wrong?  Are we not allowed to substitute
>>>> that NAN if we know it's gonna happen?  Should we also allow F F F F F
>>>> in the test?  Or something else?
>>>
>>> This seems like the usual ambiguity about what transformations
>>> -ftrapping-math (on by default) is meant to prevent.
>>>
>>> Generally it's understood to prevent transformations that add *or remove*
>>> exceptions, so folding a case that raises "invalid" to a NaN (with
>>> "invalid" no longer raised) is invalid with -ftrapping-math.  But that
>>> doesn't tend to be applied if the operation raising the exceptions has a
>>> result that is otherwise unused - in such a case the operation may still
>>> be removed completely (the exception isn't properly treated as a side
>>> effect to avoid dead code elimination; cf. Marc Glisse's -ffenv-access
>>> patches from August 2020).  And it may often also not be applied to
>>> "inexact".
>>
>> The problem is that the above model I'm afraid is largely incompatible with
>> the optimizations ranger provides.
>> A strict model where no operations that could raise exceptions are discarded
>> is easy, we let frange optimize as much as it wants and just tell DCE not to
>> eliminate operations that can raise exceptions.
>> But in the model where some exceptions can be discarded if results are unused
>> but not others where they are used, there is no way to distinguish between
>> the result of the operation really isn't needed and ranger figured out a
>> result (or usable range of something) and therefore the result of the
>> operation isn't needed.
>> Making frange more limited with -ftrapping-math, making it punt for
>> operations that could raise an exception would be quite drastic
>> pessimization.  Perhaps for -ftrapping-math we could say no frange value is
>> singleton and so at least for most of operations we actually wouldn't
>> optimize out the whole computation when we know the result?  Still, we could
>> also just have
>>   r = long_computation (x, y, z);
>>   if (r > 42.0)
>> and if frange figures out that r must be [256.0, 1024.0] and never NAN, we'd
>> still happily optimize away the comparison.
>
> Yes, I don't think singling out the singleton case will help.

There is also simplify_using_ranges::fold_cond() which is used by VRP
and DOM to fold conditionals.  So twiddling frange::singleton_p will
have no effect here, since FP conditional results are integers (f > 3.0
is true or false).

And now that we're on this subject...

We are very careful in frange (range-op-floats.o) to avoid returning
true/false in relationals which may have a NAN.  This keeps us from
folding conditionals that may result in a trapping NAN.  For example,
if we see [if (x_5 unord_lt 10.0)...] and we know x_5 is
[-INF, 8.0] +NAN, this conditional is always true, but we return
VARYING to avoid folding a NAN producing conditional.

I wonder whether we're being too conservative?  An alternative would be:

  z_8 = x_5 unord_lt 10.0
  goto true_side;

But if DCE is going to clean that up anyhow without regards to
exceptions, then maybe we can fold these conditionals altogether?  If
not in this release, then in the next one.  ISTM that range-ops should
always tell the truth of what it knows, instead of being conservative
wrt exceptions.  It should be up to the clients (VRP or
simplify_using_ranges::fold_cond) to use the information correctly.

> Practically strictly preserving IEEE exceptions is only important for
> a very small audience, and for that even INEXACT will matter (but we
> still have -ftrapping-math by default).
> For that audience likely all constant / range propagation is futile and thus the
> easiest thing might be to simply cut that off completely?
>
> I'd say what ranger does is reasonable with -ftrapping-math given the current
> practice of handling this option.  There's no point in trying to preserve the
> (by accident) "better" handling without ranger.  Instead as Joseph says somebody
> would need to sit down, split -ftrapping-math, adjust the default and thoroughly
> document things (also with -fnon-call-exceptions which magically makes
> IEEE flag raising operations possibly throw exceptions).  As there's currently
> no code motion barriers for FP code with respect to exception flag inspection
> any dead code we preserve is likely going to be unhelpful.
>
> So for now simply amend the documentation as to what -ftrapping-math
> currently means with respect to range/constant propagation?

So something like "Even in the presence of -ftrapping-math, VRP may fold
operations that may cause exceptions.  For example, an addition that is
guaranteed to produce a NAN may be replaced with a NAN, thus eliding the
addition.  This may cause any exception that may have been generated by
the addition to not appear in the final program."

??

Aldy
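For reference on the unord_lt case earlier in the thread: the predicate is "less than, or unordered", which in C is just the negation of the ordered x >= y, so for x_5 in [-INF, 8.0] +NAN every possible value (NaN included) satisfies it. A small sketch of that equivalence (hypothetical helper name, not GCC code):

```c
#include <math.h>
#include <stdbool.h>

/* GIMPLE's unord_lt: true when X < Y or when either operand is a NaN.
   Expressed via an ordered comparison it is the negation of X >= Y,
   because X >= Y is false whenever an operand is a NaN.  */
static bool
unord_lt (double x, double y)
{
  return !(x >= y);
}
```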
On Fri, Nov 18, 2022 at 11:37:42AM +0100, Aldy Hernandez wrote:
> > Practically strictly preserving IEEE exceptions is only important for
> > a very small audience, and for that even INEXACT will matter (but we
> > still have -ftrapping-math by default).
> > For that audience likely all constant / range propagation is futile and thus the
> > easiest thing might be to simply cut that off completely?
> >
> > I'd say what ranger does is reasonable with -ftrapping-math given the current
> > practice of handling this option.  There's no point in trying to preserve the
> > (by accident) "better" handling without ranger.  Instead as Joseph says somebody
> > would need to sit down, split -ftrapping-math, adjust the default and thoroughly
> > document things (also with -fnon-call-exceptions which magically makes
> > IEEE flag raising operations possibly throw exceptions).  As there's currently
> > no code motion barriers for FP code with respect to exception flag inspection
> > any dead code we preserve is likely going to be unhelpful.
> >
> > So for now simply amend the documentation as to what -ftrapping-math
> > currently means with respect to range/constant propagation?
>
> So something like "Even in the presence of -ftrapping-math, VRP may fold
> operations that may cause exceptions.  For example, an addition that is
> guaranteed to produce a NAN may be replaced with a NAN, thus eliding the
> addition.  This may cause any exception that may have been generated by the
> addition to not appear in the final program."
>
> ??

If we just adjust user expectations for -ftrapping-math, shouldn't we
introduce another option that will make sure we never optimize away
floating point operations which can trap (and probably just disable
frange for that mode)?

	Jakub
On 11/18/22 11:44, Jakub Jelinek wrote:
> On Fri, Nov 18, 2022 at 11:37:42AM +0100, Aldy Hernandez wrote:
>>> Practically strictly preserving IEEE exceptions is only important for
>>> a very small audience, and for that even INEXACT will matter (but we
>>> still have -ftrapping-math by default).
>>> For that audience likely all constant / range propagation is futile and thus the
>>> easiest thing might be to simply cut that off completely?
>>>
>>> I'd say what ranger does is reasonable with -ftrapping-math given the current
>>> practice of handling this option.  There's no point in trying to preserve the
>>> (by accident) "better" handling without ranger.  Instead as Joseph says somebody
>>> would need to sit down, split -ftrapping-math, adjust the default and thoroughly
>>> document things (also with -fnon-call-exceptions which magically makes
>>> IEEE flag raising operations possibly throw exceptions).  As there's currently
>>> no code motion barriers for FP code with respect to exception flag inspection
>>> any dead code we preserve is likely going to be unhelpful.
>>>
>>> So for now simply amend the documentation as to what -ftrapping-math
>>> currently means with respect to range/constant propagation?
>>
>> So something like "Even in the presence of -ftrapping-math, VRP may fold
>> operations that may cause exceptions.  For example, an addition that is
>> guaranteed to produce a NAN may be replaced with a NAN, thus eliding the
>> addition.  This may cause any exception that may have been generated by the
>> addition to not appear in the final program."
>>
>> ??
>
> If we just adjust user expectations for -ftrapping-math, shouldn't we
> introduce another option that will make sure we never optimize away floating
> point operations which can trap (and probably just disable frange for that
> mode)?

That seems like a big hammer, but sure.  We could change
frange::supports_p() to return false for flag_severely_limiting_option :).

Aldy
I wonder if instead of disabling ranger altogether, we could disable
code changes (constant propagation, jump threading and
simplify_using_ranges)?  Or does that sound like too much hassle?  It
seems that some passes (instruction selection?) could benefit from
global ranges being available even if no propagation was done.

Just a thought.  I don't have strong opinions here.

Aldy

On Fri, Nov 18, 2022, 12:20 Aldy Hernandez <aldyh@redhat.com> wrote:
>
> On 11/18/22 11:44, Jakub Jelinek wrote:
> > On Fri, Nov 18, 2022 at 11:37:42AM +0100, Aldy Hernandez wrote:
> >>> Practically strictly preserving IEEE exceptions is only important
> >>> for a very small audience, and for that even INEXACT will matter
> >>> (but we still have -ftrapping-math by default).
> >>> For that audience likely all constant / range propagation is futile
> >>> and thus the easiest thing might be to simply cut that off completely?
> >>>
> >>> I'd say what ranger does is reasonable with -ftrapping-math given the
> >>> current practice of handling this option.  There's no point in trying
> >>> to preserve the (by accident) "better" handling without ranger.
> >>> Instead as Joseph says somebody would need to sit down, split
> >>> -ftrapping-math, adjust the default and thoroughly document things
> >>> (also with -fnon-call-exceptions which magically makes IEEE flag
> >>> raising operations possibly throw exceptions).  As there's currently
> >>> no code motion barriers for FP code with respect to exception flag
> >>> inspection any dead code we preserve is likely going to be unhelpful.
> >>>
> >>> So for now simply amend the documentation as to what -ftrapping-math
> >>> currently means with respect to range/constant propagation?
> >>
> >> So something like "Even in the presence of -ftrapping-math, VRP may fold
> >> operations that may cause exceptions.  For example, an addition that is
> >> guaranteed to produce a NAN may be replaced with a NAN, thus eliding
> >> the addition.
> >> This may cause any exception that may have been generated by the
> >> addition to not appear in the final program."
> >>
> >> ??
> >
> > If we just adjust user expectations for -ftrapping-math, shouldn't we
> > introduce another option that will make sure we never optimize away
> > floating point operations which can trap (and probably just disable
> > frange for that mode)?
>
> That seems like a big hammer, but sure.  We could change
> frange::supports_p() to return false for flag_severely_limiting_option :).
>
> Aldy
>
> On 18.11.2022 at 11:44, Jakub Jelinek <jakub@redhat.com> wrote:
>
> On Fri, Nov 18, 2022 at 11:37:42AM +0100, Aldy Hernandez wrote:
>>> Practically strictly preserving IEEE exceptions is only important for
>>> a very small audience, and for that even INEXACT will matter (but we
>>> still have -ftrapping-math by default).
>>> For that audience likely all constant / range propagation is futile and thus the
>>> easiest thing might be to simply cut that off completely?
>>>
>>> I'd say what ranger does is reasonable with -ftrapping-math given the current
>>> practice of handling this option.  There's no point in trying to preserve the
>>> (by accident) "better" handling without ranger.  Instead as Joseph says somebody
>>> would need to sit down, split -ftrapping-math, adjust the default and thoroughly
>>> document things (also with -fnon-call-exceptions which magically makes
>>> IEEE flag raising operations possibly throw exceptions).  As there's currently
>>> no code motion barriers for FP code with respect to exception flag inspection
>>> any dead code we preserve is likely going to be unhelpful.
>>>
>>> So for now simply amend the documentation as to what -ftrapping-math
>>> currently means with respect to range/constant propagation?
>>
>> So something like "Even in the presence of -ftrapping-math, VRP may fold
>> operations that may cause exceptions.  For example, an addition that is
>> guaranteed to produce a NAN may be replaced with a NAN, thus eliding the
>> addition.  This may cause any exception that may have been generated by the
>> addition to not appear in the final program."
>>
>> ??
>
> If we just adjust user expectations for -ftrapping-math, shouldn't we
> introduce another option that will make sure we never optimize away floating
> point operations which can trap (and probably just disable frange for that
> mode)?
I think it's just like -frounding-math and FENV access - the intent is
there but the implementation is known buggy (and disabling optimizations
doesn't fully fix it).

Richard

> 	Jakub
>
diff --git a/gcc/gimple-range-op.cc b/gcc/gimple-range-op.cc
index 7764166d5fb..240cd8b6a11 100644
--- a/gcc/gimple-range-op.cc
+++ b/gcc/gimple-range-op.cc
@@ -43,6 +43,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "range.h"
 #include "value-query.h"
 #include "gimple-range.h"
+#include "fold-const-call.h"
 
 // Given stmt S, fill VEC, up to VEC_SIZE elements, with relevant ssa-names
 // on the statement.  For efficiency, it is an error to not pass in enough
@@ -301,6 +302,41 @@ public:
   }
 } op_cfn_constant_p;
 
+// Implement range operator for SQRT.
+class cfn_sqrt : public range_operator_float
+{
+  using range_operator_float::fold_range;
+private:
+  REAL_VALUE_TYPE real_sqrt (const REAL_VALUE_TYPE &arg, tree type) const
+  {
+    tree targ = build_real (type, arg);
+    tree res = fold_const_call (as_combined_fn (BUILT_IN_SQRT), type, targ);
+    return *TREE_REAL_CST_PTR (res);
+  }
+  void rv_fold (REAL_VALUE_TYPE &lb, REAL_VALUE_TYPE &ub, bool &maybe_nan,
+		tree type,
+		const REAL_VALUE_TYPE &lh_lb,
+		const REAL_VALUE_TYPE &lh_ub,
+		const REAL_VALUE_TYPE &,
+		const REAL_VALUE_TYPE &,
+		relation_kind) const final override
+  {
+    if (real_compare (LT_EXPR, &lh_ub, &dconst0))
+      {
+	real_nan (&lb, "", 0, TYPE_MODE (type));
+	ub = lb;
+	maybe_nan = true;
+	return;
+      }
+    lb = real_sqrt (lh_lb, type);
+    ub = real_sqrt (lh_ub, type);
+    if (real_compare (GE_EXPR, &lh_lb, &dconst0))
+      maybe_nan = false;
+    else
+      maybe_nan = true;
+  }
+} fop_cfn_sqrt;
+
 // Implement range operator for CFN_BUILT_IN_SIGNBIT.
 class cfn_signbit : public range_operator_float
 {
@@ -907,6 +943,12 @@ gimple_range_op_handler::maybe_builtin_call ()
       m_int = &op_cfn_parity;
       break;
 
+    CASE_CFN_SQRT:
+    CASE_CFN_SQRT_FN:
+      m_valid = true;
+      m_float = &fop_cfn_sqrt;
+      break;
+
     default:
       break;
     }