[4/4] math: Improve fmodf
Checks

Context               | Check   | Description
dj/TryBot-apply_patch | success | Patch applied to master at the time it was sent
dj/TryBot-32bit       | success | Build for i686
Commit Message
This uses a new algorithm similar to the one proposed earlier [1].
With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
the simplest implementation is:
mx * 2^ex == 2 * mx * 2^(ex - 1)
while (ex > ey)
{
mx *= 2;
--ex;
mx %= my;
}
With mx/my being the mantissas of single-precision floating-point
numbers, on each step the argument reduction can be improved by 8 bits
(the bit width of uint32_t minus MANTISSA_WIDTH and the sign bit):
while (ex > ey)
{
mx <<= 8;
ex -= 8;
mx %= my;
}
The implementation uses the clz and ctz builtins, along with shifts, to
convert hx/hy back to floats.  Unlike the original patch, this patch
assumes the modulo/divide operation is slow, so it uses multiplication
by inverted values instead.
I see the following performance improvements using the fmodf benchtests
(results show only the 'mean' value):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.3214
x86_64 (Ryzen 9) | normal | 85.4096 | 52.6625
x86_64 (Ryzen 9) | close-exponents | 19.1072 | 17.4622
aarch64 (N1) | subnormal | 10.2182 | 6.81778
aarch64 (N1) | normal | 60.0616 | 158.339
aarch64 (N1) | close-exponents | 11.5256 | 8.67894
I also see similar improvements on arm-linux-gnueabihf when running on
the N1 aarch64 chip, which uses a lot of soft-fp code (for modulo and
multiplication):
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
armhf (N1) | subnormal | 11.6662 | 10.8955
armhf (N1) | normal | 69.2759 | 35.4184
armhf (N1) | close-exponents | 13.6472 | 17.8539
Instead of the math_private.h definitions, I used math_config.h, which
is used by the newer math implementations.
Co-authored-by: kirill <kirill.okhotnikov@gmail.com>
[1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html
---
sysdeps/ieee754/flt-32/e_fmodf.c | 230 ++++++++++++++++-----------
sysdeps/ieee754/flt-32/math_config.h | 89 +++++++++++
2 files changed, 226 insertions(+), 93 deletions(-)
Comments
On Fri, Mar 10, 2023 at 9:59 AM Adhemerval Zanella
<adhemerval.zanella@linaro.org> wrote:
>
> This uses a new algorithm similar to already proposed earlier [1].
> With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
> the simplest implementation is:
>
> mx * 2^ex == 2 * mx * 2^(ex - 1)
>
> while (ex > ey)
> {
> mx *= 2;
> --ex;
> mx %= my;
> }
>
> With mx/my being mantissa of double floating pointer, on each step the
> argument reduction can be improved 8 (which is sizeof of uint32_t minus
> MANTISSA_WIDTH plus the signal bit):
>
> while (ex > ey)
> {
> mx << 8;
> ex -= 8;
> mx %= my;
> } */
>
> The implementation uses builtin clz and ctz, along with shifts to
> convert hx/hy back to doubles. Different than the original patch,
Should all references to double be float?
> this path assume modulo/divide operation is slow, so use multiplication
> with invert values.
>
> I see the following performance improvements using fmod benchtests
fmodf?
Thanks for your work.
> (result only show the 'mean' result):
>
> Architecture | Input | master | patch
> -----------------|-----------------|----------|--------
> x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.3214
> x86_64 (Ryzen 9) | normal | 85.4096 | 52.6625
> x86_64 (Ryzen 9) | close-exponents | 19.1072 | 17.4622
> aarch64 (N1) | subnormal | 10.2182 | 6.81778
> aarch64 (N1) | normal | 60.0616 | 158.339
> aarch64 (N1) | close-exponents | 11.5256 | 8.67894
>
> I also see similar improvements on arm-linux-gnueabihf when running on
> the N1 aarch64 chips, where it a lot of soft-fp implementation (for
> modulo, and multiplication):
>
> Architecture | Input | master | patch
> -----------------|-----------------|----------|--------
> armhf (N1) | subnormal | 11.6662 | 10.8955
> armhf (N1) | normal | 69.2759 | 35.4184
> armhf (N1) | close-exponents | 13.6472 | 17.8539
>
> Instead of using the math_private.h definitions, I used the
> math_config.h instead which is used on newer math implementations.
>
> Co-authored-by: kirill <kirill.okhotnikov@gmail.com>
>
> [1] https://sourceware.org/pipermail/libc-alpha/2020-November/119794.html
> ---
> sysdeps/ieee754/flt-32/e_fmodf.c | 230 ++++++++++++++++-----------
> sysdeps/ieee754/flt-32/math_config.h | 89 +++++++++++
> 2 files changed, 226 insertions(+), 93 deletions(-)
>
> diff --git a/sysdeps/ieee754/flt-32/e_fmodf.c b/sysdeps/ieee754/flt-32/e_fmodf.c
> index b71c4f754f..b516605ce7 100644
> --- a/sysdeps/ieee754/flt-32/e_fmodf.c
> +++ b/sysdeps/ieee754/flt-32/e_fmodf.c
> @@ -1,102 +1,146 @@
> -/* e_fmodf.c -- float version of e_fmod.c.
> - */
> -
> -/*
> - * ====================================================
> - * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
> - *
> - * Developed at SunPro, a Sun Microsystems, Inc. business.
> - * Permission to use, copy, modify, and distribute this
> - * software is freely granted, provided that this notice
> - * is preserved.
> - * ====================================================
> - */
> -
> -/*
> - * __ieee754_fmodf(x,y)
> - * Return x mod y in exact arithmetic
> - * Method: shift and subtract
> - */
> +/* Floating-point remainder function.
> + Copyright (C) 2023 Free Software Foundation, Inc.
> + This file is part of the GNU C Library.
> +
> + The GNU C Library is free software; you can redistribute it and/or
> + modify it under the terms of the GNU Lesser General Public
> + License as published by the Free Software Foundation; either
> + version 2.1 of the License, or (at your option) any later version.
> +
> + The GNU C Library is distributed in the hope that it will be useful,
> + but WITHOUT ANY WARRANTY; without even the implied warranty of
> + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
> + Lesser General Public License for more details.
> +
> + You should have received a copy of the GNU Lesser General Public
> + License along with the GNU C Library; if not, see
> + <https://www.gnu.org/licenses/>. */
>
> -#include <math.h>
> -#include <math_private.h>
> #include <libm-alias-finite.h>
> +#include <math.h>
> +#include "math_config.h"
> +
> +/* With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers), the
> + simplest implementation is:
> +
> + mx * 2^ex == 2 * mx * 2^(ex - 1)
>
> -static const float one = 1.0, Zero[] = {0.0, -0.0,};
> + while (ex > ey)
> + {
> + mx *= 2;
> + --ex;
> + mx %= my;
> + }
> +
> + With mx/my being mantissa of double floating pointer, on each step the
> + argument reduction can be improved 11 (which is sizeo of uint64_t minus
> + MANTISSA_WIDTH plus the signal bit):
> +
> + while (ex > ey)
> + {
> + mx << 11;
> + ex -= 11;
> + mx %= my;
> + } */
>
> float
> __ieee754_fmodf (float x, float y)
> {
> - int32_t n,hx,hy,hz,ix,iy,sx,i;
> -
> - GET_FLOAT_WORD(hx,x);
> - GET_FLOAT_WORD(hy,y);
> - sx = hx&0x80000000; /* sign of x */
> - hx ^=sx; /* |x| */
> - hy &= 0x7fffffff; /* |y| */
> -
> - /* purge off exception values */
> - if(hy==0||(hx>=0x7f800000)|| /* y=0,or x not finite */
> - (hy>0x7f800000)) /* or y is NaN */
> - return (x*y)/(x*y);
> - if(hx<hy) return x; /* |x|<|y| return x */
> - if(hx==hy)
> - return Zero[(uint32_t)sx>>31]; /* |x|=|y| return x*0*/
> -
> - /* determine ix = ilogb(x) */
> - if(hx<0x00800000) { /* subnormal x */
> - for (ix = -126,i=(hx<<8); i>0; i<<=1) ix -=1;
> - } else ix = (hx>>23)-127;
> -
> - /* determine iy = ilogb(y) */
> - if(hy<0x00800000) { /* subnormal y */
> - for (iy = -126,i=(hy<<8); i>=0; i<<=1) iy -=1;
> - } else iy = (hy>>23)-127;
> -
> - /* set up {hx,lx}, {hy,ly} and align y to x */
> - if(ix >= -126)
> - hx = 0x00800000|(0x007fffff&hx);
> - else { /* subnormal x, shift x to normal */
> - n = -126-ix;
> - hx = hx<<n;
> - }
> - if(iy >= -126)
> - hy = 0x00800000|(0x007fffff&hy);
> - else { /* subnormal y, shift y to normal */
> - n = -126-iy;
> - hy = hy<<n;
> - }
> -
> - /* fix point fmod */
> - n = ix - iy;
> - while(n--) {
> - hz=hx-hy;
> - if(hz<0){hx = hx+hx;}
> - else {
> - if(hz==0) /* return sign(x)*0 */
> - return Zero[(uint32_t)sx>>31];
> - hx = hz+hz;
> - }
> - }
> - hz=hx-hy;
> - if(hz>=0) {hx=hz;}
> -
> - /* convert back to floating value and restore the sign */
> - if(hx==0) /* return sign(x)*0 */
> - return Zero[(uint32_t)sx>>31];
> - while(hx<0x00800000) { /* normalize x */
> - hx = hx+hx;
> - iy -= 1;
> - }
> - if(iy>= -126) { /* normalize output */
> - hx = ((hx-0x00800000)|((iy+127)<<23));
> - SET_FLOAT_WORD(x,hx|sx);
> - } else { /* subnormal output */
> - n = -126 - iy;
> - hx >>= n;
> - SET_FLOAT_WORD(x,hx|sx);
> - x *= one; /* create necessary signal */
> - }
> - return x; /* exact output */
> + uint32_t hx = asuint (x);
> + uint32_t hy = asuint (y);
> +
> + uint32_t sx = hx & SIGN_MASK;
> + /* Get |x| and |y|. */
> + hx ^= sx;
> + hy &= ~SIGN_MASK;
> +
> + /* Special cases:
> + - If x or y is a Nan, NaN is returned.
> + - If x is an inifinity, a NaN is returned.
> + - If y is zero, Nan is returned.
> + - If x is +0/-0, and y is not zero, +0/-0 is returned. */
> + if (__glibc_unlikely (hy == 0 || hx >= EXPONENT_MASK || hy > EXPONENT_MASK))
> + return (x * y) / (x * y);
> +
> + if (__glibc_unlikely (hx <= hy))
> + {
> + if (hx < hy)
> + return x;
> + return sx ? -0.0 : 0.0;
> + }
> +
> + int ex = get_unbiased_exponent (hx);
> + int ey = get_unbiased_exponent (hy);
> +
> + /* Common case where exponents are close: ey >= -103 and |x/y| < 2^8, */
> + if (__glibc_likely (ey > MANTISSA_WIDTH && ex - ey <= EXPONENT_WIDTH))
> + {
> + uint32_t mx = get_explicit_mantissa (hx);
> + uint32_t my = get_explicit_mantissa (hy);
> +
> + uint32_t d = (ex == ey) ? (mx - my) : (mx << (ex - ey)) % my;
> + if (d == 0)
> + return 0.0;
> + return make_float (d, ey - 1, sx);
> + }
> +
> + /* Special case, both x and y are subnormal. */
> + if (__glibc_unlikely (ex == 0 && ey == 0))
> + return asfloat (hx % hy);
> +
> + /* Convert |x| and |y| to 'mx + 2^ex' and 'my + 2^ey'. Assume that hx is
> + not subnormal by conditions above. */
> + uint32_t mx = get_explicit_mantissa (hx);
> + ex--;
> +
> + uint32_t my = get_explicit_mantissa (hy);
> + int lead_zeros_my = EXPONENT_WIDTH;
> + if (__glibc_likely (ey > 0))
> + ey--;
> + else
> + {
> + my = get_mantissa (hy);
> + lead_zeros_my = __builtin_clz (my);
> + }
> +
> + int tail_zeros_my = __builtin_ctz (my);
> + int sides_zeroes = lead_zeros_my + tail_zeros_my;
> + int exp_diff = ex - ey;
> +
> + int right_shift = exp_diff < tail_zeros_my ? exp_diff : tail_zeros_my;
> + my >>= right_shift;
> + exp_diff -= right_shift;
> + ey += right_shift;
> +
> + int left_shift = exp_diff < EXPONENT_WIDTH ? exp_diff : EXPONENT_WIDTH;
> + mx <<= left_shift;
> + exp_diff -= left_shift;
> +
> + mx %= my;
> +
> + if (__glibc_unlikely (mx == 0))
> + return sx ? -0.0 : 0.0;
> +
> + if (exp_diff == 0)
> + return make_float (my, ey, sx);
> +
> + /* Assume modulo/divide operation is slow, so use multiplication with invert
> + values. */
> + uint32_t inv_hy = UINT32_MAX / my;
> + while (exp_diff > sides_zeroes) {
> + exp_diff -= sides_zeroes;
> + uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - sides_zeroes);
> + mx <<= sides_zeroes;
> + mx -= hd * my;
> + while (__glibc_unlikely (mx > my))
> + mx -= my;
> + }
> + uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - exp_diff);
> + mx <<= exp_diff;
> + mx -= hd * my;
> + while (__glibc_unlikely (mx > my))
> + mx -= my;
> +
> + return make_float (mx, ey, sx);
> }
> libm_alias_finite (__ieee754_fmodf, __fmodf)
> diff --git a/sysdeps/ieee754/flt-32/math_config.h b/sysdeps/ieee754/flt-32/math_config.h
> index 23045f59d6..cdab3a36ef 100644
> --- a/sysdeps/ieee754/flt-32/math_config.h
> +++ b/sysdeps/ieee754/flt-32/math_config.h
> @@ -110,6 +110,95 @@ issignalingf_inline (float x)
> return 2 * (ix ^ 0x00400000) > 2 * 0x7fc00000UL;
> }
>
> +#define BIT_WIDTH 32
> +#define MANTISSA_WIDTH 23
> +#define EXPONENT_WIDTH 8
> +#define MANTISSA_MASK 0x007fffff
> +#define EXPONENT_MASK 0x7f800000
> +#define EXP_MANT_MASK 0x7fffffff
> +#define QUIET_NAN_MASK 0x00400000
> +#define SIGN_MASK 0x80000000
> +
> +static inline bool
> +is_nan (uint32_t x)
> +{
> + return (x & EXP_MANT_MASK) > EXPONENT_MASK;
> +}
> +
> +static inline bool
> +is_quiet_nan (uint32_t x)
> +{
> + return (x & EXP_MANT_MASK) == (EXPONENT_MASK | QUIET_NAN_MASK);
> +}
> +
> +static inline bool
> +is_inf_or_nan (uint32_t x)
> +{
> + return (x & EXPONENT_MASK) == EXPONENT_MASK;
> +}
> +
> +static inline uint16_t
> +get_unbiased_exponent (uint32_t x)
> +{
> + return (x & EXPONENT_MASK) >> MANTISSA_WIDTH;
> +}
> +
> +/* Return mantissa with the implicit bit set iff X is a normal number. */
> +static inline uint32_t
> +get_explicit_mantissa (uint32_t x)
> +{
> + uint32_t p1 = (get_unbiased_exponent (x) > 0 && !is_inf_or_nan (x)
> + ? (MANTISSA_MASK + 1) : 0);
> + uint32_t p2 = (x & MANTISSA_MASK);
> + return p1 | p2;
> +}
> +
> +static inline uint32_t
> +set_mantissa (uint32_t x, uint32_t m)
> +{
> + m &= MANTISSA_MASK;
> + x &= ~(MANTISSA_MASK);
> + return x |= m;
> +}
> +
> +static inline uint32_t
> +get_mantissa (uint32_t x)
> +{
> + return x & MANTISSA_MASK;
> +}
> +
> +static inline uint32_t
> +set_unbiased_exponent (uint32_t x, uint32_t e)
> +{
> + e = (e << MANTISSA_WIDTH) & EXPONENT_MASK;
> + x &= ~(EXPONENT_MASK);
> + return x |= e;
> +}
> +
> +/* Convert integer number X, unbiased exponent EP, and sign S to double:
> +
> + result = X * 2^(EP+1 - exponent_bias)
> +
> + NB: zero is not supported. */
> +static inline double
> +make_float (uint32_t x, int ep, uint32_t s)
> +{
> + int lz = __builtin_clz (x) - EXPONENT_WIDTH;
> + x <<= lz;
> + ep -= lz;
> +
> + uint32_t r = 0;
> + if (__glibc_likely (ep >= 0))
> + {
> + r = set_mantissa (r, x);
> + r = set_unbiased_exponent (r, ep + 1);
> + }
> + else
> + r = set_mantissa (r, x >> -ep);
> +
> + return asfloat (r | s);
> +}
> +
> #define NOINLINE __attribute__ ((noinline))
>
> attribute_hidden float __math_oflowf (uint32_t);
> --
> 2.34.1
>
--
H.J.
On Fri, Mar 10, 2023 at 1:01 PM Adhemerval Zanella via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> This uses a new algorithm similar to already proposed earlier [1].
> With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
> the simplest implementation is:
>
> mx * 2^ex == 2 * mx * 2^(ex - 1)
>
> while (ex > ey)
> {
> mx *= 2;
> --ex;
> mx %= my;
> }
>
> With mx/my being mantissa of double floating pointer, on each step the
> argument reduction can be improved 8 (which is sizeof of uint32_t minus
> MANTISSA_WIDTH plus the signal bit):
>
> while (ex > ey)
> {
> mx << 8;
> ex -= 8;
> mx %= my;
> } */
>
> The implementation uses builtin clz and ctz, along with shifts to
> convert hx/hy back to doubles. Different than the original patch,
> this path assume modulo/divide operation is slow, so use multiplication
> with invert values.
>
> I see the following performance improvements using fmod benchtests
> (result only show the 'mean' result):
>
> Architecture | Input | master | patch
> -----------------|-----------------|----------|--------
> x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.3214
> x86_64 (Ryzen 9) | normal | 85.4096 | 52.6625
> x86_64 (Ryzen 9) | close-exponents | 19.1072 | 17.4622
> aarch64 (N1) | subnormal | 10.2182 | 6.81778
> aarch64 (N1) | normal | 60.0616 | 158.339
Is this line correct? 60 -> 158?
On 13/03/23 12:19, Matt Turner wrote:
> On Fri, Mar 10, 2023 at 1:01 PM Adhemerval Zanella via Libc-alpha
> <libc-alpha@sourceware.org> wrote:
>>
>> This uses a new algorithm similar to already proposed earlier [1].
>> With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers),
>> the simplest implementation is:
>>
>> mx * 2^ex == 2 * mx * 2^(ex - 1)
>>
>> while (ex > ey)
>> {
>> mx *= 2;
>> --ex;
>> mx %= my;
>> }
>>
>> With mx/my being mantissa of double floating pointer, on each step the
>> argument reduction can be improved 8 (which is sizeof of uint32_t minus
>> MANTISSA_WIDTH plus the signal bit):
>>
>> while (ex > ey)
>> {
>> mx << 8;
>> ex -= 8;
>> mx %= my;
>> } */
>>
>> The implementation uses builtin clz and ctz, along with shifts to
>> convert hx/hy back to doubles. Different than the original patch,
>> this path assume modulo/divide operation is slow, so use multiplication
>> with invert values.
>>
>> I see the following performance improvements using fmod benchtests
>> (result only show the 'mean' result):
>>
>> Architecture | Input | master | patch
>> -----------------|-----------------|----------|--------
>> x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.3214
>> x86_64 (Ryzen 9) | normal | 85.4096 | 52.6625
>> x86_64 (Ryzen 9) | close-exponents | 19.1072 | 17.4622
>> aarch64 (N1) | subnormal | 10.2182 | 6.81778
>> aarch64 (N1) | normal | 60.0616 | 158.339
>
> Is this line correct? 60 -> 158?
Nope, it was an oversight on my part.  I reran the benchmark to
double-check, and the correct numbers are:
Architecture | Input | master | patch
-----------------|-----------------|----------|--------
x86_64 (Ryzen 9) | subnormals | 17.2549 | 12.3214
x86_64 (Ryzen 9) | normal | 85.4096 | 52.6625
x86_64 (Ryzen 9) | close-exponents | 19.1072 | 17.4622
aarch64 (N1) | subnormal | 10.2182 | 6.81778
aarch64 (N1) | normal | 60.0616 | 21.2581
aarch64 (N1) | close-exponents | 11.5256 | 8.67894
Hi Adhemerval,
This looks good overall. I guess we still need error handling and the wrapper
disabled, since it spends 1/3 of the time in the useless wrapper, but that can
be a separate patch.
A few comments below: there are a few things that look wrong, and I think most
of the helper functions just add extra complexity and branches for no gain:
+ uint32_t sx = hx & SIGN_MASK;
+ /* Get |x| and |y|. */
+ hx ^= sx;
+ hy &= ~SIGN_MASK;
+
+ /* Special cases:
+ - If x or y is a Nan, NaN is returned.
+ - If x is an inifinity, a NaN is returned.
+ - If y is zero, Nan is returned.
+ - If x is +0/-0, and y is not zero, +0/-0 is returned. */
+ if (__glibc_unlikely (hy == 0 || hx >= EXPONENT_MASK || hy > EXPONENT_MASK))
+ return (x * y) / (x * y);
+
+ if (__glibc_unlikely (hx <= hy))
+ {
+ if (hx < hy)
+ return x;
+ return sx ? -0.0 : 0.0;
This should be return asfloat (sx);
+ }
+
+ int ex = get_unbiased_exponent (hx);
+ int ey = get_unbiased_exponent (hy);
Should be hx >> MANTISSA_WIDTH since we cleared the sign bits (now we don't
need to add get_unbiased_exponent).
+ /* Common case where exponents are close: ey >= -103 and |x/y| < 2^8, */
+ if (__glibc_likely (ey > MANTISSA_WIDTH && ex - ey <= EXPONENT_WIDTH))
+ {
+ uint32_t mx = get_explicit_mantissa (hx);
+ uint32_t my = get_explicit_mantissa (hy);
Note this is equivalent to:
mx = (hx & MANTISSA_MASK) | (MANTISSA_MASK + 1);
So we don't need get_explicit_mantissa (or we could change it to do the above).
If we do this before the if statement, we don't need to repeat it below.
+ uint32_t d = (ex == ey) ? (mx - my) : (mx << (ex - ey)) % my;
+ if (d == 0)
+ return 0.0;
Looks like a bug, should be asfloat (sx)?
+ return make_float (d, ey - 1, sx);
+ }
+
+ /* Special case, both x and y are subnormal. */
+ if (__glibc_unlikely (ex == 0 && ey == 0))
+ return asfloat (hx % hy);
Similarly, shouldn't this be asfloat (sx | (hx % hy))?
+ /* Convert |x| and |y| to 'mx + 2^ex' and 'my + 2^ey'. Assume that hx is
+ not subnormal by conditions above. */
+ uint32_t mx = get_explicit_mantissa (hx);
+ ex--;
+
+ uint32_t my = get_explicit_mantissa (hy);
If we set mx/my above then this isn't needed.
+ int lead_zeros_my = EXPONENT_WIDTH;
+ if (__glibc_likely (ey > 0))
+ ey--;
+ else
+ {
+ my = get_mantissa (hy);
This is really my = hy; since we know hy is positive denormal.
+ lead_zeros_my = __builtin_clz (my);
+ }
+
+ int tail_zeros_my = __builtin_ctz (my);
+ int sides_zeroes = lead_zeros_my + tail_zeros_my;
+ int exp_diff = ex - ey;
+
+ int right_shift = exp_diff < tail_zeros_my ? exp_diff : tail_zeros_my;
+ my >>= right_shift;
+ exp_diff -= right_shift;
+ ey += right_shift;
+
+ int left_shift = exp_diff < EXPONENT_WIDTH ? exp_diff : EXPONENT_WIDTH;
+ mx <<= left_shift;
+ exp_diff -= left_shift;
+
+ mx %= my;
+
+ if (__glibc_unlikely (mx == 0))
+ return sx ? -0.0 : 0.0;
Should be asfloat (sx);
+ if (exp_diff == 0)
+ return make_float (my, ey, sx);
+
+ /* Assume modulo/divide operation is slow, so use multiplication with invert
+ values. */
+ uint32_t inv_hy = UINT32_MAX / my;
+ while (exp_diff > sides_zeroes) {
+ exp_diff -= sides_zeroes;
+ uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - sides_zeroes);
+ mx <<= sides_zeroes;
+ mx -= hd * my;
+ while (__glibc_unlikely (mx > my))
+ mx -= my;
+ }
+ uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - exp_diff);
+ mx <<= exp_diff;
+ mx -= hd * my;
+ while (__glibc_unlikely (mx > my))
+ mx -= my;
+
+ return make_float (mx, ey, sx);
}
libm_alias_finite (__ieee754_fmodf, __fmodf)
diff --git a/sysdeps/ieee754/flt-32/math_config.h b/sysdeps/ieee754/flt-32/math_config.h
index 23045f59d6..cdab3a36ef 100644
--- a/sysdeps/ieee754/flt-32/math_config.h
+++ b/sysdeps/ieee754/flt-32/math_config.h
@@ -110,6 +110,95 @@ issignalingf_inline (float x)
return 2 * (ix ^ 0x00400000) > 2 * 0x7fc00000UL;
}
+#define BIT_WIDTH 32
+#define MANTISSA_WIDTH 23
+#define EXPONENT_WIDTH 8
+#define MANTISSA_MASK 0x007fffff
+#define EXPONENT_MASK 0x7f800000
+#define EXP_MANT_MASK 0x7fffffff
+#define QUIET_NAN_MASK 0x00400000
+#define SIGN_MASK 0x80000000
+
+static inline bool
+is_nan (uint32_t x)
+{
+ return (x & EXP_MANT_MASK) > EXPONENT_MASK;
+}
+
+static inline bool
+is_quiet_nan (uint32_t x)
+{
+ return (x & EXP_MANT_MASK) == (EXPONENT_MASK | QUIET_NAN_MASK);
+}
+
+static inline bool
+is_inf_or_nan (uint32_t x)
+{
+ return (x & EXPONENT_MASK) == EXPONENT_MASK;
+}
+
+static inline uint16_t
+get_unbiased_exponent (uint32_t x)
+{
+ return (x & EXPONENT_MASK) >> MANTISSA_WIDTH;
+}
+
+/* Return mantissa with the implicit bit set iff X is a normal number. */
+static inline uint32_t
+get_explicit_mantissa (uint32_t x)
+{
+ uint32_t p1 = (get_unbiased_exponent (x) > 0 && !is_inf_or_nan (x)
+ ? (MANTISSA_MASK + 1) : 0);
+ uint32_t p2 = (x & MANTISSA_MASK);
+ return p1 | p2;
+}
I don't think we need this (and anything called by it).
+static inline uint32_t
+set_mantissa (uint32_t x, uint32_t m)
+{
+ m &= MANTISSA_MASK;
+ x &= ~(MANTISSA_MASK);
+ return x |= m;
+}
+
+static inline uint32_t
+get_mantissa (uint32_t x)
+{
+ return x & MANTISSA_MASK;
+}
+
+static inline uint32_t
+set_unbiased_exponent (uint32_t x, uint32_t e)
+{
+ e = (e << MANTISSA_WIDTH) & EXPONENT_MASK;
+ x &= ~(EXPONENT_MASK);
+ return x |= e;
+}
+
+/* Convert integer number X, unbiased exponent EP, and sign S to double:
+
+ result = X * 2^(EP+1 - exponent_bias)
+
+ NB: zero is not supported. */
+static inline double
+make_float (uint32_t x, int ep, uint32_t s)
+{
+ int lz = __builtin_clz (x) - EXPONENT_WIDTH;
+ x <<= lz;
+ ep -= lz;
+
+ uint32_t r = 0;
+ if (__glibc_likely (ep >= 0))
+ {
+ r = set_mantissa (r, x);
+ r = set_unbiased_exponent (r, ep + 1);
+ }
+ else
+ r = set_mantissa (r, x >> -ep);
+
+ return asfloat (r | s);
+}
This is overly complex; make_float is as trivial as:
static inline double
make_float (uint32_t x, int ep, uint32_t s)
{
int lz = __builtin_clz (x) - EXPONENT_WIDTH;
x <<= lz;
ep -= lz;
if (__glibc_unlikely (ep < 0))
{
x >>= -ep;
ep = 0;
}
return asfloat (s + x + (ep << MANTISSA_WIDTH));
}
And now you don't need set_unbiased_exponent and set_mantissa either.
Cheers,
Wilco
On 14/03/23 13:42, Wilco Dijkstra wrote:
> Hi Adhemerval,
>
> This looks good overall. I guess we still needs error handling and the wrapper
> disabled since it spends 1/3 of the time in the useless wrapper but that can be
> a separate patch.
The wrapper should be simple to add.
>
> A few comments below, there are a few things that look wrong, and I think most
> of the helper functions just add extra complexity and branches for no gain:
>
>
> + uint32_t sx = hx & SIGN_MASK;
> + /* Get |x| and |y|. */
> + hx ^= sx;
> + hy &= ~SIGN_MASK;
> +
> + /* Special cases:
> + - If x or y is a Nan, NaN is returned.
> + - If x is an inifinity, a NaN is returned.
> + - If y is zero, Nan is returned.
> + - If x is +0/-0, and y is not zero, +0/-0 is returned. */
> + if (__glibc_unlikely (hy == 0 || hx >= EXPONENT_MASK || hy > EXPONENT_MASK))
> + return (x * y) / (x * y);
> +
> + if (__glibc_unlikely (hx <= hy))
> + {
> + if (hx < hy)
> + return x;
> + return sx ? -0.0 : 0.0;
>
> This should be return asfloat (sx);
Ack.
>
> + }
> +
> + int ex = get_unbiased_exponent (hx);
> + int ey = get_unbiased_exponent (hy);
>
> Should be hx >> MANTISSA_WIDTH since we cleared the sign bits (now we don't
> need to add get_unbiased_exponent).
Ack.
>
> + /* Common case where exponents are close: ey >= -103 and |x/y| < 2^8, */
> + if (__glibc_likely (ey > MANTISSA_WIDTH && ex - ey <= EXPONENT_WIDTH))
> + {
> + uint32_t mx = get_explicit_mantissa (hx);
> + uint32_t my = get_explicit_mantissa (hy);
>
> Note this is equivalent to:
>
> mx = (hx & MANTISSA_MASK) | (MANTISSA_MASK + 1);
Ack.
>
> So we don't need get_explicit_mantissa (or we could change it to do the above).
> If we do this before the if statement, we don't need to repeat it below.
>
> + uint32_t d = (ex == ey) ? (mx - my) : (mx << (ex - ey)) % my;
> + if (d == 0)
> + return 0.0;
>
> Looks like a bug, should be asfloat (sx)?
In fact I think this check is not really required, since we already check
whether the inputs are equal.
>
> + return make_float (d, ey - 1, sx);
> + }
> +
> + /* Special case, both x and y are subnormal. */
> + if (__glibc_unlikely (ex == 0 && ey == 0))
> + return asfloat (hx % hy);
>
> Similarly, shouldn't this be asfloat (sx | (hx % hy))?
Yes, I will add tests to check for it.
>
> + /* Convert |x| and |y| to 'mx + 2^ex' and 'my + 2^ey'. Assume that hx is
> + not subnormal by conditions above. */
> + uint32_t mx = get_explicit_mantissa (hx);
> + ex--;
> +
> + uint32_t my = get_explicit_mantissa (hy);
>
> If we set mx/my above then this isn't needed.
Right, I think we can mask out the mantissa and set the implicit bit then.
>
> + int lead_zeros_my = EXPONENT_WIDTH;
> + if (__glibc_likely (ey > 0))
> + ey--;
> + else
> + {
> + my = get_mantissa (hy);
>
> This is really my = hy; since we know hy is positive denormal.
Indeed.
>
> + lead_zeros_my = __builtin_clz (my);
> + }
> +
> + int tail_zeros_my = __builtin_ctz (my);
> + int sides_zeroes = lead_zeros_my + tail_zeros_my;
> + int exp_diff = ex - ey;
> +
> + int right_shift = exp_diff < tail_zeros_my ? exp_diff : tail_zeros_my;
> + my >>= right_shift;
> + exp_diff -= right_shift;
> + ey += right_shift;
> +
> + int left_shift = exp_diff < EXPONENT_WIDTH ? exp_diff : EXPONENT_WIDTH;
> + mx <<= left_shift;
> + exp_diff -= left_shift;
> +
> + mx %= my;
> +
> + if (__glibc_unlikely (mx == 0))
> + return sx ? -0.0 : 0.0;
>
> Should be asfloat (sx);
Ack.
>
> + if (exp_diff == 0)
> + return make_float (my, ey, sx);
> +
> + /* Assume modulo/divide operation is slow, so use multiplication with invert
> + values. */
> + uint32_t inv_hy = UINT32_MAX / my;
> + while (exp_diff > sides_zeroes) {
> + exp_diff -= sides_zeroes;
> + uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - sides_zeroes);
> + mx <<= sides_zeroes;
> + mx -= hd * my;
> + while (__glibc_unlikely (mx > my))
> + mx -= my;
> + }
> + uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - exp_diff);
> + mx <<= exp_diff;
> + mx -= hd * my;
> + while (__glibc_unlikely (mx > my))
> + mx -= my;
> +
> + return make_float (mx, ey, sx);
> }
> libm_alias_finite (__ieee754_fmodf, __fmodf)
> diff --git a/sysdeps/ieee754/flt-32/math_config.h b/sysdeps/ieee754/flt-32/math_config.h
> index 23045f59d6..cdab3a36ef 100644
> --- a/sysdeps/ieee754/flt-32/math_config.h
> +++ b/sysdeps/ieee754/flt-32/math_config.h
> @@ -110,6 +110,95 @@ issignalingf_inline (float x)
> return 2 * (ix ^ 0x00400000) > 2 * 0x7fc00000UL;
> }
>
> +#define BIT_WIDTH 32
> +#define MANTISSA_WIDTH 23
> +#define EXPONENT_WIDTH 8
> +#define MANTISSA_MASK 0x007fffff
> +#define EXPONENT_MASK 0x7f800000
> +#define EXP_MANT_MASK 0x7fffffff
> +#define QUIET_NAN_MASK 0x00400000
> +#define SIGN_MASK 0x80000000
> +
> +static inline bool
> +is_nan (uint32_t x)
> +{
> + return (x & EXP_MANT_MASK) > EXPONENT_MASK;
> +}
> +
> +static inline bool
> +is_quiet_nan (uint32_t x)
> +{
> + return (x & EXP_MANT_MASK) == (EXPONENT_MASK | QUIET_NAN_MASK);
> +}
> +
> +static inline bool
> +is_inf_or_nan (uint32_t x)
> +{
> + return (x & EXPONENT_MASK) == EXPONENT_MASK;
> +}
> +
> +static inline uint16_t
> +get_unbiased_exponent (uint32_t x)
> +{
> + return (x & EXPONENT_MASK) >> MANTISSA_WIDTH;
> +}
> +
> +/* Return mantissa with the implicit bit set iff X is a normal number. */
> +static inline uint32_t
> +get_explicit_mantissa (uint32_t x)
> +{
> + uint32_t p1 = (get_unbiased_exponent (x) > 0 && !is_inf_or_nan (x)
> + ? (MANTISSA_MASK + 1) : 0);
> + uint32_t p2 = (x & MANTISSA_MASK);
> + return p1 | p2;
> +}
>
> I don't think we need this (and anything called by it).
>
> +static inline uint32_t
> +set_mantissa (uint32_t x, uint32_t m)
> +{
> + m &= MANTISSA_MASK;
> + x &= ~(MANTISSA_MASK);
> + return x |= m;
> +}
> +
> +static inline uint32_t
> +get_mantissa (uint32_t x)
> +{
> + return x & MANTISSA_MASK;
> +}
> +
> +static inline uint32_t
> +set_unbiased_exponent (uint32_t x, uint32_t e)
> +{
> + e = (e << MANTISSA_WIDTH) & EXPONENT_MASK;
> + x &= ~(EXPONENT_MASK);
> + return x |= e;
> +}
> +
> +/* Convert integer number X, unbiased exponent EP, and sign S to double:
> +
> + result = X * 2^(EP+1 - exponent_bias)
> +
> + NB: zero is not supported. */
> +static inline double
> +make_float (uint32_t x, int ep, uint32_t s)
> +{
> + int lz = __builtin_clz (x) - EXPONENT_WIDTH;
> + x <<= lz;
> + ep -= lz;
> +
> + uint32_t r = 0;
> + if (__glibc_likely (ep >= 0))
> + {
> + r = set_mantissa (r, x);
> + r = set_unbiased_exponent (r, ep + 1);
> + }
> + else
> + r = set_mantissa (r, x >> -ep);
> +
> + return asfloat (r | s);
> +}
>
> This is overly complex, make_float is as trivial as:
>
> static inline double
> make_float (uint32_t x, int ep, uint32_t s)
> {
> int lz = __builtin_clz (x) - EXPONENT_WIDTH;
> x <<= lz;
> ep -= lz;
>
> if (__glibc_unlikely (ep < 0))
> {
> x >> -ep;
> ep = 0;
> }
> return asfloat (s + x + (ep << MANTISSA_WIDTH));
> }
>
> And now you don't need set_unbiased_exponent and set_mantissa either.
Indeed it is simpler, thanks.
>
> Cheers,
> Wilco
@@ -1,102 +1,146 @@
-/* e_fmodf.c -- float version of e_fmod.c.
- */
-
-/*
- * ====================================================
- * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
- *
- * Developed at SunPro, a Sun Microsystems, Inc. business.
- * Permission to use, copy, modify, and distribute this
- * software is freely granted, provided that this notice
- * is preserved.
- * ====================================================
- */
-
-/*
- * __ieee754_fmodf(x,y)
- * Return x mod y in exact arithmetic
- * Method: shift and subtract
- */
+/* Floating-point remainder function.
+ Copyright (C) 2023 Free Software Foundation, Inc.
+ This file is part of the GNU C Library.
+
+ The GNU C Library is free software; you can redistribute it and/or
+ modify it under the terms of the GNU Lesser General Public
+ License as published by the Free Software Foundation; either
+ version 2.1 of the License, or (at your option) any later version.
+
+ The GNU C Library is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ Lesser General Public License for more details.
+
+ You should have received a copy of the GNU Lesser General Public
+ License along with the GNU C Library; if not, see
+ <https://www.gnu.org/licenses/>. */
-#include <math.h>
-#include <math_private.h>
#include <libm-alias-finite.h>
+#include <math.h>
+#include "math_config.h"
+
+/* With x = mx * 2^ex and y = my * 2^ey (mx, my, ex, ey being integers), the
+ simplest implementation is:
+
+ mx * 2^ex == 2 * mx * 2^(ex - 1)
-static const float one = 1.0, Zero[] = {0.0, -0.0,};
+ while (ex > ey)
+ {
+ mx *= 2;
+ --ex;
+ mx %= my;
+ }
+
+ With mx/my being the mantissas of binary32 floating point numbers, on each
+ step the argument reduction can be improved by 8 (which is the size of
+ uint32_t minus MANTISSA_WIDTH and the sign bit):
+
+ while (ex > ey)
+ {
+ mx <<= 8;
+ ex -= 8;
+ mx %= my;
+ } */
float
__ieee754_fmodf (float x, float y)
{
- int32_t n,hx,hy,hz,ix,iy,sx,i;
-
- GET_FLOAT_WORD(hx,x);
- GET_FLOAT_WORD(hy,y);
- sx = hx&0x80000000; /* sign of x */
- hx ^=sx; /* |x| */
- hy &= 0x7fffffff; /* |y| */
-
- /* purge off exception values */
- if(hy==0||(hx>=0x7f800000)|| /* y=0,or x not finite */
- (hy>0x7f800000)) /* or y is NaN */
- return (x*y)/(x*y);
- if(hx<hy) return x; /* |x|<|y| return x */
- if(hx==hy)
- return Zero[(uint32_t)sx>>31]; /* |x|=|y| return x*0*/
-
- /* determine ix = ilogb(x) */
- if(hx<0x00800000) { /* subnormal x */
- for (ix = -126,i=(hx<<8); i>0; i<<=1) ix -=1;
- } else ix = (hx>>23)-127;
-
- /* determine iy = ilogb(y) */
- if(hy<0x00800000) { /* subnormal y */
- for (iy = -126,i=(hy<<8); i>=0; i<<=1) iy -=1;
- } else iy = (hy>>23)-127;
-
- /* set up {hx,lx}, {hy,ly} and align y to x */
- if(ix >= -126)
- hx = 0x00800000|(0x007fffff&hx);
- else { /* subnormal x, shift x to normal */
- n = -126-ix;
- hx = hx<<n;
- }
- if(iy >= -126)
- hy = 0x00800000|(0x007fffff&hy);
- else { /* subnormal y, shift y to normal */
- n = -126-iy;
- hy = hy<<n;
- }
-
- /* fix point fmod */
- n = ix - iy;
- while(n--) {
- hz=hx-hy;
- if(hz<0){hx = hx+hx;}
- else {
- if(hz==0) /* return sign(x)*0 */
- return Zero[(uint32_t)sx>>31];
- hx = hz+hz;
- }
- }
- hz=hx-hy;
- if(hz>=0) {hx=hz;}
-
- /* convert back to floating value and restore the sign */
- if(hx==0) /* return sign(x)*0 */
- return Zero[(uint32_t)sx>>31];
- while(hx<0x00800000) { /* normalize x */
- hx = hx+hx;
- iy -= 1;
- }
- if(iy>= -126) { /* normalize output */
- hx = ((hx-0x00800000)|((iy+127)<<23));
- SET_FLOAT_WORD(x,hx|sx);
- } else { /* subnormal output */
- n = -126 - iy;
- hx >>= n;
- SET_FLOAT_WORD(x,hx|sx);
- x *= one; /* create necessary signal */
- }
- return x; /* exact output */
+ uint32_t hx = asuint (x);
+ uint32_t hy = asuint (y);
+
+ uint32_t sx = hx & SIGN_MASK;
+ /* Get |x| and |y|. */
+ hx ^= sx;
+ hy &= ~SIGN_MASK;
+
+ /* Special cases:
+ - If x or y is a NaN, a NaN is returned.
+ - If x is an infinity, a NaN is returned.
+ - If y is zero, a NaN is returned.
+ - If x is +0/-0, and y is not zero, +0/-0 is returned. */
+ if (__glibc_unlikely (hy == 0 || hx >= EXPONENT_MASK || hy > EXPONENT_MASK))
+ return (x * y) / (x * y);
+
+ if (__glibc_unlikely (hx <= hy))
+ {
+ if (hx < hy)
+ return x;
+ return sx ? -0.0 : 0.0;
+ }
+
+ int ex = get_unbiased_exponent (hx);
+ int ey = get_unbiased_exponent (hy);
+
+ /* Common case where exponents are close: ey >= -103 and |x/y| < 2^8. */
+ if (__glibc_likely (ey > MANTISSA_WIDTH && ex - ey <= EXPONENT_WIDTH))
+ {
+ uint32_t mx = get_explicit_mantissa (hx);
+ uint32_t my = get_explicit_mantissa (hy);
+
+ uint32_t d = (ex == ey) ? (mx - my) : (mx << (ex - ey)) % my;
+ if (d == 0)
+ return asfloat (sx); /* Zero result keeps the sign of x. */
+ return make_float (d, ey - 1, sx);
+ }
+
+ /* Special case, both x and y are subnormal. */
+ if (__glibc_unlikely (ex == 0 && ey == 0))
+ return asfloat (hx % hy);
+
+ /* Convert |x| and |y| to 'mx * 2^ex' and 'my * 2^ey'. Assume that hx is
+ not subnormal by conditions above. */
+ uint32_t mx = get_explicit_mantissa (hx);
+ ex--;
+
+ uint32_t my = get_explicit_mantissa (hy);
+ int lead_zeros_my = EXPONENT_WIDTH;
+ if (__glibc_likely (ey > 0))
+ ey--;
+ else
+ {
+ my = get_mantissa (hy);
+ lead_zeros_my = __builtin_clz (my);
+ }
+
+ int tail_zeros_my = __builtin_ctz (my);
+ int sides_zeroes = lead_zeros_my + tail_zeros_my;
+ int exp_diff = ex - ey;
+
+ int right_shift = exp_diff < tail_zeros_my ? exp_diff : tail_zeros_my;
+ my >>= right_shift;
+ exp_diff -= right_shift;
+ ey += right_shift;
+
+ int left_shift = exp_diff < EXPONENT_WIDTH ? exp_diff : EXPONENT_WIDTH;
+ mx <<= left_shift;
+ exp_diff -= left_shift;
+
+ mx %= my;
+
+ if (__glibc_unlikely (mx == 0))
+ return sx ? -0.0 : 0.0;
+
+ if (exp_diff == 0)
+ return make_float (mx, ey, sx);
+
+ /* Assume the modulo/divide operation is slow, so use multiplication by a
+ precomputed inverse value instead. */
+ uint32_t inv_hy = UINT32_MAX / my;
+ while (exp_diff > sides_zeroes) {
+ exp_diff -= sides_zeroes;
+ uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - sides_zeroes);
+ mx <<= sides_zeroes;
+ mx -= hd * my;
+ while (__glibc_unlikely (mx > my))
+ mx -= my;
+ }
+ uint32_t hd = (mx * inv_hy) >> (BIT_WIDTH - exp_diff);
+ mx <<= exp_diff;
+ mx -= hd * my;
+ while (__glibc_unlikely (mx > my))
+ mx -= my;
+
+ return make_float (mx, ey, sx);
}
libm_alias_finite (__ieee754_fmodf, __fmodf)
@@ -110,6 +110,95 @@ issignalingf_inline (float x)
return 2 * (ix ^ 0x00400000) > 2 * 0x7fc00000UL;
}
+#define BIT_WIDTH 32
+#define MANTISSA_WIDTH 23
+#define EXPONENT_WIDTH 8
+#define MANTISSA_MASK 0x007fffff
+#define EXPONENT_MASK 0x7f800000
+#define EXP_MANT_MASK 0x7fffffff
+#define QUIET_NAN_MASK 0x00400000
+#define SIGN_MASK 0x80000000
+
+static inline bool
+is_nan (uint32_t x)
+{
+ return (x & EXP_MANT_MASK) > EXPONENT_MASK;
+}
+
+static inline bool
+is_quiet_nan (uint32_t x)
+{
+ return (x & EXP_MANT_MASK) == (EXPONENT_MASK | QUIET_NAN_MASK);
+}
+
+static inline bool
+is_inf_or_nan (uint32_t x)
+{
+ return (x & EXPONENT_MASK) == EXPONENT_MASK;
+}
+
+static inline uint16_t
+get_unbiased_exponent (uint32_t x)
+{
+ return (x & EXPONENT_MASK) >> MANTISSA_WIDTH;
+}
+
+/* Return mantissa with the implicit bit set iff X is a normal number. */
+static inline uint32_t
+get_explicit_mantissa (uint32_t x)
+{
+ uint32_t p1 = (get_unbiased_exponent (x) > 0 && !is_inf_or_nan (x)
+ ? (MANTISSA_MASK + 1) : 0);
+ uint32_t p2 = (x & MANTISSA_MASK);
+ return p1 | p2;
+}
+
+static inline uint32_t
+set_mantissa (uint32_t x, uint32_t m)
+{
+ m &= MANTISSA_MASK;
+ x &= ~(MANTISSA_MASK);
+ return x |= m;
+}
+
+static inline uint32_t
+get_mantissa (uint32_t x)
+{
+ return x & MANTISSA_MASK;
+}
+
+static inline uint32_t
+set_unbiased_exponent (uint32_t x, uint32_t e)
+{
+ e = (e << MANTISSA_WIDTH) & EXPONENT_MASK;
+ x &= ~(EXPONENT_MASK);
+ return x |= e;
+}
+
+/* Convert integer number X, unbiased exponent EP, and sign S to float:
+
+ result = X * 2^(EP+1 - exponent_bias)
+
+ NB: zero is not supported. */
+static inline float
+make_float (uint32_t x, int ep, uint32_t s)
+{
+ int lz = __builtin_clz (x) - EXPONENT_WIDTH;
+ x <<= lz;
+ ep -= lz;
+
+ uint32_t r = 0;
+ if (__glibc_likely (ep >= 0))
+ {
+ r = set_mantissa (r, x);
+ r = set_unbiased_exponent (r, ep + 1);
+ }
+ else
+ r = set_mantissa (r, x >> -ep);
+
+ return asfloat (r | s);
+}
+
#define NOINLINE __attribute__ ((noinline))
attribute_hidden float __math_oflowf (uint32_t);