[RFC] tgmath.h and math/Makefile refactor
Commit Message
These are two separate smaller patches, inching towards the goal of
supporting more FP types in libm.
I spent some time trying to figure out tgmath.h, and how it might look
with support for arbitrary types. It assumes __builtin_types_compatible_p
and statement expressions are supported by GCC (I'm not sure when these were
added). These macros might make a good basis for supporting the
classification macros in math.h.
The potential Makefile changes by comparison are rather self-explanatory.
Comments
On Fri, 3 Jun 2016, Paul E. Murphy wrote:
> I spent some time trying to figure out tgmath.h, and how it might look
> with support for arbitrary types. It assumes __builtin_types_compatible_p
> and Statement Expr's are supported by GCC (I'm not sure when these were
> added). These macros might make a good basis for supporting the
> classification macros in math.h.
Assuming that is not a good idea.
(a) We support use of compilers back to GCC 2.7 with glibc headers;
__builtin_types_compatible_p is more recent than that (GCC 3.1).
(b) Using statement expressions in a macro means it can't be used outside
of function definitions (in sizeof etc.); while it would be desirable to
fix that in GCC, compatibility with existing GCC means it's best not to
add them to these macros if avoidable.
So conditional expressions should be used instead of statement
expressions, and __builtin_types_compatible_p only when supported (you
*can* assume it's supported if __float128 is).
> The potential Makefile changes by comparison are rather self-explanatory.
I don't think the division into ts-18661-calls and others is a good idea
at all. It may make sense to have a separate variable for obsolete
functions that should not be added for new types, but most of the
functions you list in gnu-libm-calls are not obsolete functions, and
several would break the build if you didn't include them for new
floating-point types. e_j0F e_j1F e_jnF should be included for new types
for feature parity with long double; e_rem_pio2F is required for range
reduction for trigonometric functions; w_j0F w_j1F w_jnF go with e_j0F
e_j1F e_jnF; s_sincosF s_clog10F should be included as GNU extensions for
new types just as for existing types; x2y2m1F gamma_productF lgamma_negF
lgamma_productF are parts of the implementations of other functions and
are required for all types where those functions are present.
You need to make an actual proposal regarding which functions present in
the API for long double should or should not have float128 analogues -
going through all functions in glibc with long double in their prototypes,
and classifying them appropriately.
Also, the currently documented minimum GNU make version is 3.79. You're
using a feature, $(eval), that's new in 3.80. If you wish to do so,
you'll need to propose an increase in the minimum version separately.
On Fri, 3 Jun 2016, Joseph Myers wrote:
> I don't think the division into ts-18661-calls and others is a good idea
> at all. It may make sense to have a separate variable for obsolete
> functions that should not be added for new types, but most of the
To expand further on this:
Even where a function should not be exported for new types, there's still
the issue of providing a version of it for long double if you support an
alternative long double type.
For example, say you support long double being binary128 on powerpc64le
(my understanding being that that's considered a desirable subsequent step
after the support for explicit *f128 APIs is done). Then the headers
would remap calls to long double functions to call the *f128 functions
instead[*]. And while for example scalb, taking two floating-point
arguments, is obsolescent and so no scalbf128 explicit API should be
provided, scalbl is still a supported API so should be provided for the
new long double variant - meaning that in that case you do still need to
build a version of scalb for binary128, just export it under a name such
as __scalbf128, not with a public scalbf128 name at API or ABI level.
There are still some files that are never useful for new types even when
those types are used as long double variants, e.g. w_lgamma_compat. But
maybe it's just that one, in which case perhaps it makes more sense just
to have an empty version of that file used for new types rather than
splitting things in the Makefile (and for other files such as scalb that
are conditionally used to have files with appropriate #if conditions
determining whether they generate any code).
[*] Strictly ISO C would require the remapping to go to __*f128 names with
those being exported as well, to stay within the ISO C namespace;
something to consider later.
On 06/06/2016 09:22 AM, Joseph Myers wrote:
> On Fri, 3 Jun 2016, Joseph Myers wrote:
>
>> I don't think the division into ts-18661-calls and others is a good idea
>> at all. It may make sense to have a separate variable for obsolete
>> functions that should not be added for new types, but most of the
>
> To expand further on this:
>
> Even where a function should not be exported for new types, there's still
> the issue of providing a version of it for long double if you support an
> alternative long double type.
>
> For example, say you support long double being binary128 on powerpc64le
> (my understanding being that that's considered a desirable subsequent step
> after the support for explicit *f128 APIs is done). Then the headers
> would remap calls to long double functions to call the *f128 functions
> instead[*]. And while for example scalb, taking two floating-point
> arguments, is obsolescent and so no scalbf128 explicit API should be
> provided, scalbl is still a supported API so should be provided for the
> new long double variant - meaning that in that case you do still need to
> build a version of scalb for binary128, just export it under a name such
> as __scalbf128, not with a public scalbf128 name at API or ABI level.
So we should be asking which functions that exist for the C99 types
should *not* be exported for float128. Looking at the current
ABI, the existing functions fall into a small set of categories:
Functions defined by TS 18661-3 which are already included within
glibc:
acosh acos asinh asin
atan2 atanh atan cabs
cacosh cacos carg casinh
casin catanh catan cbrt
ccosh ccos ceil cexp
cimag clog conj copysign
cosh cos cpow cproj
creal csinh csin csqrt
ctanh ctan erfc erf
exp2 exp expm1 fabs
fdim floor fma fmax
fmin fmod frexp hypot
ilogb ldexp lgamma llrint
llround log10 log1p log2
logb log lrint lround
modf nanf nearbyint nextafter
pow remainder remquo rint
round scalbln scalbn significand
sinh sin sqrt tanh
tan tgamma trunc
Likewise, GNU-specific ABI/API used to support the above, which
should be exported and guarded with _GNU_SOURCE:
exp10 clog10 j0l j1l
jnl lgamma?_r pow10
sincos y0 y1 yn
Likewise, helper functions for the classification macros,
and likely support macros for transitioning long double:
__finite __fpclassify
__signbit __issignaling
Likewise, the following would have a matching
__*_finite ABI:
acosh acos asin atan2
atanh cosh exp10 exp2
exp fmod hypot j0
j1 jn log10 log2
log pow remainder scalb
sinh sqrt y0 y1
yn
Leaving us with a very small set of ABI/API which should
neither be defined nor exported as is, but may be
exposed in some capacity to support format transitions
of existing types:
drem nexttoward scalb finite gamma
> There are still some files that are never useful for new types even when
> those types are used as long double variants, e.g. w_lgamma_compat. But
> maybe it's just that one, in which case perhaps it makes more sense just
> to have an empty version of that file used for new types rather than
> splitting things in the Makefile (and for other files such as scalb that
> are conditionally used to have files with appropriate #if conditions
> determining whether they generate any code).
Assuming there is little objection to the mechanism I've
suggested for adding new types, these outliers can be added
to the type-{double,ldouble,float}-routines variables (the usage
of eval can be trivially worked around).
>
>
> [*] Strictly ISO C would require the remapping to go to __*f128 names with
> those being exported as well, to stay within the ISO C namespace;
> something to consider later.
>
Yes, though I think that work should happen after we get the
initial float128 work done. That transition is going to be
substantially more complex than adding float128.
On Mon, 6 Jun 2016, Paul E. Murphy wrote:
> So should we be asking what functions that exist for the C99 types
> should *not* be exported for float128. Looking at the current
> ABI, the existing functions fall into a small set of categories:
You need to consider libc functions as well.... Some get __float128
versions although not in TS 18661-3 (e.g. strtold_l), some don't (e.g.
qecvt etc. - obsolescent functions that would still need building for a
different long double format).
> Functions defined by TS 18661-3 which are already included within
> glibc:
> round scalbln scalbn significand
significand is not a TS 18661-3 function, and my inclination would be to
consider it among the obsolescent functions not added for new
floating-point types.
> Likewise, GNU specific ABI/API which is used to support the
> above which should be exported and guarded with _GNU_SOURCE:
>
> exp10 clog10 j0l j1l
> jnl lgamma?_r pow10
> sincos y0 y1 yn
I think pow10 should be considered an obsolete name for exp10, and not
exported for new types.
> Likewise, helper functions for the classification macros,
> and likely support macros for transitioning long double:
>
> __finite __fpclassify
> __signbit __issignaling
Plus some others that are in libc rather than libm (__isinf, __isnan).
__finite and __signbit are in both libraries. It's not clear that being
exported from both libraries rather than just one is desirable for new
types.
> Leaving us with a very small set of ABI/API which should
> neither be defined nor exported as is, but may be
> exposed in some capacity to support format transitions
> of existing types:
>
> drem nexttoward scalb finite gamma
Where drem and gamma are obviously aliases for other functions that *are*
exported for the new types, so supporting dreml and gammal for long double
= binary128 doesn't require building extra code (and as aliases, they
don't affect the makefiles either). Likewise finitel (alias for
__finitef128). And indeed nexttowardl (alias for nextafterf128) - but
nexttoward would instead need separate variants for float and double
paired with the new long double type, much like the existing variants for
long double = double. scalbl would involve building new code for the new
type when used as long double, that wouldn't be built when used only as
__float128; likewise significandl, if moved to this group of functions.
(Note that where functions are being considered obsolete, we should make
sure the manual documents them as such and says what API is preferred
instead. As I previously noted, it needs to be made to document
remainder, not drem, as the primary name for that function.)
> > There are still some files that are never useful for new types even when
> > those types are used as long double variants, e.g. w_lgamma_compat. But
> > maybe it's just that one, in which case perhaps it makes more sense just
> > to have an empty version of that file used for new types rather than
> > splitting things in the Makefile (and for other files such as scalb that
> > are conditionally used to have files with appropriate #if conditions
> > determining whether they generate any code).
>
> Assuming there is little objection to the mechanism I've
> suggested for adding new types, these outliers can be added
I'm not convinced the mechanism is sufficiently coherently defined yet. I
think any division of the list of functions into different classes, or
mechanism for sysdeps to add extra types, really needs justifying via
having a subsequent patch that builds on it to add working *f128 support.
(As opposed to refactoring how libm-calls is defined to avoid the special
%_rl -> %l_r hack, or reducing duplication of code between types in
math/Makefile, which can more readily be justified as cleanups on their
own merits.)
> to type-{double,ldouble,float}-routines variable (The usage
The question is not so much can it be done in one way or another, as
what's cleanest.
> of eval can be trivially worked around).
I don't think requiring a GNU make version supporting eval is a *problem*
- a move to 3.81 (probably) as minimum version just needs to be proposed
in a separate thread. (Versions newer than 3.81 would not be suitable to
require yet; it took a long time for make 4.x to be widely adopted, so
that e.g. Ubuntu 14.04 has 3.81.)
On Mon, 2016-06-06 at 22:50 +0000, Joseph Myers wrote:
> On Mon, 6 Jun 2016, Paul E. Murphy wrote:
>
> > So should we be asking what functions that exist for the C99 types
> > should *not* be exported for float128. Looking at the current
> > ABI, the existing functions fall into a small set of categories:
>
> You need to consider libc functions as well.... Some get __float128
> versions although not in TS 18661-3 (e.g. strtold_l), some don't (e.g.
> qecvt etc. - obsolescent functions that would still need building for a
> different long double format).
>
> > Functions defined by TS 18661-3 which are already included within
> > glibc:
>
> > round scalbln scalbn significand
>
> significand is not a TS 18661-3 function, and my inclination would be to
> consider it among the obsolescent functions not added for new
> floating-point types.
>
> > Likewise, GNU specific ABI/API which is used to support the
> > above which should be exported and guarded with _GNU_SOURCE:
> >
> > exp10 clog10 j0l j1l
> > jnl lgamma?_r pow10
> > sincos y0 y1 yn
>
> I think pow10 should be considered an obsolete name for exp10, and not
> exported for new types.
>
> > Likewise, helper functions for the classification macros,
> > and likely support macros for transitioning long double:
> >
> > __finite __fpclassify
> > __signbit __issignaling
>
> Plus some others that are in libc rather than libm (__isinf, __isnan).
>
> __finite and __signbit are in both libraries. It's not clear that being
> exported from both libraries rather than just one is desirable for new
> types.
>
> > Leaving us with a very small set of ABI/API which should
> > neither be defined nor exported as is, but may be
> > exposed in some capacity to support format transitions
> > of existing types:
> >
> > drem nexttoward scalb finite gamma
>
> Where drem and gamma are obviously aliases for other functions that *are*
> exported for the new types, so supporting dreml and gammal for long double
> = binary128 doesn't require building extra code (and as aliases, they
> don't affect the makefiles either). Likewise finitel (alias for
> __finitef128). And indeed nexttowardl (alias for nextafterf128) - but
> nexttoward would instead need separate variants for float and double
> paired with the new long double type, much like the existing variants for
> long double = double. scalbl would involve building new code for the new
> type when used as long double, that wouldn't be built when used only as
> __float128; likewise significandl, if moved to this group of functions.
>
> (Note that where functions are being considered obsolete, we should make
> sure the manual documents them as such and says what API is preferred
> instead. As I previously noted, it needs to be made to document
> remainder, not drem, as the primary name for that function.)
>
> > > There are still some files that are never useful for new types even when
> > > those types are used as long double variants, e.g. w_lgamma_compat. But
> > > maybe it's just that one, in which case perhaps it makes more sense just
> > > to have an empty version of that file used for new types rather than
> > > splitting things in the Makefile (and for other files such as scalb that
> > > are conditionally used to have files with appropriate #if conditions
> > > determining whether they generate any code).
> >
> > Assuming there is little objection to the mechanism I've
> > suggested for adding new types, these outliers can be added
>
> I'm not convinced the mechanism is sufficiently coherently defined yet. I
> think any division of the list of functions into different classes, or
> mechanism for sysdeps to add extra types, really needs justifying via
> having a subsequent patch that builds on it to add working *f128 support.
> (As opposed to refactoring how libm-calls is defined to avoid the special
> %_rl -> %l_r hack, or reducing duplication of code between types in
> math/Makefile, which can more readily be justified as cleanups on their
> own merits.)
I am not sure what you are saying here. The introduction of a new
standard is a complex task requiring interpretation, iteration and time.
Introducing a new type for the existing standards and implementation is
a more straightforward effort.
I don't see a requirement for perfection in these early efforts for
enabling the new standards, because all of our understandings will
evolve.
I also don't see a reason to hold up the effort to enable the addition
of new types (like _float128) and to implement the current API using
that type. The steps required to transition our platform to the new
(IEEE standard) long double are already complex enough.
I would like to see a separation of concerns so that we can work both
efforts in parallel and also involve the larger community in the new
standards effort.
On Tue, 7 Jun 2016, Steven Munroe wrote:
> I am not sure what you are saying here.
A large number of individual, detailed technical comments each of which
should be studied and understood individually (in the context of careful
study of the existing glibc code and relevant standards) and all of which
should influence subsequent iterations of the proposals for APIs and ABIs
to be added and of the patches towards adding those APIs and ABIs (or be
discussed in the community with a view to reaching consensus if people
have specific disagreements with particular technical points). There
isn't a short summary because the patch does many things and so raises
many separate points about those separate things. And some of the things
it does are things that illustrate how a patch is premature because
higher-level consensus is still needed (e.g. on the set of APIs to support
for the new type).
> I would like to see a separation of concerns so that we can work both
> efforts in parallel and also involve the larger community in the new
> standards effort.
My comments include that changes that are simple refactorings relating to
support for the existing types can usefully be separated from those
relating to mechanisms for adding new types. The former can quite likely
be established as desirable cleanups on their own merits and go in without
knowing what the patches to add new types will look like, just like the
series of patches refactoring libm-test.inc (for example). The latter may
be harder to review without seeing the rest of the patch series that ends
up adding the new types.
I expect that the changes from this thread should end up in several
separately submitted patches, each doing just one minimal thing, and then
each of those will need reviewing individually.
On 06/06/2016 05:50 PM, Joseph Myers wrote:
> On Mon, 6 Jun 2016, Paul E. Murphy wrote:
>
>> So should we be asking what functions that exist for the C99 types
>> should *not* be exported for float128. Looking at the current
>> ABI, the existing functions fall into a small set of categories:
>
> You need to consider libc functions as well.... Some get __float128
> versions although not in TS 18661-3 (e.g. strtold_l), some don't (e.g.
> qecvt etc. - obsolescent functions that would still need building for a
> different long double format).
This list is exclusive to libm. I'm more concerned with what should
go in initially for float128. It is much easier to add functions than
remove them. I don't think there is value in enumerating all functions
which won't get a float128 analogue.
By tacit agreement, the community does not object to adding support for
the new types and functions outlined by TS 18661. For practical reasons,
we can only support a subset of it. Likewise, we might support a few
functions outside of TS 18661 where it eases support for standardized
functions, or attempts to resolve a deficiency of the standard.
Transitioning long double presents a unique set of challenges. We
shouldn't make it any more challenging than it needs to be. Any obvious
changes made to ease this work should be done upfront. But we also want
this support in a reasonable timeframe.
>
>> Functions defined by TS 18661-3 which are already included within
>> glibc:
>
>> round scalbln scalbn significand
>
> significand is not a TS 18661-3 function, and my inclination would be to
> consider it among the obsolescent functions not added for new
> floating-point types.
Oops. Yes, that belongs with drem and friends.
>
>> Likewise, GNU specific ABI/API which is used to support the
>> above which should be exported and guarded with _GNU_SOURCE:
>>
>> exp10 clog10 j0l j1l
>> jnl lgamma?_r pow10
>> sincos y0 y1 yn
>
> I think pow10 should be considered an obsolete name for exp10, and not
> exported for new types.
Ok, it moves alongside drem.
>
>> Likewise, helper functions for the classification macros,
>> and likely support macros for transitioning long double:
>>
>> __finite __fpclassify
>> __signbit __issignaling
>
> Plus some others that are in libc rather than libm (__isinf, __isnan).
>
> __finite and __signbit are in both libraries. It's not clear that being
> exported from both libraries rather than just one is desirable for new
> types.
>
Is the intent to avoid linking libm when using the classification
macros on a compiler without a builtin, a side effect of their usage
in std* functions, or something else?
Would there be a problem defining these as static inlined functions
to avoid the ABI?
>> Leaving us with a very small set of ABI/API which should
>> neither be defined nor exported as is, but may be
>> exposed in some capacity to support format transitions
>> of existing types:
>>
>> drem nexttoward scalb finite gamma
>
> Where drem and gamma are obviously aliases for other functions that *are*
> exported for the new types, so supporting dreml and gammal for long double
> = binary128 doesn't require building extra code (and as aliases, they
> don't affect the makefiles either). Likewise finitel (alias for
> __finitef128). And indeed nexttowardl (alias for nextafterf128) - but
> nexttoward would instead need separate variants for float and double
> paired with the new long double type, much like the existing variants for
> long double = double. scalbl would involve building new code for the new
> type when used as long double, that wouldn't be built when used only as
> __float128; likewise significandl, if moved to this group of functions.
>
> (Note that where functions are being considered obsolete, we should make
> sure the manual documents them as such and says what API is preferred
> instead. As I previously noted, it needs to be made to document
> remainder, not drem, as the primary name for that function.)
Yes, those will need to be addressed when transitioning the long double
type. The aliasing looks like it will get messy. Thinking out loud
here, is it possible to isolate the mapping of local symbols to
public symbols in their own file/header?
Ideally, libc additions should be minimal, for each type:
From TS 18661-3:
strto strfrom
From GNU, keeping symmetry with the above:
strfrom*_l strto*_l wcsto wcsto*_l
wcsfrom wcsfrom*_l
Depending on what is possible, the support ABI for the
classification macros may end up in libc too.
Likewise, after the next round of feedback, I will reply to
https://sourceware.org/ml/libc-alpha/2016-05/msg00090.html with
the updated list.
On Wed, 8 Jun 2016, Paul E. Murphy wrote:
> By tacit agreement, the community does not object to adding support for
> new types and functioned outlined by TS 18661. For practical reasons,
Subject to consensus on the precise set to be added - this does not mean
arbitrary subsets should be added in isolation. Hence the attempt to
define things in terms of all the TS 18661-3 interfaces for _Float128
(functions and others) where glibc has the corresponding interfaces for
other types, plus analogues of other glibc interfaces to be determined
case-by-case.
> > Plus some others that are in libc rather than libm (__isinf, __isnan).
> >
> > __finite and __signbit are in both libraries. It's not clear that being
> > exported from both libraries rather than just one is desirable for new
> > types.
> >
>
> Is the intent to avoid linking libm when using the classification
> macros on a compiler without a builtin, a side effect of their usage
> in std* functions, or something else?
I think some functions are included in libc because of use elsewhere in
libc (e.g. in stdio) of classification macros. It's possible the
underlying functions are not all now used in libc since we moved to using
__builtin_* for the classification macros where possible in the absence of
-fsignaling-nans and moved to using those macros in glibc code instead of
calling __* directly (but it's also possible we'll find a need in future
to build bits of libc with -fsignaling-nans).
I don't think there's any goal to avoid linking libm for user code using
these macros.
Exporting functions from multiple libraries is confusing and wasteful of
space, hence the question of whether new functions like that should go
only in one library (and if so, which).
> Would there be a problem defining these as static inlined functions
> to avoid the ABI?
None of the macros used in their existing implementations are in the
implementation namespace, nor the structures used, and you'd have to deal
with endianness issues in defining such functions/structures as well as
the namespace issues. It seems safer to have corresponding underlying
functions.
> Yes, those will need to be addressed when transitioning the long double
> type. The aliasing appears like it will get messy. Thinking out loud
> here, is it possible to isolate the mapping of the local symbols to
> public symbols to their own file/header?
Mostly, the macros used in math.h when including bits/mathcalls.h should
be adaptable to handle rules like "redirect *l to __*f128", and similarly
for complex.h and bits/math-finite.h. One design question is how to
arrange the bits/*.h headers used by architectures to describe things like
this about how the main API headers should behave (regarding what types
are available and what renamings should be done).
> Ideally, libc additions should be minimal, for each type:
>
> From TS 18661-3:
>
> strto strfrom
>
> From GNU, keeping symmetry with the above:
>
> strfrom*_l strto*_l wcsto wcsto*_l
> wcsfrom wcsfrom*_l
That seems plausible (at the API level). *[efg]*cvt* are explicitly
excluded as obsolescent.
At the ABI level, various implementation-namespace function versions are
exported (with, unfortunately, a wildcard __strto*_internal in
stdlib/Versions and a similar one in wcsmbs/Versions; we should eliminate
such wildcards to avoid accidentally adding new functions to old
versions). Those include __strtold_internal __wcstold_internal
__strtold_l __wcstold_l and others for other types. Unless there is a
current use for such exports (used in inline functions or macros in
installed headers, used in libstdc++ for namespace reasons) it would be
best to make the existing exports of such functions into compat symbols
and then not export them for new types. (That does mean the strto*
grouping functionality would not be available for new types, since it's
only available through the _internal interfaces and through scanf.)
From 49154ceabd6b421c3a004ce1bcfaabfbefa1edaa Mon Sep 17 00:00:00 2001
From: "Paul E. Murphy" <murphyp@linux.vnet.ibm.com>
Date: Fri, 3 Jun 2016 15:44:05 -0500
Subject: [PATCH 2/2] [RFC] Rewrite tgmath.h helper macros
To support more FP types, these helper macros need to be made
extensible. This is ugly. It assumes __builtin_types_compatible_p()
exists even on much older GCC versions.
---
math/tgmath.h | 240 +++++++++++++++++-----------------------------------------
1 file changed, 71 insertions(+), 169 deletions(-)
@@ -66,179 +66,81 @@
__tgmath_real_type_sub (__typeof__ ((__typeof__ (expr)) 0), \
__floating_type (__typeof__ (expr)))
+/* Type promotion macros. */
+# define __TGT1(v) __tgmath_real_type(v)
+# define __TGT2(v1, v2) __typeof ((__TGT1 (v1)) 0 + (__TGT1 (v2)) 0)
+# define __TGT3(v1, v2, v3) __typeof ((__TGT2 (v1, v2)) 0 + (__TGT1 (v3)) 0)
+
+# define __TGCMPLXTYPE(t) __typeof ((__TGT1 (t)) 0 + _Complex_I)
+# define __TGREALTYPE(t) __typeof (__real__ (__TGT1 (t)) 0)
+# define __TGISCMPLX(v) (sizeof (v) != sizeof (__real__ (v)))
+
+# define __TGTTEMP(TYPE,FUNC,type,...) \
+ else if (__builtin_types_compatible_p (type, TYPE)) \
+ _ret = FUNC (__VA_ARGS__);
+
+# define __TG_ELIF_TYPE(M, FUNC, type, ...) \
+ __TGTTEMP (M long double, __tgml (FUNC), type, __VA_ARGS__) \
+ __TGTTEMP (M double, FUNC, type, __VA_ARGS__) \
+ __TGTTEMP (M float, FUNC ## f, type, __VA_ARGS__)
+
+# define __TGTEMP_BASE(M, it, rt, f, ...) \
+ ({ \
+ __typeof__ (rt) _ret = 0; \
+ if(0) \
+ {} \
+ __TG_ELIF_TYPE (M, f, it, __VA_ARGS__) \
+ else \
+ _ret = f (__VA_ARGS__); \
+ _ret; \
+ })
+
+# define __TGTEMP_REAL(it, rt, f, ...) \
+ __TGTEMP_BASE (, it, rt, f, __VA_ARGS__)
+
+# define __TGTEMP_REAL_CMPLX(it, rt, f, cf, ...) \
+ ({ \
+ __typeof__ (rt) _ret = 0; \
+ if (__TGISCMPLX ((it) 0)) \
+ _ret = __TGTEMP_BASE (_Complex, it, rt, cf, __VA_ARGS__); \
+ else \
+ _ret = __TGTEMP_BASE (, it, rt, f, __VA_ARGS__); \
+ _ret; \
+ })
/* We have two kinds of generic macros: to support functions which are
only defined on real valued parameters and those which are defined
for complex functions as well. */
-# define __TGMATH_UNARY_REAL_ONLY(Val, Fct) \
- (__extension__ ((sizeof (Val) == sizeof (double) \
- || __builtin_classify_type (Val) != 8) \
- ? (__tgmath_real_type (Val)) Fct (Val) \
- : (sizeof (Val) == sizeof (float)) \
- ? (__tgmath_real_type (Val)) Fct##f (Val) \
- : (__tgmath_real_type (Val)) __tgml(Fct) (Val)))
-
-# define __TGMATH_UNARY_REAL_RET_ONLY(Val, RetType, Fct) \
- (__extension__ ((sizeof (Val) == sizeof (double) \
- || __builtin_classify_type (Val) != 8) \
- ? (RetType) Fct (Val) \
- : (sizeof (Val) == sizeof (float)) \
- ? (RetType) Fct##f (Val) \
- : (RetType) __tgml(Fct) (Val)))
-
-# define __TGMATH_BINARY_FIRST_REAL_ONLY(Val1, Val2, Fct) \
- (__extension__ ((sizeof (Val1) == sizeof (double) \
- || __builtin_classify_type (Val1) != 8) \
- ? (__tgmath_real_type (Val1)) Fct (Val1, Val2) \
- : (sizeof (Val1) == sizeof (float)) \
- ? (__tgmath_real_type (Val1)) Fct##f (Val1, Val2) \
- : (__tgmath_real_type (Val1)) __tgml(Fct) (Val1, Val2)))
-
-# define __TGMATH_BINARY_REAL_ONLY(Val1, Val2, Fct) \
- (__extension__ (((sizeof (Val1) > sizeof (double) \
- || sizeof (Val2) > sizeof (double)) \
- && __builtin_classify_type ((Val1) + (Val2)) == 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- __tgml(Fct) (Val1, Val2) \
- : (sizeof (Val1) == sizeof (double) \
- || sizeof (Val2) == sizeof (double) \
- || __builtin_classify_type (Val1) != 8 \
- || __builtin_classify_type (Val2) != 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct (Val1, Val2) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct##f (Val1, Val2)))
-
-# define __TGMATH_TERNARY_FIRST_SECOND_REAL_ONLY(Val1, Val2, Val3, Fct) \
- (__extension__ (((sizeof (Val1) > sizeof (double) \
- || sizeof (Val2) > sizeof (double)) \
- && __builtin_classify_type ((Val1) + (Val2)) == 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- __tgml(Fct) (Val1, Val2, Val3) \
- : (sizeof (Val1) == sizeof (double) \
- || sizeof (Val2) == sizeof (double) \
- || __builtin_classify_type (Val1) != 8 \
- || __builtin_classify_type (Val2) != 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct (Val1, Val2, Val3) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct##f (Val1, Val2, Val3)))
-
-# define __TGMATH_TERNARY_REAL_ONLY(Val1, Val2, Val3, Fct) \
- (__extension__ (((sizeof (Val1) > sizeof (double) \
- || sizeof (Val2) > sizeof (double) \
- || sizeof (Val3) > sizeof (double)) \
- && __builtin_classify_type ((Val1) + (Val2) + (Val3)) \
- == 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0 \
- + (__tgmath_real_type (Val3)) 0)) \
- __tgml(Fct) (Val1, Val2, Val3) \
- : (sizeof (Val1) == sizeof (double) \
- || sizeof (Val2) == sizeof (double) \
- || sizeof (Val3) == sizeof (double) \
- || __builtin_classify_type (Val1) != 8 \
- || __builtin_classify_type (Val2) != 8 \
- || __builtin_classify_type (Val3) != 8) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0 \
- + (__tgmath_real_type (Val3)) 0)) \
- Fct (Val1, Val2, Val3) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0 \
- + (__tgmath_real_type (Val3)) 0)) \
- Fct##f (Val1, Val2, Val3)))
-
-/* XXX This definition has to be changed as soon as the compiler understands
- the imaginary keyword. */
-# define __TGMATH_UNARY_REAL_IMAG(Val, Fct, Cfct) \
- (__extension__ ((sizeof (__real__ (Val)) == sizeof (double) \
- || __builtin_classify_type (__real__ (Val)) != 8) \
- ? ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__tgmath_real_type (Val)) Fct (Val) \
- : (__tgmath_real_type (Val)) Cfct (Val)) \
- : (sizeof (__real__ (Val)) == sizeof (float)) \
- ? ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__tgmath_real_type (Val)) Fct##f (Val) \
- : (__tgmath_real_type (Val)) Cfct##f (Val)) \
- : ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__tgmath_real_type (Val)) __tgml(Fct) (Val) \
- : (__tgmath_real_type (Val)) __tgml(Cfct) (Val))))
-
-# define __TGMATH_UNARY_IMAG(Val, Cfct) \
- (__extension__ ((sizeof (__real__ (Val)) == sizeof (double) \
- || __builtin_classify_type (__real__ (Val)) != 8) \
- ? (__typeof__ ((__tgmath_real_type (Val)) 0 \
- + _Complex_I)) Cfct (Val) \
- : (sizeof (__real__ (Val)) == sizeof (float)) \
- ? (__typeof__ ((__tgmath_real_type (Val)) 0 \
- + _Complex_I)) Cfct##f (Val) \
- : (__typeof__ ((__tgmath_real_type (Val)) 0 \
- + _Complex_I)) __tgml(Cfct) (Val)))
-
-/* XXX This definition has to be changed as soon as the compiler understands
- the imaginary keyword. */
-# define __TGMATH_UNARY_REAL_IMAG_RET_REAL(Val, Fct, Cfct) \
- (__extension__ ((sizeof (__real__ (Val)) == sizeof (double) \
- || __builtin_classify_type (__real__ (Val)) != 8) \
- ? ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- Fct (Val) \
- : (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- Cfct (Val)) \
- : (sizeof (__real__ (Val)) == sizeof (float)) \
- ? ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- Fct##f (Val) \
- : (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- Cfct##f (Val)) \
- : ((sizeof (__real__ (Val)) == sizeof (Val)) \
- ? (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- __tgml(Fct) (Val) \
- : (__typeof__ (__real__ (__tgmath_real_type (Val)) 0))\
- __tgml(Cfct) (Val))))
-
-/* XXX This definition has to be changed as soon as the compiler understands
- the imaginary keyword. */
-# define __TGMATH_BINARY_REAL_IMAG(Val1, Val2, Fct, Cfct) \
- (__extension__ (((sizeof (__real__ (Val1)) > sizeof (double) \
- || sizeof (__real__ (Val2)) > sizeof (double)) \
- && __builtin_classify_type (__real__ (Val1) \
- + __real__ (Val2)) == 8) \
- ? ((sizeof (__real__ (Val1)) == sizeof (Val1) \
- && sizeof (__real__ (Val2)) == sizeof (Val2)) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- __tgml(Fct) (Val1, Val2) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- __tgml(Cfct) (Val1, Val2)) \
- : (sizeof (__real__ (Val1)) == sizeof (double) \
- || sizeof (__real__ (Val2)) == sizeof (double) \
- || __builtin_classify_type (__real__ (Val1)) != 8 \
- || __builtin_classify_type (__real__ (Val2)) != 8) \
- ? ((sizeof (__real__ (Val1)) == sizeof (Val1) \
- && sizeof (__real__ (Val2)) == sizeof (Val2)) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct (Val1, Val2) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Cfct (Val1, Val2)) \
- : ((sizeof (__real__ (Val1)) == sizeof (Val1) \
- && sizeof (__real__ (Val2)) == sizeof (Val2)) \
- ? (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Fct##f (Val1, Val2) \
- : (__typeof ((__tgmath_real_type (Val1)) 0 \
- + (__tgmath_real_type (Val2)) 0)) \
- Cfct##f (Val1, Val2))))
+# define __TGMATH_UNARY_REAL_ONLY(V, Fct) \
+ __TGTEMP_REAL (__TGT1 (V), __TGT1 (V), Fct, V)
+
+# define __TGMATH_UNARY_REAL_RET_ONLY(V, RetType, Fct) \
+ __TGTEMP_REAL (__TGT1 (V), RetType, Fct, V)
+
+# define __TGMATH_BINARY_FIRST_REAL_ONLY(V1, V2, Fct) \
+ __TGTEMP_REAL (__TGT1 (V1), __TGT1 (V1), Fct, V1, V2)
+
+# define __TGMATH_BINARY_REAL_ONLY(V1, V2, Fct) \
+ __TGTEMP_REAL (__TGT2 (V1, V2), __TGT2 (V1, V2), Fct, V1, V2)
+
+# define __TGMATH_TERNARY_FIRST_SECOND_REAL_ONLY(V1, V2, V3, Fct) \
+ __TGTEMP_REAL (__TGT2 (V1, V2), __TGT2 (V1, V2), Fct, V1, V2, V3)
+
+# define __TGMATH_TERNARY_REAL_ONLY(V1, V2, V3, F) \
+ __TGTEMP_REAL (__TGT3 (V1, V2, V3), __TGT3 (V1, V2, V3), F, V1, V2, V3)
+
+# define __TGMATH_UNARY_REAL_IMAG(V, Fct, Cfct) \
+ __TGTEMP_REAL_CMPLX (__TGT1 (V), __TGT1 (V), Fct, Cfct, V)
+
+# define __TGMATH_UNARY_IMAG(V, Cfct) \
+ __TGTEMP_REAL_CMPLX (__TGT1 (V), __TGCMPLXTYPE (V), Cfct, Cfct, V)
+
+# define __TGMATH_UNARY_REAL_IMAG_RET_REAL(V, Fct, Cfct) \
+ __TGTEMP_REAL_CMPLX (__TGT1 (V), __TGREALTYPE (V), Fct, Cfct, V)
+
+# define __TGMATH_BINARY_REAL_IMAG(V1, V2, F, CF) \
+ __TGTEMP_REAL_CMPLX (__TGT2 (V1, V2), __TGT2 (V1, V2), F, CF, V1, V2)
+
#else
# error "Unsupported compiler; you cannot use <tgmath.h>"
#endif
--
2.4.11