From patchwork Mon Aug 15 10:12:02 2022
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 56740
Date: Mon, 15 Aug 2022 12:12:02 +0200
From: Jakub Jelinek
To: Richard Biener, "Joseph S. Myers", Jeff Law
Cc: gcc-patches@gcc.gnu.org, FX
Subject: [PATCH] Implement __builtin_issignaling

Hi!

The following patch implements a new builtin, __builtin_issignaling,
which can be used to implement the ISO/IEC TS 18661-1 issignaling
macro.  It is implemented as a type-generic function, so there is just
one builtin, not many variants with various suffixes.
This patch doesn't address PR56831 nor PR58416, but I think that,
compared to using the glibc issignaling macro, it could make some cases
better (the builtin is always expanded inline, and for SFmode/DFmode it
just reinterprets a memory or pseudo register as SImode/DImode, so it
can avoid raising an exception and turning the sNaN into a qNaN before
the builtin gets to analyze it).

For floating point modes that do not have NaNs it will return 0;
otherwise I've tried to implement this for all the other supported real
formats.  It handles both the MIPS/PA floats, where a sNaN has the
mantissa MSB set, and the rest, where a sNaN has it cleared, with the
exception of formats which are known never to use the MIPS/PA form.
The MIPS/PA floats are handled using a test like (x & mask) == mask,
the others usually as ((x ^ bit) & mask) > val, where bit, mask and val
are some constants.  IBM double double is done by performing the DFmode
test on the most significant half, and Intel/Motorola extended (12 or
16 bytes) and IEEE quad are handled by extracting 32-bit/16-bit words
or 64-bit parts from the value and testing those.  On x86, XFmode is
handled by a special optab so that even pseudo numbers are considered
signaling, like in glibc and as the i386 specific testcase tests.

Bootstrapped/regtested on x86_64-linux, i686-linux, powerpc64le-linux
and powerpc64-linux (the last tested with -m32/-m64), ok for trunk?

2022-08-15  Jakub Jelinek

gcc/
	* builtins.def (BUILT_IN_ISSIGNALING): New built-in.
	* builtins.cc (expand_builtin_issignaling): New function.
	(expand_builtin_signbit): Don't overwrite target.
	(expand_builtin): Handle BUILT_IN_ISSIGNALING.
	(fold_builtin_classify): Likewise.
	(fold_builtin_1): Likewise.
	* optabs.def (issignaling_optab): New.
	* fold-const-call.cc (fold_const_call_ss): Handle
	BUILT_IN_ISSIGNALING.
	* config/i386/i386.md (issignalingxf2): New expander.
	* doc/extend.texi (__builtin_issignaling): Document.
	* doc/md.texi (issignaling2): Likewise.
gcc/c-family/
	* c-common.cc (check_builtin_function_arguments): Handle
	BUILT_IN_ISSIGNALING.
gcc/c/
	* c-typeck.cc (convert_arguments): Handle BUILT_IN_ISSIGNALING.
gcc/fortran/
	* f95-lang.cc (gfc_init_builtin_functions): Initialize
	BUILT_IN_ISSIGNALING.
gcc/testsuite/
	* gcc.dg/torture/builtin-issignaling-1.c: New test.
	* gcc.dg/torture/builtin-issignaling-2.c: New test.
	* gcc.target/i386/builtin-issignaling-1.c: New test.

	Jakub

--- gcc/builtins.def.jj	2022-01-11 23:11:21.548301986 +0100
+++ gcc/builtins.def	2022-08-11 12:15:14.200908656 +0200
@@ -908,6 +908,7 @@ DEF_GCC_BUILTIN (BUILT_IN_ISLESS,
 DEF_GCC_BUILTIN (BUILT_IN_ISLESSEQUAL, "islessequal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_GCC_BUILTIN (BUILT_IN_ISLESSGREATER, "islessgreater", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_GCC_BUILTIN (BUILT_IN_ISUNORDERED, "isunordered", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN (BUILT_IN_ISSIGNALING, "issignaling", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_LIB_BUILTIN (BUILT_IN_LABS, "labs", BT_FN_LONG_LONG, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_C99_BUILTIN (BUILT_IN_LLABS, "llabs", BT_FN_LONGLONG_LONGLONG, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_GCC_BUILTIN (BUILT_IN_LONGJMP, "longjmp", BT_FN_VOID_PTR_INT, ATTR_NORETURN_NOTHROW_LIST)
--- gcc/builtins.cc.jj	2022-07-26 10:32:23.250277352 +0200
+++ gcc/builtins.cc	2022-08-12 17:13:06.158423558 +0200
@@ -123,6 +123,7 @@ static rtx expand_builtin_fegetround (tr
 static rtx expand_builtin_feclear_feraise_except (tree, rtx, machine_mode,
                                                   optab);
 static rtx expand_builtin_cexpi (tree, rtx);
+static rtx expand_builtin_issignaling (tree, rtx);
 static rtx expand_builtin_int_roundingfn (tree, rtx);
 static rtx expand_builtin_int_roundingfn_2 (tree, rtx);
 static rtx expand_builtin_next_arg (void);
@@ -2747,6 +2748,294 @@ build_call_nofold_loc (location_t loc, t
   return fn;
 }

+/* Expand the __builtin_issignaling builtin.
+   This needs to handle
+   all floating point formats that do support NaNs (for those that
+   don't it just sets target to 0).  */
+
+static rtx
+expand_builtin_issignaling (tree exp, rtx target)
+{
+  if (!validate_arglist (exp, REAL_TYPE, VOID_TYPE))
+    return NULL_RTX;
+
+  tree arg = CALL_EXPR_ARG (exp, 0);
+  scalar_float_mode fmode = SCALAR_FLOAT_TYPE_MODE (TREE_TYPE (arg));
+  const struct real_format *fmt = REAL_MODE_FORMAT (fmode);
+
+  /* Expand the argument yielding a RTX expression.  */
+  rtx temp = expand_normal (arg);
+
+  /* If mode doesn't support NaN, always return 0.  */
+  if (!HONOR_NANS (fmode))
+    {
+      emit_move_insn (target, const0_rtx);
+      return target;
+    }
+
+  /* Check if the back end provides an insn that handles issignaling for the
+     argument's mode.  */
+  enum insn_code icode = optab_handler (issignaling_optab, fmode);
+  if (icode != CODE_FOR_nothing)
+    {
+      rtx_insn *last = get_last_insn ();
+      rtx this_target = gen_reg_rtx (TYPE_MODE (TREE_TYPE (exp)));
+      if (maybe_emit_unop_insn (icode, this_target, temp, UNKNOWN))
+        return this_target;
+      delete_insns_since (last);
+    }
+
+  if (DECIMAL_FLOAT_MODE_P (fmode))
+    {
+      scalar_int_mode imode;
+      rtx hi;
+      switch (fmt->ieee_bits)
+        {
+        case 32:
+        case 64:
+          imode = int_mode_for_mode (fmode).require ();
+          temp = gen_lowpart (imode, temp);
+          break;
+        case 128:
+          imode = int_mode_for_size (64, 1).require ();
+          hi = NULL_RTX;
+          /* For decimal128, TImode support isn't always there and even when
+             it is, working on the DImode high part is usually better.
+             */
+          if (!MEM_P (temp))
+            {
+              if (rtx t = simplify_gen_subreg (imode, temp, fmode,
+                                               subreg_highpart_offset (imode,
+                                                                       fmode)))
+                hi = t;
+              else
+                {
+                  scalar_int_mode imode2;
+                  if (int_mode_for_mode (fmode).exists (&imode2))
+                    {
+                      rtx temp2 = gen_lowpart (imode2, temp);
+                      poly_uint64 off = subreg_highpart_offset (imode, imode2);
+                      if (rtx t = simplify_gen_subreg (imode, temp2,
+                                                       imode2, off))
+                        hi = t;
+                    }
+                }
+              if (!hi)
+                {
+                  rtx mem = assign_stack_temp (fmode, GET_MODE_SIZE (fmode));
+                  emit_move_insn (mem, temp);
+                  temp = mem;
+                }
+            }
+          if (!hi)
+            {
+              poly_int64 offset
+                = subreg_highpart_offset (imode, GET_MODE (temp));
+              hi = adjust_address (temp, imode, offset);
+            }
+          temp = hi;
+          break;
+        default:
+          gcc_unreachable ();
+        }
+      /* In all of decimal{32,64,128}, there is MSB sign bit and sNaN
+         have 6 bits below it all set.  */
+      rtx val
+        = GEN_INT (HOST_WIDE_INT_C (0x3f) << (GET_MODE_BITSIZE (imode) - 7));
+      temp = expand_binop (imode, and_optab, temp, val,
+                           NULL_RTX, 1, OPTAB_LIB_WIDEN);
+      temp = emit_store_flag_force (target, EQ, temp, val, imode, 1, 1);
+      return temp;
+    }
+
+  /* Only PDP11 has these defined differently but doesn't support NaNs.  */
+  gcc_assert (FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN);
+  gcc_assert (fmt->signbit_ro > 0 && fmt->b == 2);
+  gcc_assert (MODE_COMPOSITE_P (fmode)
+              || (fmt->pnan == fmt->p
+                  && fmt->signbit_ro == fmt->signbit_rw));
+
+  switch (fmt->p)
+    {
+    case 106:	/* IBM double double */
+      /* For IBM double double, recurse on the most significant double.
+         */
+      gcc_assert (MODE_COMPOSITE_P (fmode));
+      temp = convert_modes (DFmode, fmode, temp, 0);
+      fmode = DFmode;
+      fmt = REAL_MODE_FORMAT (DFmode);
+      /* FALLTHRU */
+    case 8:	/* bfloat */
+    case 11:	/* IEEE half */
+    case 24:	/* IEEE single */
+    case 53:	/* IEEE double or Intel extended with rounding to double */
+      if (fmt->p == 53 && fmt->signbit_ro == 79)
+        goto extended;
+      {
+        scalar_int_mode imode = int_mode_for_mode (fmode).require ();
+        temp = gen_lowpart (imode, temp);
+        rtx val = GEN_INT ((HOST_WIDE_INT_M1U << (fmt->p - 2))
+                           & ~(HOST_WIDE_INT_M1U << fmt->signbit_ro));
+        if (fmt->qnan_msb_set)
+          {
+            rtx mask = GEN_INT (~(HOST_WIDE_INT_M1U << fmt->signbit_ro));
+            rtx bit = GEN_INT (HOST_WIDE_INT_1U << (fmt->p - 2));
+            /* For non-MIPS/PA IEEE single/double/half or bfloat, expand to:
+               ((temp ^ bit) & mask) > val.  */
+            temp = expand_binop (imode, xor_optab, temp, bit,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = expand_binop (imode, and_optab, temp, mask,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = emit_store_flag_force (target, GTU, temp, val, imode,
+                                          1, 1);
+          }
+        else
+          {
+            /* For MIPS/PA IEEE single/double, expand to:
+               (temp & val) == val.  */
+            temp = expand_binop (imode, and_optab, temp, val,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = emit_store_flag_force (target, EQ, temp, val, imode,
+                                          1, 1);
+          }
+      }
+      break;
+    case 113:	/* IEEE quad */
+      {
+        rtx hi = NULL_RTX, lo = NULL_RTX;
+        scalar_int_mode imode = int_mode_for_size (64, 1).require ();
+        /* For IEEE quad, TImode support isn't always there and even when
+           it is, working on DImode parts is usually better.
+           */
+        if (!MEM_P (temp))
+          {
+            hi = simplify_gen_subreg (imode, temp, fmode,
+                                      subreg_highpart_offset (imode, fmode));
+            lo = simplify_gen_subreg (imode, temp, fmode,
+                                      subreg_lowpart_offset (imode, fmode));
+            if (!hi || !lo)
+              {
+                scalar_int_mode imode2;
+                if (int_mode_for_mode (fmode).exists (&imode2))
+                  {
+                    rtx temp2 = gen_lowpart (imode2, temp);
+                    hi = simplify_gen_subreg (imode, temp2, imode2,
+                                              subreg_highpart_offset (imode,
+                                                                      imode2));
+                    lo = simplify_gen_subreg (imode, temp2, imode2,
+                                              subreg_lowpart_offset (imode,
+                                                                     imode2));
+                  }
+              }
+            if (!hi || !lo)
+              {
+                rtx mem = assign_stack_temp (fmode, GET_MODE_SIZE (fmode));
+                emit_move_insn (mem, temp);
+                temp = mem;
+              }
+          }
+        if (!hi || !lo)
+          {
+            poly_int64 offset
+              = subreg_highpart_offset (imode, GET_MODE (temp));
+            hi = adjust_address (temp, imode, offset);
+            offset = subreg_lowpart_offset (imode, GET_MODE (temp));
+            lo = adjust_address (temp, imode, offset);
+          }
+        rtx val = GEN_INT ((HOST_WIDE_INT_M1U << (fmt->p - 2 - 64))
+                           & ~(HOST_WIDE_INT_M1U << (fmt->signbit_ro - 64)));
+        if (fmt->qnan_msb_set)
+          {
+            rtx mask = GEN_INT (~(HOST_WIDE_INT_M1U << (fmt->signbit_ro
+                                                        - 64)));
+            rtx bit = GEN_INT (HOST_WIDE_INT_1U << (fmt->p - 2 - 64));
+            /* For non-MIPS/PA IEEE quad, expand to:
+               (((hi ^ bit) | ((lo | -lo) >> 63)) & mask) > val.  */
+            rtx nlo = expand_unop (imode, neg_optab, lo, NULL_RTX, 0);
+            lo = expand_binop (imode, ior_optab, lo, nlo,
+                               NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            lo = expand_shift (RSHIFT_EXPR, imode, lo, 63, NULL_RTX, 1);
+            temp = expand_binop (imode, xor_optab, hi, bit,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = expand_binop (imode, ior_optab, temp, lo,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = expand_binop (imode, and_optab, temp, mask,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = emit_store_flag_force (target, GTU, temp, val, imode,
+                                          1, 1);
+          }
+        else
+          {
+            /* For MIPS/PA IEEE quad, expand to:
+               (hi & val) == val.
+               */
+            temp = expand_binop (imode, and_optab, hi, val,
+                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
+            temp = emit_store_flag_force (target, EQ, temp, val, imode,
+                                          1, 1);
+          }
+      }
+      break;
+    case 64:	/* Intel or Motorola extended */
+    extended:
+      {
+        rtx ex, hi, lo;
+        scalar_int_mode imode = int_mode_for_size (32, 1).require ();
+        scalar_int_mode iemode = int_mode_for_size (16, 1).require ();
+        if (!MEM_P (temp))
+          {
+            rtx mem = assign_stack_temp (fmode, GET_MODE_SIZE (fmode));
+            emit_move_insn (mem, temp);
+            temp = mem;
+          }
+        if (fmt->signbit_ro == 95)
+          {
+            /* Motorola, always big endian, with 16-bit gap in between
+               16-bit sign+exponent and 64-bit mantissa.  */
+            ex = adjust_address (temp, iemode, 0);
+            hi = adjust_address (temp, imode, 4);
+            lo = adjust_address (temp, imode, 8);
+          }
+        else if (!WORDS_BIG_ENDIAN)
+          {
+            /* Intel little endian, 64-bit mantissa followed by 16-bit
+               sign+exponent and then either 16 or 48 bits of gap.  */
+            ex = adjust_address (temp, iemode, 8);
+            hi = adjust_address (temp, imode, 4);
+            lo = adjust_address (temp, imode, 0);
+          }
+        else
+          {
+            /* Big endian Itanium.  */
+            ex = adjust_address (temp, iemode, 0);
+            hi = adjust_address (temp, imode, 2);
+            lo = adjust_address (temp, imode, 6);
+          }
+        rtx val = GEN_INT (HOST_WIDE_INT_M1U << 30);
+        gcc_assert (fmt->qnan_msb_set);
+        rtx mask = GEN_INT (0x7fff);
+        rtx bit = GEN_INT (HOST_WIDE_INT_1U << 30);
+        /* For Intel/Motorola extended format, expand to:
+           (ex & mask) == mask && ((hi ^ bit) | ((lo | -lo) >> 31)) > val.
+           */
+        rtx nlo = expand_unop (imode, neg_optab, lo, NULL_RTX, 0);
+        lo = expand_binop (imode, ior_optab, lo, nlo,
+                           NULL_RTX, 1, OPTAB_LIB_WIDEN);
+        lo = expand_shift (RSHIFT_EXPR, imode, lo, 31, NULL_RTX, 1);
+        temp = expand_binop (imode, xor_optab, hi, bit,
+                             NULL_RTX, 1, OPTAB_LIB_WIDEN);
+        temp = expand_binop (imode, ior_optab, temp, lo,
+                             NULL_RTX, 1, OPTAB_LIB_WIDEN);
+        temp = emit_store_flag_force (target, GTU, temp, val, imode, 1, 1);
+        ex = expand_binop (iemode, and_optab, ex, mask,
+                           NULL_RTX, 1, OPTAB_LIB_WIDEN);
+        ex = emit_store_flag_force (gen_reg_rtx (GET_MODE (temp)), EQ,
+                                    ex, mask, iemode, 1, 1);
+        temp = expand_binop (GET_MODE (temp), and_optab, temp, ex,
+                             NULL_RTX, 1, OPTAB_LIB_WIDEN);
+      }
+      break;
+    default:
+      gcc_unreachable ();
+    }
+
+  return temp;
+}
+
 /* Expand a call to one of the builtin rounding functions gcc defines
    as an extension (lfloor and lceil).  As these are gcc extensions we
    do not need to worry about setting errno to EDOM.
@@ -5508,9 +5797,9 @@ expand_builtin_signbit (tree exp, rtx ta
   if (icode != CODE_FOR_nothing)
     {
       rtx_insn *last = get_last_insn ();
-      target = gen_reg_rtx (TYPE_MODE (TREE_TYPE (exp)));
-      if (maybe_emit_unop_insn (icode, target, temp, UNKNOWN))
-        return target;
+      rtx this_target = gen_reg_rtx (TYPE_MODE (TREE_TYPE (exp)));
+      if (maybe_emit_unop_insn (icode, this_target, temp, UNKNOWN))
+        return this_target;
       delete_insns_since (last);
     }
@@ -7120,6 +7409,12 @@ expand_builtin (tree exp, rtx target, rt
         return target;
       break;
+    case BUILT_IN_ISSIGNALING:
+      target = expand_builtin_issignaling (exp, target);
+      if (target)
+        return target;
+      break;
+
     CASE_FLT_FN (BUILT_IN_ICEIL):
     CASE_FLT_FN (BUILT_IN_LCEIL):
     CASE_FLT_FN (BUILT_IN_LLCEIL):
@@ -8963,6 +9258,11 @@ fold_builtin_classify (location_t loc, t
       arg = builtin_save_expr (arg);
       return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
+    case BUILT_IN_ISSIGNALING:
+      if (!tree_expr_maybe_nan_p (arg))
+        return omit_one_operand_loc (loc, type, integer_zero_node,
+                                     arg);
+      return NULL_TREE;
+
     default:
       gcc_unreachable ();
     }
@@ -9399,6 +9699,9 @@ fold_builtin_1 (location_t loc, tree exp
     case BUILT_IN_ISNAND128:
       return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+    case BUILT_IN_ISSIGNALING:
+      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISSIGNALING);
+
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
        return build_empty_stmt (loc);
--- gcc/optabs.def.jj	2022-02-04 14:36:55.424599447 +0100
+++ gcc/optabs.def	2022-08-11 13:06:09.888416939 +0200
@@ -313,6 +313,7 @@ OPTAB_D (fmod_optab, "fmod$a3")
 OPTAB_D (hypot_optab, "hypot$a3")
 OPTAB_D (ilogb_optab, "ilogb$a2")
 OPTAB_D (isinf_optab, "isinf$a2")
+OPTAB_D (issignaling_optab, "issignaling$a2")
 OPTAB_D (ldexp_optab, "ldexp$a3")
 OPTAB_D (log10_optab, "log10$a2")
 OPTAB_D (log1p_optab, "log1p$a2")
--- gcc/fold-const-call.cc.jj	2022-01-18 11:58:59.510983085 +0100
+++ gcc/fold-const-call.cc	2022-08-11 12:31:07.294918860 +0200
@@ -952,6 +952,10 @@ fold_const_call_ss (wide_int *result, co
       *result = wi::shwi (real_isfinite (arg) ? 1 : 0, precision);
       return true;
+    case CFN_BUILT_IN_ISSIGNALING:
+      *result = wi::shwi (real_issignaling_nan (arg) ? 1 : 0, precision);
+      return true;
+
     CASE_CFN_ISINF:
     case CFN_BUILT_IN_ISINFD32:
     case CFN_BUILT_IN_ISINFD64:
--- gcc/config/i386/i386.md.jj	2022-08-10 09:06:51.463232943 +0200
+++ gcc/config/i386/i386.md	2022-08-12 11:56:14.763951760 +0200
@@ -24720,6 +24720,58 @@ (define_expand "spaceshipxf3"
   DONE;
 })

+;; Defined because the generic expand_builtin_issignaling for XFmode
+;; only tests for sNaNs, but i387 treats also pseudo numbers as always
+;; signaling.
+(define_expand "issignalingxf2"
+  [(match_operand:SI 0 "register_operand")
+   (match_operand:XF 1 "general_operand")]
+  ""
+{
+  rtx temp = operands[1];
+  if (!MEM_P (temp))
+    {
+      rtx mem = assign_stack_temp (XFmode, GET_MODE_SIZE (XFmode));
+      emit_move_insn (mem, temp);
+      temp = mem;
+    }
+  rtx ex = adjust_address (temp, HImode, 8);
+  rtx hi = adjust_address (temp, SImode, 4);
+  rtx lo = adjust_address (temp, SImode, 0);
+  rtx val = GEN_INT (HOST_WIDE_INT_M1U << 30);
+  rtx mask = GEN_INT (0x7fff);
+  rtx bit = GEN_INT (HOST_WIDE_INT_1U << 30);
+  /* Expand to:
+     ((ex & mask) && (int) hi >= 0)
+     || ((ex & mask) == mask && ((hi ^ bit) | ((lo | -lo) >> 31)) > val).  */
+  rtx nlo = expand_unop (SImode, neg_optab, lo, NULL_RTX, 0);
+  lo = expand_binop (SImode, ior_optab, lo, nlo,
+                     NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  lo = expand_shift (RSHIFT_EXPR, SImode, lo, 31, NULL_RTX, 1);
+  temp = expand_binop (SImode, xor_optab, hi, bit,
+                       NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  temp = expand_binop (SImode, ior_optab, temp, lo,
+                       NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  temp = emit_store_flag_force (gen_reg_rtx (SImode), GTU, temp, val,
+                                SImode, 1, 1);
+  ex = expand_binop (HImode, and_optab, ex, mask,
+                     NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  rtx temp2 = emit_store_flag_force (gen_reg_rtx (SImode), NE,
+                                     ex, const0_rtx, SImode, 1, 1);
+  ex = emit_store_flag_force (gen_reg_rtx (SImode), EQ,
+                              ex, mask, HImode, 1, 1);
+  temp = expand_binop (SImode, and_optab, temp, ex,
+                       NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  rtx temp3 = emit_store_flag_force (gen_reg_rtx (SImode), GE,
+                                     hi, const0_rtx, SImode, 0, 1);
+  temp2 = expand_binop (SImode, and_optab, temp2, temp3,
+                        NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  temp = expand_binop (SImode, ior_optab, temp, temp2,
+                       NULL_RTX, 1, OPTAB_LIB_WIDEN);
+  emit_move_insn (operands[0], temp);
+  DONE;
+})
+
 (include "mmx.md")
 (include "sse.md")
 (include "sync.md")
--- gcc/doc/extend.texi.jj	2022-07-26 10:32:23.642272293 +0200
+++ gcc/doc/extend.texi	2022-08-11 21:57:06.727147454 +0200
@@
-13001,6 +13001,7 @@ is called and the @var{flag} argument pa
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
+@findex __builtin_issignaling
 @findex __builtin_isunordered
 @findex __builtin_object_size
 @findex __builtin_powi
@@ -14489,6 +14490,14 @@ Similar to @code{__builtin_nans}, except
 @code{_Float@var{n}x}.
 @end deftypefn

+@deftypefn {Built-in Function} int __builtin_issignaling (...)
+Return non-zero if the argument is a signaling NaN and zero otherwise.
+Note while the parameter list is an
+ellipsis, this function only accepts exactly one floating-point
+argument.  GCC treats this parameter as type-generic, which means it
+does not do default promotion from float to double.
+@end deftypefn
+
 @deftypefn {Built-in Function} int __builtin_ffs (int x)
 Returns one plus the index of the least significant 1-bit of @var{x}, or
 if @var{x} is zero, returns zero.
--- gcc/doc/md.texi.jj	2022-06-27 11:18:02.610059335 +0200
+++ gcc/doc/md.texi	2022-08-11 22:00:11.470708501 +0200
@@ -6184,6 +6184,10 @@ floating-point mode.

 This pattern is not allowed to @code{FAIL}.

+@cindex @code{issignaling@var{m}2} instruction pattern
+@item @samp{issignaling@var{m}2}
+Set operand 0 to 1 if operand 1 is a signaling NaN and to 0 otherwise.
+
 @cindex @code{cadd90@var{m}3} instruction pattern
 @item @samp{cadd90@var{m}3}
 Perform vector add and subtract on even/odd number pairs.  The operation being
--- gcc/c-family/c-common.cc.jj	2022-08-10 09:06:51.214236184 +0200
+++ gcc/c-family/c-common.cc	2022-08-11 12:19:06.471714333 +0200
@@ -6294,6 +6294,7 @@ check_builtin_function_arguments (locati
     case BUILT_IN_ISINF_SIGN:
     case BUILT_IN_ISNAN:
     case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISSIGNALING:
     case BUILT_IN_SIGNBIT:
       if (builtin_function_validate_nargs (loc, fndecl, nargs, 1))
        {
--- gcc/c/c-typeck.cc.jj	2022-08-10 09:06:51.331234661 +0200
+++ gcc/c/c-typeck.cc	2022-08-11 12:16:20.586995677 +0200
@@ -3546,6 +3546,7 @@ convert_arguments (location_t loc, vec