From patchwork Tue Jan 23 07:11:11 2024
X-Patchwork-Submitter: Alexandre Oliva
X-Patchwork-Id: 84593
From: Alexandre Oliva
To: gcc-patches@gcc.gnu.org
Cc: Richard Earnshaw, Richard Sandiford, Marcus Shawcroft, Kyrylo Tkachov
Subject: [PATCH] aarch64: enforce lane checking for intrinsics
Organization: Free thinker, does not speak for AdaCore
Date: Tue, 23 Jan 2024 04:11:11 -0300

Calling arm_neon.h functions that take lanes as arguments may fail to
report malformed values if the intrinsic happens to be optimized away,
e.g. because it is pure or const and the result is unused.

Adding __AARCH64_LANE_CHECK calls to the always_inline functions would
duplicate errors in case the intrinsics are not optimized away; using
another preprocessor macro to call either the intrinsic or
__builtin_aarch64_im_lane_boundsi moves the error messages to the
arm_neon.h header, and may add warnings if we fall off the end of the
functions; duplicating the code to avoid the undesirable effect of the
macros doesn't seem appealing; separating the checking from alternate
no-error-checking const/pure (invisible?) intrinsics, e.g. in folding
of non-const/pure (user-callable) intrinsics, seems ugly and risky.
So I propose dropping the pure/const attribute from the intrinsics and
builtin declarations, so that gimple passes won't optimize them away.
After expand (when errors are detected and reported), we get plain
insns rather than calls, and those are dropped if the outputs are
unused.  It's not ideal, it could be improved, but it's safe enough
for this stage.

Regstrapped on x86_64-linux-gnu, along with other patches; also tested
on aarch64-elf with gcc-13.  This addresses the issue first reported
at .  Ok to install?


for  gcc/ChangeLog

	* config/aarch64/aarch64-builtins.cc (aarch64_get_attributes):
	Add lane_check parm, to rule out pure and const.
	(aarch64_init_simd_intrinsics): Pass lane_check if any arg has
	lane index qualifiers.
	(aarch64_init_simd_builtin_functions): Likewise.
---
 gcc/config/aarch64/aarch64-builtins.cc | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/gcc/config/aarch64/aarch64-builtins.cc b/gcc/config/aarch64/aarch64-builtins.cc
index 9b23b6b8c33f1..1268deea28e6c 100644
--- a/gcc/config/aarch64/aarch64-builtins.cc
+++ b/gcc/config/aarch64/aarch64-builtins.cc
@@ -1258,11 +1258,12 @@ aarch64_add_attribute (const char *name, tree attrs)
 /* Return the appropriate attributes for a function that has
    flags F and mode MODE.  */
 static tree
-aarch64_get_attributes (unsigned int f, machine_mode mode)
+aarch64_get_attributes (unsigned int f, machine_mode mode,
+			bool lane_check = false)
 {
   tree attrs = NULL_TREE;
 
-  if (!aarch64_modifies_global_state_p (f, mode))
+  if (!lane_check && !aarch64_modifies_global_state_p (f, mode))
     {
       if (aarch64_reads_global_state_p (f, mode))
 	attrs = aarch64_add_attribute ("pure", attrs);
@@ -1318,6 +1319,7 @@ aarch64_init_simd_intrinsics (void)
 
       tree return_type = void_type_node;
       tree args = void_list_node;
+      bool lane_check = false;
 
       for (int op_num = d->op_count - 1; op_num >= 0; op_num--)
 	{
@@ -1330,10 +1332,17 @@ aarch64_init_simd_intrinsics (void)
 	    return_type = eltype;
 	  else
 	    args = tree_cons (NULL_TREE, eltype, args);
+
+	  if (qualifiers & (qualifier_lane_index
+			    | qualifier_struct_load_store_lane_index
+			    | qualifier_lane_pair_index
+			    | qualifier_lane_quadtup_index))
+	    lane_check = true;
 	}
 
       tree ftype = build_function_type (return_type, args);
-      tree attrs = aarch64_get_attributes (d->flags, d->op_modes[0]);
+      tree attrs = aarch64_get_attributes (d->flags, d->op_modes[0],
+					   lane_check);
       unsigned int code
 	= (d->fcode << AARCH64_BUILTIN_SHIFT | AARCH64_BUILTIN_GENERAL);
       tree fndecl = simulate_builtin_function_decl (input_location, d->name,
@@ -1400,6 +1409,7 @@ aarch64_init_simd_builtin_functions (bool called_from_pragma)
	  || (!called_from_pragma && struct_mode_args > 0))
 	continue;
 
+      bool lane_check = false;
       /* Build a function type directly from the insn_data for this
	 builtin.  The build_function_type () function takes care of
	 removing duplicates for us.  */
@@ -1435,6 +1445,12 @@ aarch64_init_simd_builtin_functions (bool called_from_pragma)
 	    return_type = eltype;
 	  else
 	    args = tree_cons (NULL_TREE, eltype, args);
+
+	  if (qualifiers & (qualifier_lane_index
+			    | qualifier_struct_load_store_lane_index
+			    | qualifier_lane_pair_index
+			    | qualifier_lane_quadtup_index))
+	    lane_check = true;
 	}
 
       ftype = build_function_type (return_type, args);
@@ -1448,7 +1464,7 @@ aarch64_init_simd_builtin_functions (bool called_from_pragma)
 	snprintf (namebuf, sizeof (namebuf), "__builtin_aarch64_%s",
 		  d->name);
 
-      tree attrs = aarch64_get_attributes (d->flags, d->mode);
+      tree attrs = aarch64_get_attributes (d->flags, d->mode, lane_check);
 
       if (called_from_pragma)
 	{