From patchwork Fri Dec 9 13:56:09 2022
X-Patchwork-Submitter: Sebastian Huber
X-Patchwork-Id: 61729
From: Sebastian Huber
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] gcov: Fix -fprofile-update=atomic
Date: Fri, 9 Dec 2022 14:56:09 +0100
Message-Id: <20221209135609.55159-1-sebastian.huber@embedded-brains.de>

The code coverage support uses counters to determine which edges in the
control flow graph were executed.  If a counter overflows, then the
code coverage information is invalid.  Therefore, the counter type
should be a 64-bit integer.

In multithreaded applications, it is important that the counter
increments are atomic.  This is not the case by default.  The user can
enable atomic counter increments through the -fprofile-update=atomic
and -fprofile-update=prefer-atomic options.

If the hardware supports 64-bit atomic operations, then everything is
fine.  If not and -fprofile-update=prefer-atomic was chosen by the
user, then non-atomic counter increments will be used.
However, if the hardware does not support the required atomic
operations and -fprofile-update=atomic was chosen by the user, then a
warning was issued and a forced fall-back to non-atomic operations was
performed.  This is probably not what a user wants.  There is still
hardware on the market which does not have atomic operations and is
used for multithreaded applications.  A user who selects
-fprofile-update=atomic wants consistent code coverage data and not
random data.

This patch removes the fall-back to non-atomic operations for
-fprofile-update=atomic.  If atomic operations in hardware are not
available, then a library call to libatomic is emitted.  To mitigate
potential performance issues, an optimization for systems which only
support 32-bit atomic operations is provided.  Here, the edge counter
increments are done like this:

  low = __atomic_add_fetch_4 (&counter.low, 1, MEMMODEL_RELAXED);
  high_inc = low == 0 ? 1 : 0;
  __atomic_add_fetch_4 (&counter.high, high_inc, MEMMODEL_RELAXED);

gcc/ChangeLog:

	* tree-profile.cc (split_atomic_increment): New.
	(gimple_gen_edge_profiler): Split the atomic edge counter increment
	in two 32-bit atomic operations if necessary.
	(tree_profiling): Remove profile update warning and fall-back.  Set
	split_atomic_increment if necessary.
---
 gcc/tree-profile.cc | 81 +++++++++++++++++++++++++++++++++------------
 1 file changed, 59 insertions(+), 22 deletions(-)

diff --git a/gcc/tree-profile.cc b/gcc/tree-profile.cc
index 2beb49241f2..1d326dde59a 100644
--- a/gcc/tree-profile.cc
+++ b/gcc/tree-profile.cc
@@ -73,6 +73,17 @@ static GTY(()) tree ic_tuple_var;
 static GTY(()) tree ic_tuple_counters_field;
 static GTY(()) tree ic_tuple_callee_field;
 
+/* If the user selected atomic profile counter updates
+   (-fprofile-update=atomic), then the counter updates will be done atomically.
+   Ideally, this is done through atomic operations in hardware.  If the
+   hardware supports only 32-bit atomic increments and gcov_type_node is a
+   64-bit integer type, then for the profile edge counters the increment is
+   performed through two separate 32-bit atomic increments.  This case is
+   indicated by the split_atomic_increment variable being true.  If the
+   hardware does not support atomic operations at all, then a library call to
+   libatomic is emitted.  */
+static bool split_atomic_increment;
+
 /* Do initialization work for the edge profiler.  */
 
 /* Add code:
@@ -242,30 +253,59 @@ gimple_init_gcov_profiler (void)
 void
 gimple_gen_edge_profiler (int edgeno, edge e)
 {
-  tree one;
-
-  one = build_int_cst (gcov_type_node, 1);
+  const char *name = "PROF_edge_counter";
+  tree ref = tree_coverage_counter_ref (GCOV_COUNTER_ARCS, edgeno);
+  tree one = build_int_cst (gcov_type_node, 1);
 
   if (flag_profile_update == PROFILE_UPDATE_ATOMIC)
     {
-      /* __atomic_fetch_add (&counter, 1, MEMMODEL_RELAXED); */
-      tree addr = tree_coverage_counter_addr (GCOV_COUNTER_ARCS, edgeno);
-      tree f = builtin_decl_explicit (TYPE_PRECISION (gcov_type_node) > 32
-				      ? BUILT_IN_ATOMIC_FETCH_ADD_8:
-				      BUILT_IN_ATOMIC_FETCH_ADD_4);
-      gcall *stmt = gimple_build_call (f, 3, addr, one,
-				       build_int_cst (integer_type_node,
-						      MEMMODEL_RELAXED));
-      gsi_insert_on_edge (e, stmt);
+      tree addr = build_fold_addr_expr (ref);
+      tree relaxed = build_int_cst (integer_type_node, MEMMODEL_RELAXED);
+      if (!split_atomic_increment)
+	{
+	  /* __atomic_fetch_add (&counter, 1, MEMMODEL_RELAXED); */
+	  tree f = builtin_decl_explicit (TYPE_PRECISION (gcov_type_node) > 32
+					  ? BUILT_IN_ATOMIC_FETCH_ADD_8:
+					  BUILT_IN_ATOMIC_FETCH_ADD_4);
+	  gcall *stmt = gimple_build_call (f, 3, addr, one, relaxed);
+	  gsi_insert_on_edge (e, stmt);
+	}
+      else
+	{
+	  /* low = __atomic_add_fetch_4 (addr, 1, MEMMODEL_RELAXED);
+	     high_inc = low == 0 ? 1 : 0;
+	     __atomic_add_fetch_4 (addr_high, high_inc, MEMMODEL_RELAXED); */
+	  tree zero32 = build_zero_cst (uint32_type_node);
+	  tree one32 = build_one_cst (uint32_type_node);
+	  tree addr_high = make_temp_ssa_name (TREE_TYPE (addr), NULL, name);
+	  gimple *stmt = gimple_build_assign (addr_high, POINTER_PLUS_EXPR,
+					      addr,
+					      build_int_cst (size_type_node,
+							     4));
+	  gsi_insert_on_edge (e, stmt);
+	  if (WORDS_BIG_ENDIAN)
+	    std::swap (addr, addr_high);
+	  tree f = builtin_decl_explicit (BUILT_IN_ATOMIC_ADD_FETCH_4);
+	  stmt = gimple_build_call (f, 3, addr, one, relaxed);
+	  tree low = make_temp_ssa_name (uint32_type_node, NULL, name);
+	  gimple_call_set_lhs (stmt, low);
+	  gsi_insert_on_edge (e, stmt);
+	  tree is_zero = make_temp_ssa_name (boolean_type_node, NULL, name);
+	  stmt = gimple_build_assign (is_zero, EQ_EXPR, low, zero32);
+	  gsi_insert_on_edge (e, stmt);
+	  tree high_inc = make_temp_ssa_name (uint32_type_node, NULL, name);
+	  stmt = gimple_build_assign (high_inc, COND_EXPR, is_zero, one32,
+				      zero32);
+	  gsi_insert_on_edge (e, stmt);
+	  stmt = gimple_build_call (f, 3, addr_high, high_inc, relaxed);
+	  gsi_insert_on_edge (e, stmt);
+	}
     }
   else
     {
-      tree ref = tree_coverage_counter_ref (GCOV_COUNTER_ARCS, edgeno);
-      tree gcov_type_tmp_var = make_temp_ssa_name (gcov_type_node,
-						   NULL, "PROF_edge_counter");
+      tree gcov_type_tmp_var = make_temp_ssa_name (gcov_type_node, NULL, name);
       gassign *stmt1 = gimple_build_assign (gcov_type_tmp_var, ref);
-      gcov_type_tmp_var = make_temp_ssa_name (gcov_type_node,
-					      NULL, "PROF_edge_counter");
+      gcov_type_tmp_var = make_temp_ssa_name (gcov_type_node, NULL, name);
       gassign *stmt2 = gimple_build_assign (gcov_type_tmp_var, PLUS_EXPR,
 					    gimple_assign_lhs (stmt1), one);
       gassign *stmt3 = gimple_build_assign (unshare_expr (ref),
@@ -710,11 +750,8 @@ tree_profiling (void)
 
   if (flag_profile_update == PROFILE_UPDATE_ATOMIC
       && !can_support_atomic)
-    {
-      warning (0, "target does not support atomic profile update, "
-	       "single mode is selected");
-      flag_profile_update = PROFILE_UPDATE_SINGLE;
-    }
+    split_atomic_increment = HAVE_sync_compare_and_swapsi
+			     || HAVE_atomic_compare_and_swapsi;
   else if (flag_profile_update == PROFILE_UPDATE_PREFER_ATOMIC)
     flag_profile_update = can_support_atomic
       ? PROFILE_UPDATE_ATOMIC : PROFILE_UPDATE_SINGLE;