From patchwork Sun Nov 14 00:59:44 2021
From: Hongyu Wang
To: jakub@redhat.com
Cc: gcc-patches@gcc.gnu.org
Subject: [PATCH] PR libgomp/103068: Optimize gomp_mutex_lock_slow for x86 target
Date: Sun, 14 Nov 2021 08:59:44 +0800
Message-Id: <20211114005944.66759-1-hongyu.wang@intel.com>

Hi,

From the CPU's point of view, getting a cache line for writing is more
expensive than reading.  See Appendix A.2 "Spinlock" in:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

A full compare-and-swap grabs the cache line exclusively and causes
excessive cache-line bouncing.  gomp_mutex_lock_slow spins on
__atomic_compare_exchange_n, so add a relaxed load check and keep
spinning while the cmpxchg would fail anyway.

Bootstrapped/regtested on x86_64-pc-linux-gnu{-m32,}.  Ok for master?
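
To make the cache-line argument concrete, here is a minimal standalone
sketch of the same test-and-test-and-set idea.  This is illustration
only, not the libgomp code itself: spin_lock/spin_unlock and the plain
int lock word are hypothetical names.  The relaxed load spins on a
cache line that can stay shared across waiters; the expensive
read-for-ownership cmpxchg is only attempted once the load suggests the
lock is free:

  #include <stdbool.h>

  /* Hypothetical sketch of a test-and-test-and-set spin lock.
     0 = unlocked, 1 = locked.  x86-only because of the PAUSE hint.  */

  static inline void
  spin_lock (int *lock)
  {
    for (;;)
      {
	/* Read-only spin: the line can remain in the Shared state, so
	   waiting cores do not bounce it back and forth.  */
	while (__atomic_load_n (lock, __ATOMIC_RELAXED) != 0)
	  __builtin_ia32_pause ();	/* PAUSE, as in cpu_relax.  */

	/* Attempt the exclusive (write) access only when the load says
	   the lock looks free, so most failed acquisition attempts
	   never issue a cmpxchg at all.  */
	int expected = 0;
	if (__atomic_compare_exchange_n (lock, &expected, 1, false,
					 __ATOMIC_ACQUIRE,
					 __ATOMIC_RELAXED))
	  return;
      }
  }

  static inline void
  spin_unlock (int *lock)
  {
    __atomic_store_n (lock, 0, __ATOMIC_RELEASE);
  }

The patch below applies the same load check inside the existing spin
loop, gated by TARGET_X86_AVOID_CMPXCHG so non-x86 targets keep the
current behaviour.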
libgomp/ChangeLog:

	PR libgomp/103068
	* config/linux/mutex.c (gomp_mutex_lock_slow): Continue spin
	loop when mutex is not 0 under x86 target.
	* config/linux/x86/futex.h (TARGET_X86_AVOID_CMPXCHG): Define.
---
 libgomp/config/linux/mutex.c     | 5 +++++
 libgomp/config/linux/x86/futex.h | 2 ++
 2 files changed, 7 insertions(+)

diff --git a/libgomp/config/linux/mutex.c b/libgomp/config/linux/mutex.c
index 838264dc1f9..4e87566eb2b 100644
--- a/libgomp/config/linux/mutex.c
+++ b/libgomp/config/linux/mutex.c
@@ -49,6 +49,11 @@ gomp_mutex_lock_slow (gomp_mutex_t *mutex, int oldval)
 	}
       else
 	{
+#ifdef TARGET_X86_AVOID_CMPXCHG
+	  /* For x86, omit cmpxchg when atomic load shows mutex is not 0.  */
+	  if ((oldval = __atomic_load_n (mutex, MEMMODEL_RELAXED)) != 0)
+	    continue;
+#endif
 	  /* Something changed.  If now unlocked, we're good to go.  */
 	  oldval = 0;
 	  if (__atomic_compare_exchange_n (mutex, &oldval, 1, false,
diff --git a/libgomp/config/linux/x86/futex.h b/libgomp/config/linux/x86/futex.h
index e7f53399a4e..acc1d1467d7 100644
--- a/libgomp/config/linux/x86/futex.h
+++ b/libgomp/config/linux/x86/futex.h
@@ -122,3 +122,5 @@ cpu_relax (void)
 {
   __builtin_ia32_pause ();
 }
+
+#define TARGET_X86_AVOID_CMPXCHG