Message ID | 20211110184153.2269857-1-hjl.tools@gmail.com |
---|---|
Headers |
Return-Path: <libc-alpha-bounces+patchwork=sourceware.org@sourceware.org> X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 5333B385780A for <patchwork@sourceware.org>; Wed, 10 Nov 2021 18:44:24 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 5333B385780A DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sourceware.org; s=default; t=1636569864; bh=pgZBpT64yP+B50WAhILnzhI4oNohUQAP7wbSQO1iDwU=; h=To:Subject:Date:List-Id:List-Unsubscribe:List-Archive:List-Post: List-Help:List-Subscribe:From:Reply-To:Cc:From; b=klRl7hpSWC6RK+jxDTpXYvGU00hJx5PuUSGXY5HRIgfRRCEr2+4YrKZBE1ejTXZgy tDzFE7xPGdyvqncM9s9rtZskqKIOVi7jkh8MjQF4yGqDOR2oahA80OW74X2giGWVxH JCv4qnUQopCwy0bro9rSqkABgCnlWU/wLHSlga18= X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-pf1-x42a.google.com (mail-pf1-x42a.google.com [IPv6:2607:f8b0:4864:20::42a]) by sourceware.org (Postfix) with ESMTPS id 86CA53858400 for <libc-alpha@sourceware.org>; Wed, 10 Nov 2021 18:41:57 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.1 sourceware.org 86CA53858400 Received: by mail-pf1-x42a.google.com with SMTP id y5so3460143pfb.4 for <libc-alpha@sourceware.org>; Wed, 10 Nov 2021 10:41:57 -0800 (PST) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version :content-transfer-encoding; bh=pgZBpT64yP+B50WAhILnzhI4oNohUQAP7wbSQO1iDwU=; b=oj4NMMlknopXDoGITsgiVWN3UNBbV7YThm8b4BhlZMKMYXD1wSZKnbB54lwyE9PuA5 nL0i/01KY11KPov1Z6Tvcf7WL591i4fdEpZNq5tgstD8+xWAt/jrAeDfUQ8X8xk/7OJF dD41cAX5u/da92dWQ02oDO1/gnZQF6PiNoBNp80zakDHRFgNSJlYTFjT8u1y+UJbRXSX PQRLcNy00bq4RpSb4ojajW9l5Sjr7ojZ7FkeDQifpzrrr4rSDyfT5XHZH+/ANftg+Jjf QqhEbdUIQSbPjiEgivClQgdOQ8ucMnK1hdJAfveNvyAHZh3eFJcD8mHPTUiEu5v6pkpy Unkg== X-Gm-Message-State: 
AOAM533e0UX0Rgg/kyS7MCkveO//z8Ku7C47HxSkviWHmhw589nZsmTk t88FRsZtAdHvQU5SgImnChmxHq8jrN4= X-Google-Smtp-Source: ABdhPJwDSvZtLrTAXiHaQOKChH5HjhfYR2xGom6Xem7B36ZfQOckTwsPCC3S7j5NIJg3CKeSAL0onw== X-Received: by 2002:a63:384:: with SMTP id 126mr634732pgd.33.1636569716587; Wed, 10 Nov 2021 10:41:56 -0800 (PST) Received: from gnu-cfl-2.localdomain ([172.58.35.133]) by smtp.gmail.com with ESMTPSA id u13sm240043pga.92.2021.11.10.10.41.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 10 Nov 2021 10:41:54 -0800 (PST) Received: from gnu-cfl-2.lan (localhost [IPv6:::1]) by gnu-cfl-2.localdomain (Postfix) with ESMTP id 3A27B1A0240; Wed, 10 Nov 2021 10:41:53 -0800 (PST) To: libc-alpha@sourceware.org Subject: [PATCH v5 0/3] Optimize CAS [BZ #28537] Date: Wed, 10 Nov 2021 10:41:50 -0800 Message-Id: <20211110184153.2269857-1-hjl.tools@gmail.com> X-Mailer: git-send-email 2.33.1 MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Spam-Status: No, score=-3022.0 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, FREEMAIL_FROM, RCVD_IN_BARRACUDACENTRAL, RCVD_IN_DNSWL_NONE, SPF_HELO_NONE, SPF_PASS, TXREP autolearn=no autolearn_force=no version=3.4.4 X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Libc-alpha mailing list <libc-alpha.sourceware.org> List-Unsubscribe: <https://sourceware.org/mailman/options/libc-alpha>, <mailto:libc-alpha-request@sourceware.org?subject=unsubscribe> List-Archive: <https://sourceware.org/pipermail/libc-alpha/> List-Post: <mailto:libc-alpha@sourceware.org> List-Help: <mailto:libc-alpha-request@sourceware.org?subject=help> List-Subscribe: <https://sourceware.org/mailman/listinfo/libc-alpha>, <mailto:libc-alpha-request@sourceware.org?subject=subscribe> From: "H.J. Lu via Libc-alpha" <libc-alpha@sourceware.org> Reply-To: "H.J. 
Lu" <hjl.tools@gmail.com> Cc: Florian Weimer <fweimer@redhat.com>, Andreas Schwab <schwab@linux-m68k.org>, "Paul A . Clarke" <pc@us.ibm.com>, Arjan van de Ven <arjan@linux.intel.com> Errors-To: libc-alpha-bounces+patchwork=sourceware.org@sourceware.org Sender: "Libc-alpha" <libc-alpha-bounces+patchwork=sourceware.org@sourceware.org> |
Series | Optimize CAS [BZ #28537] |
Message
H.J. Lu
Nov. 10, 2021, 6:41 p.m. UTC
Changes in v5:

1. Put back __glibc_unlikely in __lll_trylock and lll_cond_trylock.
2. Remove an atomic load in a CAS usage which has already been optimized.
3. Add an empty statement with a semicolon to a goto label, for older
   compiler versions.
4. Simplify the CAS optimization.

The CAS instruction is expensive.  From the x86 CPU's point of view,
getting a cache line for writing is more expensive than reading.  See
Appendix A.2 Spinlock in:

https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

A full compare and swap grabs the cache line exclusive and causes
excessive cache line bouncing.

Optimize CAS in low level locks and pthread_mutex_lock.c:

1. Do an atomic load and skip the CAS if the compare would fail, to
   reduce cache line bouncing on contended locks.
2. Replace atomic_compare_and_exchange_bool_acq with
   atomic_compare_and_exchange_val_acq to avoid the extra load.

This is the first patch set to optimize CAS.  I will submit the
remaining CAS optimizations in glibc after this patch set has been
accepted.

With all CAS optimizations applied, on a machine with 112 cores,
"make check -j28" under heavy load took

3093.18user 1644.12system 22:26.05elapsed 351%CPU

vs without CAS optimizations

3746.07user 1614.93system 22:02.91elapsed 405%CPU

H.J. Lu (3):
  Reduce CAS in low level locks [BZ #28537]
  Reduce CAS in __pthread_mutex_lock_full [BZ #28537]
  Optimize CAS in __pthread_mutex_lock_full [BZ #28537]

 nptl/lowlevellock.c         | 12 ++++-----
 nptl/pthread_mutex_lock.c   | 49 +++++++++++++++++++++++--------------
 sysdeps/nptl/lowlevellock.h | 33 +++++++++++++++++--------
 3 files changed, 60 insertions(+), 34 deletions(-)
Comments
On Wed, Nov 10, 2021 at 10:41:50AM -0800, H.J. Lu wrote:
> Changes in v5:
>
> 1. Put back __glibc_unlikely in __lll_trylock and lll_cond_trylock.
> 2. Remove an atomic load in a CAS usage which has been already optimized.
> 3. Add an empty statement with a semicolon to a goto label for older
>    compiler versions.
> 4. Simplify CAS optimization.
>
> CAS instruction is expensive.  From the x86 CPU's point of view, getting
> a cache line for writing is more expensive than reading.  See Appendix
> A.2 Spinlock in:
>
> https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
>
> The full compare and swap will grab the cache line exclusive and cause
> excessive cache line bouncing.
>
> Optimize CAS in low level locks and pthread_mutex_lock.c:
>
> 1. Do an atomic load and skip CAS if compare may fail to reduce cache
>    line bouncing on contended locks.
> 2. Replace atomic_compare_and_exchange_bool_acq with
>    atomic_compare_and_exchange_val_acq to avoid the extra load.
>
> This is the first patch set to optimize CAS.  I will submit the rest
> CAS optimizations in glibc after this patch set has been accepted.
>
> With all CAS optimizations applied, on a machine with 112 cores,
> "make check -j28" under heavy load took
>
> 3093.18user 1644.12system 22:26.05elapsed 351%CPU
>
> vs without CAS optimizations
>
> 3746.07user 1614.93system 22:02.91elapsed 405%CPU

I read that as about 2% slower with your changes. Is that the desired result?

PC
On Wed, Nov 10, 2021 at 12:23 PM Paul A. Clarke <pc@us.ibm.com> wrote:
>
> On Wed, Nov 10, 2021 at 10:41:50AM -0800, H.J. Lu wrote:
> > [...]
> > With all CAS optimizations applied, on a machine with 112 cores,
> > "make check -j28" under heavy load took
> >
> > 3093.18user 1644.12system 22:26.05elapsed 351%CPU
> >
> > vs without CAS optimizations
> >
> > 3746.07user 1614.93system 22:02.91elapsed 405%CPU
>
> I read that as about 2% slower with your changes. Is that the desired result?

The machine was under heavy load.  My reading is that with the CAS
optimization, it takes fewer CPU cycles and reduces CPU utilization by
13%.  It saves power and gives CPU cycles to other tasks.