From patchwork Thu Nov 11 16:24:27 2021
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 47487
To: libc-alpha@sourceware.org
Subject: [PATCH v6 3/4] Reduce CAS in malloc spinlocks
Date: Thu, 11 Nov 2021 08:24:27 -0800
Message-Id: <20211111162428.2286605-4-hjl.tools@gmail.com>
In-Reply-To: <20211111162428.2286605-1-hjl.tools@gmail.com>
References: <20211111162428.2286605-1-hjl.tools@gmail.com>
X-Mailer: git-send-email 2.33.1
From: "H.J. Lu"
Reply-To: "H.J. Lu"
Cc: Florian Weimer, Andreas Schwab, "Paul A. Clarke", Arjan van de Ven

Do an atomic load first and check whether the compare would fail.  If it
would, skip the CAS and spin instead, to reduce cache-line bouncing on
contended locks.
---
 malloc/arena.c  |  5 +++++
 malloc/malloc.c | 10 ++++++++++
 2 files changed, 15 insertions(+)

diff --git a/malloc/arena.c b/malloc/arena.c
index 78ef4cf18c..e7fbe7c183 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -899,6 +899,11 @@ arena_get2 (size_t size, mstate avoid_arena)
              enough address space to create that many arenas.  */
           if (__glibc_unlikely (n <= narenas_limit - 1))
             {
+              if (atomic_load_relaxed (&narenas) != n)
+                {
+                  atomic_spin_nop ();
+                  goto repeat;
+                }
               if (catomic_compare_and_exchange_bool_acq (&narenas, n + 1, n))
                 goto repeat;
               a = _int_new_arena (size);
diff --git a/malloc/malloc.c b/malloc/malloc.c
index 095d97a3be..403ffb84ef 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -3717,6 +3717,11 @@ _int_malloc (mstate av, size_t bytes)
           pp = REVEAL_PTR (victim->fd);                                     \
           if (__glibc_unlikely (pp != NULL && misaligned_chunk (pp)))       \
             malloc_printerr ("malloc(): unaligned fastbin chunk detected"); \
+          if (atomic_load_relaxed (fb) != victim)                           \
+            {                                                               \
+              atomic_spin_nop ();                                           \
+              continue;                                                     \
+            }                                                               \
         }                                                                   \
       while ((pp = catomic_compare_and_exchange_val_acq (fb, pp, victim))   \
              != victim);                                                    \
@@ -4435,6 +4440,11 @@ _int_free (mstate av, mchunkptr p, int have_lock)
           malloc_printerr ("double free or corruption (fasttop)");
         old2 = old;
         p->fd = PROTECT_PTR (&p->fd, old);
+        if (atomic_load_relaxed (fb) != old2)
+          {
+            atomic_spin_nop ();
+            continue;
+          }
       }
     while ((old = catomic_compare_and_exchange_val_rel (fb, p, old2))
            != old2);
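
For readers less familiar with the trick: this is the classic "test before
test-and-set" pattern.  A plain (relaxed) load tells a spinning thread
whether the upcoming CAS could possibly succeed; while it cannot, the thread
only reads the location, keeping the cache line in shared state instead of
repeatedly pulling it exclusive for a CAS that is bound to fail.  Below is a
minimal standalone sketch of the same idea using C11 atomics rather than
glibc's internal catomic_* / atomic_spin_nop helpers; the names counter,
COUNTER_LIMIT and claim_slot are purely illustrative, and sched_yield stands
in for a real spin hint.

#include <stdatomic.h>
#include <sched.h>
#include <stdio.h>

/* Hypothetical shared counter with an upper bound, loosely analogous
   to narenas / narenas_limit in arena.c.  */
static _Atomic size_t counter;
#define COUNTER_LIMIT 64

/* Claim one slot by bumping 'counter'.  Re-check with a plain load
   right before the CAS and retry if the CAS is bound to fail.
   Returns 1 on success, 0 once the limit is reached.  */
static int
claim_slot (void)
{
  size_t n;

 repeat:
  n = atomic_load_explicit (&counter, memory_order_relaxed);
  if (n >= COUNTER_LIMIT)
    return 0;

  /* ... other per-attempt work could happen here, as in arena_get2,
     which is what makes the pre-check below worthwhile ...  */

  /* Pre-check: if the counter has already moved on, the CAS below
     cannot succeed, so back off and retry without bouncing the cache
     line into exclusive state.  */
  if (atomic_load_explicit (&counter, memory_order_relaxed) != n)
    {
      sched_yield ();      /* stand-in for atomic_spin_nop ()  */
      goto repeat;
    }

  /* The CAS proper, with acquire semantics mirroring
     catomic_compare_and_exchange_bool_acq (&narenas, n + 1, n).  */
  size_t expected = n;
  if (!atomic_compare_exchange_strong_explicit (&counter, &expected, n + 1,
                                                memory_order_acquire,
                                                memory_order_relaxed))
    goto repeat;

  return 1;
}

int
main (void)
{
  if (claim_slot ())
    printf ("claimed slot, counter = %zu\n",
            (size_t) atomic_load (&counter));
  return 0;
}

The same shape applies to the fastbin loops in _int_malloc and _int_free:
the relaxed load of *fb filters out doomed CAS attempts, and the CAS itself
still carries the acquire/release ordering it had before the patch.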