From patchwork Sat Jan 28 02:18:24 2023
X-Patchwork-Id: 63836
X-Patchwork-Delegate: carlos@redhat.com
To: libc-alpha@sourceware.org
Cc: Malte Skarupke
Subject: [PATCH 4/9] nptl: Remove unnecessary quadruple check in pthread_cond_wait
Date: Fri, 27 Jan 2023 21:18:24 -0500
Message-Id: <20230128021829.7990-5-malteskarupke@fastmail.fm>
In-Reply-To: <20230128021829.7990-1-malteskarupke@fastmail.fm>
References: <20230128021829.7990-1-malteskarupke@fastmail.fm>
Reply-To: malteskarupke@fastmail.fm

From: Malte Skarupke <malteskarupke@fastmail.fm>

pthread_cond_wait was checking
whether it was in a closed group no less than four times.  Checking once
is enough.  Here are the four checks:

1. While spin-waiting.  This was dead code: maxspin is set to 0 and has
   been for years.
2. Before deciding to go to sleep, and before incrementing grefs.  I kept
   this one.
3. After incrementing grefs.  There is no reason to think that the group
   would close while we do an atomic increment.  Obviously it could close
   at any point, but that doesn't mean we have to recheck after every
   step.  This check was no better than check 2, and it does more work.
4. When we find ourselves in a group that has a signal.  We only get here
   after we check that we're not in a closed group.  There is no need to
   check again.  The check would only have helped in cases where the
   compare_exchange in the next line would also have failed.  Relying on
   the compare_exchange is fine; a small standalone sketch of this
   pattern follows the diff.

Removing the duplicate checks clarifies the code.
---
 nptl/pthread_cond_wait.c | 49 ----------------------------------------
 1 file changed, 49 deletions(-)

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index cee1968756..47e834cade 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -366,7 +366,6 @@ static __always_inline int
 __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
                             clockid_t clockid, const struct __timespec64 *abstime)
 {
-  const int maxspin = 0;
   int err;
   int result = 0;
 
@@ -425,33 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
       uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
       unsigned int lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
 
-      /* Spin-wait first.
-         Note that spinning first without checking whether a timeout
-         passed might lead to what looks like a spurious wake-up even
-         though we should return ETIMEDOUT (e.g., if the caller provides
-         an absolute timeout that is clearly in the past).  However,
-         (1) spurious wake-ups are allowed, (2) it seems unlikely that a
-         user will (ab)use pthread_cond_wait as a check for whether a
-         point in time is in the past, and (3) spinning first without
-         having to compare against the current time seems to be the right
-         choice from a performance perspective for most use cases.  */
-      unsigned int spin = maxspin;
-      while (spin > 0 && ((int)(signals - lowseq) < 2))
-        {
-          /* Check that we are not spinning on a group that's already
-             closed.  */
-          if (seq < (g1_start >> 1))
-            break;
-
-          /* TODO Back off.  */
-
-          /* Reload signals.  See above for MO.  */
-          signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          g1_start = __condvar_load_g1_start_relaxed (cond);
-          lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-          spin--;
-        }
-
       if (seq < (g1_start >> 1))
         {
           /* If the group is closed already,
@@ -482,24 +454,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
              an atomic read-modify-write operation and thus extend the release
              sequence.  */
           atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
-          signals = atomic_load_acquire (cond->__data.__g_signals + g);
-          g1_start = __condvar_load_g1_start_relaxed (cond);
-          lowseq = (g1_start & 1) == g ? signals : g1_start & ~1U;
-
-          if (seq < (g1_start >> 1))
-            {
-              /* group is closed already, so don't block */
-              __condvar_dec_grefs (cond, g, private);
-              goto done;
-            }
-
-          if ((int)(signals - lowseq) >= 2)
-            {
-              /* a signal showed up or G1/G2 switched after we grabbed the
-                 refcount */
-              __condvar_dec_grefs (cond, g, private);
-              break;
-            }
 
           // Now block.
          struct _pthread_cleanup_buffer buffer;
@@ -533,9 +487,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
           /* Reload signals.  See above for MO.  */
           signals = atomic_load_acquire (cond->__data.__g_signals + g);
         }
-
-      if (seq < (__condvar_load_g1_start_relaxed (cond) >> 1))
-        goto done;
     }
 
   /* Try to grab a signal.  See above for MO.  (if we do another loop
      iteration we need to see the correct value of g1_start) */
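
To illustrate the point in check 4 above, here is a small, self-contained
sketch in plain C11 atomics.  It is not the glibc code; the names
pending_signals, try_take_signal, and signaler are invented for this
example.  The point it demonstrates: when a value is taken with a
compare_exchange loop, a separate "did the state change?" re-check right
before the compare_exchange adds nothing, because any concurrent change
makes the compare_exchange itself fail and hand back the fresh value.

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for cond->__data.__g_signals[g] in this sketch.  */
static _Atomic unsigned int pending_signals;

/* Try to consume one signal.  A failed compare_exchange rewrites
   EXPECTED with the value it actually found, so a stale load is
   corrected automatically and no separate pre-check is needed.  */
static bool
try_take_signal (void)
{
  unsigned int expected
    = atomic_load_explicit (&pending_signals, memory_order_acquire);
  while (expected > 0)
    if (atomic_compare_exchange_weak_explicit (&pending_signals, &expected,
                                               expected - 1,
                                               memory_order_acquire,
                                               memory_order_acquire))
      return true;
  return false;  /* Nothing to take; a real waiter would now block.  */
}

/* Stand-in for a signaler making one signal available.  */
static void *
signaler (void *arg)
{
  (void) arg;
  atomic_fetch_add_explicit (&pending_signals, 1, memory_order_release);
  return NULL;
}

int
main (void)
{
  pthread_t t;
  pthread_create (&t, NULL, signaler, NULL);
  pthread_join (t, NULL);
  printf ("grabbed a signal: %s\n", try_take_signal () ? "yes" : "no");
  return 0;
}

Build with e.g. "cc -std=c11 -pthread".  The real waiter additionally has
to handle group closing and blocking on a futex, but the structure is the
same: one closed-group check before preparing to block, and the
compare_exchange catching whatever changes afterwards.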