From patchwork Sat Jan 16 20:49:46 2021
From: Malte Skarupke
To: libc-alpha@sourceware.org
Cc: triegel@redhat.com, malteskarupke@fastmail.fm
Subject: [PATCH 1/5] nptl: Fix pthread_cond_signal missing a sleeper (BZ #25847)
Date: Sat, 16 Jan 2021 15:49:46 -0500
Message-Id: <20210116204950.16434-1-malteskarupke@web.de>

There was a rare bug in pthread_cond_wait's handling of the case when a
signal was stolen because a waiter took a long time to leave
pthread_cond_wait.  I wrote about the bug here:
https://probablydance.com/2020/10/31/using-tla-in-the-real-world-to-understand-a-glibc-bug/

The bug was subtle and only happened in an edge case of an edge case, so
rather than fixing it directly, I decided to remove the outer edge case:
by broadening the scope of grefs, stealing of signals becomes
impossible.  A signaling thread will always wait for all waiters to
leave pthread_cond_wait before closing a group, so no waiter from the
past can come back and steal a signal from a future group.

This is the minimal set of changes necessary to fix the bug.  It makes
performance slightly worse, but the next two patches in this series
will undo most of that damage.
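
To make the invariant concrete, here is a minimal sketch of the pattern
in C11 atomics.  It is an illustration only, not the glibc code, and the
toy_cond/toy_wait/toy_switch_g1 names are made up: a waiter holds a
reference on its group slot for its entire stay in the wait function,
and the group-switching side drains all references before a slot can be
reused, so no stale waiter can consume a signal that belongs to a
future group.

#include <stdatomic.h>

struct toy_cond
{
  /* Two group slots; as in glibc, the LSB of each reference count is a
     flag, so references are counted in units of 2.  */
  _Atomic unsigned int g_refs[2];
  _Atomic unsigned int g_signals[2];
};

static void
toy_wait (struct toy_cond *c, unsigned int g)
{
  /* Take the group reference first thing; acquire MO mirrors the
     atomic_fetch_add_acquire on __g_refs in the hunk at line 405.  */
  atomic_fetch_add_explicit (&c->g_refs[g], 2, memory_order_acquire);

  /* ... block on g_signals[g] and consume a signal here ...  */

  /* Drop the reference only on the way out.  The release MO pairs with
     the acquire load in toy_switch_g1, so the switcher cannot observe
     a zero count before our signal consumption is complete.  */
  atomic_fetch_sub_explicit (&c->g_refs[g], 2, memory_order_release);
}

static void
toy_switch_g1 (struct toy_cond *c, unsigned int g1)
{
  /* Drain all references before the slot may be reused for a future
     group; a waiter that could still steal a signal cannot exist once
     this returns.  */
  while ((atomic_load_explicit (&c->g_refs[g1], memory_order_acquire)
          >> 1) != 0)
    /* glibc blocks on a futex here instead of spinning.  */;
}

int
main (void)
{
  struct toy_cond c = { { 0, 0 }, { 0, 0 } };
  toy_wait (&c, 0);       /* A waiter enters and leaves group 0.  */
  toy_switch_g1 (&c, 0);  /* Returns immediately: no reference held.  */
  return 0;
}

In glibc the drain side is __condvar_quiesce_and_switch_g1; the patch
below only moves the reference acquisition and release to the very
beginning and end of __pthread_cond_wait_common.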
---
 nptl/pthread_cond_wait.c | 29 +++++++++++------------------
 1 file changed, 11 insertions(+), 18 deletions(-)

--
2.17.1

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 02d11c61db..0f50048c0b 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -405,6 +405,10 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
   unsigned int g = wseq & 1;
   uint64_t seq = wseq >> 1;
 
+  /* Acquire a group reference and use acquire MO for that so that we
+     synchronize with the dummy read-modify-write in
+     __condvar_quiesce_and_switch_g1 if we read from that.  */
+  atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
   /* Increase the waiter reference count.  Relaxed MO is sufficient because
      we only need to synchronize when decrementing the reference count.  */
   unsigned int flags = atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8);
@@ -422,6 +426,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
     {
       __condvar_cancel_waiting (cond, seq, g, private);
       __condvar_confirm_wakeup (cond, private);
+      __condvar_dec_grefs (cond, g, private);
       return err;
     }
 
@@ -471,24 +476,14 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
             break;
 
           /* No signals available after spinning, so prepare to block.
-             We first acquire a group reference and use acquire MO for that so
-             that we synchronize with the dummy read-modify-write in
-             __condvar_quiesce_and_switch_g1 if we read from that.  In turn,
-             in this case this will make us see the closed flag on __g_signals
-             that designates a concurrent attempt to reuse the group's slot.
-             We use acquire MO for the __g_signals check to make the
-             __g1_start check work (see spinning above).
-             Note that the group reference acquisition will not mask the
-             release MO when decrementing the reference count because we use
-             an atomic read-modify-write operation and thus extend the release
-             sequence.  */
-          atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
+             First check the closed flag on __g_signals that designates a
+             concurrent attempt to reuse the group's slot.  We use acquire MO
+             for the __g_signals check to make the __g1_start check work (see
+             spinning above).  */
           if (((atomic_load_acquire (cond->__data.__g_signals + g) & 1) != 0)
               || (seq < (__condvar_load_g1_start_relaxed (cond) >> 1)))
             {
-              /* Our group is closed.  Wake up any signalers that might be
-                 waiting.  */
-              __condvar_dec_grefs (cond, g, private);
+              /* Our group is closed.  */
               goto done;
             }
 
@@ -508,7 +503,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
 
           if (__glibc_unlikely (err == ETIMEDOUT || err == EOVERFLOW))
             {
-              __condvar_dec_grefs (cond, g, private);
              /* If we timed out, we effectively cancel waiting.  Note that
                 we have decremented __g_refs before cancellation, so that a
                 deadlock between waiting for quiescence of our group in
@@ -518,8 +512,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
              result = err;
              goto done;
            }
-          else
-            __condvar_dec_grefs (cond, g, private);
 
           /* Reload signals.  See above for MO.  */
           signals = atomic_load_acquire (cond->__data.__g_signals + g);
@@ -602,6 +594,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
      to allow for execution of pthread_cond_destroy while having acquired
      the mutex.  */
   __condvar_confirm_wakeup (cond, private);
+  __condvar_dec_grefs (cond, g, private);
 
   /* Woken up; now re-acquire the mutex.  If this doesn't fail, return RESULT,
      which is set to ETIMEDOUT if a timeout occured, or zero otherwise.  */
From patchwork Sat Jan 16 20:49:47 2021
From: Malte Skarupke
To: libc-alpha@sourceware.org
Cc: triegel@redhat.com, malteskarupke@fastmail.fm
Subject: [PATCH 2/5] nptl: Remove the signal-stealing code. It is no longer
 needed.
Date: Sat, 16 Jan 2021 15:49:47 -0500
Message-Id: <20210116204950.16434-2-malteskarupke@web.de>
In-Reply-To: <20210116204950.16434-1-malteskarupke@web.de>
References: <20210116204950.16434-1-malteskarupke@web.de>

After my last change, stealing of signals can no longer happen.  This
patch removes the code that handled the case when a signal was stolen.
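
To spell this out (this is just the argument from the first patch
restated): a waiter now holds its group reference for the whole window
in which it can still consume a signal, and
__condvar_quiesce_and_switch_g1 waits for all of a group's references
to be released before the group's slot can be reused.  So by the time a
slot could alias with a newer group, no waiter of the old group can
still be inside pthread_cond_wait, and the recovery path removed below
is dead code.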
---
 nptl/pthread_cond_wait.c | 63 ----------------------------------------
 1 file changed, 63 deletions(-)

--
2.17.1

diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 0f50048c0b..7b825f81df 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -525,69 +525,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
   while (!atomic_compare_exchange_weak_acquire (cond->__data.__g_signals + g,
                                                 &signals, signals - 2));
 
-  /* We consumed a signal but we could have consumed from a more recent group
-     that aliased with ours due to being in the same group slot.  If this
-     might be the case our group must be closed as visible through
-     __g1_start.  */
-  uint64_t g1_start = __condvar_load_g1_start_relaxed (cond);
-  if (seq < (g1_start >> 1))
-    {
-      /* We potentially stole a signal from a more recent group but we do not
-         know which group we really consumed from.
-         We do not care about groups older than current G1 because they are
-         closed; we could have stolen from these, but then we just add a
-         spurious wake-up for the current groups.
-         We will never steal a signal from current G2 that was really intended
-         for G2 because G2 never receives signals (until it becomes G1).  We
-         could have stolen a signal from G2 that was conservatively added by a
-         previous waiter that also thought it stole a signal -- but given that
-         that signal was added unnecessarily, it's not a problem if we steal
-         it.
-         Thus, the remaining case is that we could have stolen from the current
-         G1, where "current" means the __g1_start value we observed.  However,
-         if the current G1 does not have the same slot index as we do, we did
-         not steal from it and do not need to undo that.  This is the reason
-         for putting a bit with G2's index into __g1_start as well.  */
-      if (((g1_start & 1) ^ 1) == g)
-        {
-          /* We have to conservatively undo our potential mistake of stealing
-             a signal.  We can stop trying to do that when the current G1
-             changes because other spinning waiters will notice this too and
-             __condvar_quiesce_and_switch_g1 has checked that there are no
-             futex waiters anymore before switching G1.
-             Relaxed MO is fine for the __g1_start load because we need to
-             merely be able to observe this fact and not have to observe
-             something else as well.
-             ??? Would it help to spin for a little while to see whether the
-             current G1 gets closed?  This might be worthwhile if the group is
-             small or close to being closed.  */
-          unsigned int s = atomic_load_relaxed (cond->__data.__g_signals + g);
-          while (__condvar_load_g1_start_relaxed (cond) == g1_start)
-            {
-              /* Try to add a signal.  We don't need to acquire the lock
-                 because at worst we can cause a spurious wake-up.  If the
-                 group is in the process of being closed (LSB is true), this
-                 has an effect similar to us adding a signal.  */
-              if (((s & 1) != 0)
-                  || atomic_compare_exchange_weak_relaxed
-                     (cond->__data.__g_signals + g, &s, s + 2))
-                {
-                  /* If we added a signal, we also need to add a wake-up on
-                     the futex.  We also need to do that if we skipped adding
-                     a signal because the group is being closed because
-                     while __condvar_quiesce_and_switch_g1 could have closed
-                     the group, it might stil be waiting for futex waiters to
-                     leave (and one of those waiters might be the one we stole
-                     the signal from, which cause it to block using the
-                     futex).  */
-                  futex_wake (cond->__data.__g_signals + g, 1, private);
-                  break;
-                }
-              /* TODO Back off.  */
-            }
-        }
-    }
-
  done:
 
   /* Confirm that we have been woken.  We do that before acquiring the mutex
From patchwork Sat Jan 16 20:49:48 2021
From: Malte Skarupke
To: libc-alpha@sourceware.org
Cc: triegel@redhat.com, malteskarupke@fastmail.fm
Subject: [PATCH 3/5] nptl: Optimization by not incrementing wrefs in
 pthread_cond_wait
Date: Sat, 16 Jan 2021 15:49:48 -0500
Message-Id: <20210116204950.16434-3-malteskarupke@web.de>
In-Reply-To: <20210116204950.16434-1-malteskarupke@web.de>
References: <20210116204950.16434-1-malteskarupke@web.de>

After I broadened the scope of grefs in the first patch, it covers
mostly the same scope as wrefs, so the duplicate atomic
increment/decrement of wrefs is unnecessary.  This patch removes it.

The one exception is cancellation: the interaction between
__condvar_cleanup_waiting and pthread_cond_destroy is complicated and
requires both variables.  To preserve the existing behavior, wrefs is
now incremented/decremented in __condvar_cleanup_waiting instead.
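
As a side note, the "are there any waiters?" check on the signaler side
now looks at both group reference counts instead of __wrefs.  A sketch
of the idea (illustration only, reusing the made-up toy_cond type from
the sketch in the first patch; the real code is in the
pthread_cond_signal and pthread_cond_broadcast hunks below):

#include <stdatomic.h>
#include <stdbool.h>

static bool
toy_has_waiters (struct toy_cond *c)
{
  /* Relaxed MO suffices for the same reason it did for the old relaxed
     load of __wrefs; shift off the LSB flag bit of each count.  */
  unsigned int grefs0 = atomic_load_explicit (&c->g_refs[0],
                                              memory_order_relaxed);
  unsigned int grefs1 = atomic_load_explicit (&c->g_refs[1],
                                              memory_order_relaxed);
  return (grefs0 >> 1) != 0 || (grefs1 >> 1) != 0;
}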
""" + num_readers_g0 = self.grefs[0] >> PTHREAD_COND_GREFS_SHIFT + num_readers_g1 = self.grefs[1] >> PTHREAD_COND_GREFS_SHIFT self.values.append(('Threads known to still execute a wait function', - self.wrefs >> PTHREAD_COND_WREFS_SHIFT)) + num_readers_g0 + num_readers_g1)) def read_attributes(self): """Read the condvar's attributes.""" diff --git a/nptl/nptl_lock_constants.pysym b/nptl/nptl_lock_constants.pysym index ade4398e0c..2141cfa1f0 100644 --- a/nptl/nptl_lock_constants.pysym +++ b/nptl/nptl_lock_constants.pysym @@ -50,7 +50,7 @@ PTHREAD_COND_SHARED_MASK __PTHREAD_COND_SHARED_MASK PTHREAD_COND_CLOCK_MONOTONIC_MASK __PTHREAD_COND_CLOCK_MONOTONIC_MASK COND_CLOCK_BITS -- These values are hardcoded: -PTHREAD_COND_WREFS_SHIFT 3 +PTHREAD_COND_GREFS_SHIFT 1 -- Rwlock attributes PTHREAD_RWLOCK_PREFER_READER_NP diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c index 8d887aab93..e10432ce7c 100644 --- a/nptl/pthread_cond_broadcast.c +++ b/nptl/pthread_cond_broadcast.c @@ -40,10 +40,12 @@ __pthread_cond_broadcast (pthread_cond_t *cond) { LIBC_PROBE (cond_broadcast, 1, cond); - unsigned int wrefs = atomic_load_relaxed (&cond->__data.__wrefs); - if (wrefs >> 3 == 0) + unsigned int grefs0 = atomic_load_relaxed (cond->__data.__g_refs); + unsigned int grefs1 = atomic_load_relaxed (cond->__data.__g_refs + 1); + if ((grefs0 >> 1) == 0 && (grefs1 >> 1) == 0) return 0; - int private = __condvar_get_private (wrefs); + unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs); + int private = __condvar_get_private (flags); __condvar_acquire_lock (cond, private); diff --git a/nptl/pthread_cond_destroy.c b/nptl/pthread_cond_destroy.c index 31034905d1..1c27385f89 100644 --- a/nptl/pthread_cond_destroy.c +++ b/nptl/pthread_cond_destroy.c @@ -37,22 +37,35 @@ signal or broadcast calls. Thus, we can assume that all waiters that are still accessing the condvar have been woken. We wait until they have confirmed to have woken up by - decrementing __wrefs. */ + decrementing __g_refs. */ int __pthread_cond_destroy (pthread_cond_t *cond) { LIBC_PROBE (cond_destroy, 1, cond); - /* Set the wake request flag. We could also spin, but destruction that is - concurrent with still-active waiters is probably neither common nor - performance critical. Acquire MO to synchronize with waiters confirming - that they finished. */ - unsigned int wrefs = atomic_fetch_or_acquire (&cond->__data.__wrefs, 4); - int private = __condvar_get_private (wrefs); + unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs); + int private = __condvar_get_private (flags); + for (unsigned g = 0; g < 2; ++g) + { + while (true) + { + /* Set the wake request flag. We could also spin, but destruction that is + concurrent with still-active waiters is probably neither common nor + performance critical. Acquire MO to synchronize with waiters confirming + that they finished. */ + unsigned r = atomic_fetch_or_acquire (cond->__data.__g_refs + g, 1) | 1; + if (r == 1) + break; + futex_wait_simple (cond->__data.__g_refs + g, r, private); + } + } + + /* Same as above, except to synchronize with canceled threads. This wake + flag never gets cleared, so it's enough to set it once. */ + unsigned int wrefs = atomic_fetch_or_acquire (&cond->__data.__wrefs, 4) | 4; while (wrefs >> 3 != 0) { futex_wait_simple (&cond->__data.__wrefs, wrefs, private); - /* See above. */ wrefs = atomic_load_acquire (&cond->__data.__wrefs); } /* The memory the condvar occupies can now be reused. 
*/ diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c index 4281ad4d3b..0cd534cc40 100644 --- a/nptl/pthread_cond_signal.c +++ b/nptl/pthread_cond_signal.c @@ -39,10 +39,12 @@ __pthread_cond_signal (pthread_cond_t *cond) /* First check whether there are waiters. Relaxed MO is fine for that for the same reasons that relaxed MO is fine when observing __wseq (see below). */ - unsigned int wrefs = atomic_load_relaxed (&cond->__data.__wrefs); - if (wrefs >> 3 == 0) + unsigned int grefs0 = atomic_load_relaxed (cond->__data.__g_refs); + unsigned int grefs1 = atomic_load_relaxed (cond->__data.__g_refs + 1); + if ((grefs0 >> 1) == 0 && (grefs1 >> 1) == 0) return 0; - int private = __condvar_get_private (wrefs); + unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs); + int private = __condvar_get_private (flags); __condvar_acquire_lock (cond, private); diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c index 7b825f81df..0ee0247874 100644 --- a/nptl/pthread_cond_wait.c +++ b/nptl/pthread_cond_wait.c @@ -43,19 +43,6 @@ struct _condvar_cleanup_buffer }; -/* Decrease the waiter reference count. */ -static void -__condvar_confirm_wakeup (pthread_cond_t *cond, int private) -{ - /* If destruction is pending (i.e., the wake-request flag is nonzero) and we - are the last waiter (prior value of __wrefs was 1 << 3), then wake any - threads waiting in pthread_cond_destroy. Release MO to synchronize with - these threads. Don't bother clearing the wake-up request flag. */ - if ((atomic_fetch_add_release (&cond->__data.__wrefs, -8) >> 2) == 3) - futex_wake (&cond->__data.__wrefs, INT_MAX, private); -} - - /* Cancel waiting after having registered as a waiter previously. SEQ is our position and G is our group index. The goal of cancellation is to make our group smaller if that is still @@ -81,7 +68,7 @@ __condvar_cancel_waiting (pthread_cond_t *cond, uint64_t seq, unsigned int g, { bool consumed_signal = false; - /* No deadlock with group switching is possible here because we have do + /* No deadlock with group switching is possible here because we do not hold a reference on the group. */ __condvar_acquire_lock (cond, private); @@ -172,6 +159,14 @@ __condvar_cleanup_waiting (void *arg) pthread_cond_t *cond = cbuffer->cond; unsigned g = cbuffer->wseq & 1; + /* Normally we are not allowed to touch cond any more after calling + __condvar_dec_grefs, because pthread_cond_destroy looks at __g_refs to + determine when all waiters have woken. Since we will do more work in this + function, we are using an extra channel to communicate to + pthread_cond_destroy that it is not allowed to finish yet: We increment + the fourth bit on __wrefs. Relaxed MO is enough. The synchronization + happens because __condvar_dec_grefs uses release MO. */ + atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8); __condvar_dec_grefs (cond, g, cbuffer->private); __condvar_cancel_waiting (cond, cbuffer->wseq >> 1, g, cbuffer->private); @@ -183,7 +178,12 @@ __condvar_cleanup_waiting (void *arg) conservatively. */ futex_wake (cond->__data.__g_signals + g, 1, cbuffer->private); - __condvar_confirm_wakeup (cond, cbuffer->private); + /* If destruction is pending (i.e., the wake-request flag is nonzero) and we + are the last waiter (prior value of __wrefs was 1 << 3), then wake any + threads waiting in pthread_cond_destroy. Release MO to synchronize with + these threads. Don't bother clearing the wake-up request flag. 
*/ + if ((atomic_fetch_add_release (&cond->__data.__wrefs, -8) >> 2) == 3) + futex_wake (&cond->__data.__wrefs, INT_MAX, cbuffer->private); /* XXX If locking the mutex fails, should we just stop execution? This might be better than silently ignoring the error. */ @@ -287,20 +287,21 @@ __condvar_cleanup_waiting (void *arg) __g1_orig_size: Initial size of G1 * The two least-significant bits represent the condvar-internal lock. * Only accessed while having acquired the condvar-internal lock. - __wrefs: Waiter reference counter. + __wrefs: Flags and count of waiters who called pthread_cancel. * Bit 2 is true if waiters should run futex_wake when they remove the last reference. pthread_cond_destroy uses this as futex word. * Bit 1 is the clock ID (0 == CLOCK_REALTIME, 1 == CLOCK_MONOTONIC). * Bit 0 is true iff this is a process-shared condvar. - * Simple reference count used by both waiters and pthread_cond_destroy. - (If the format of __wrefs is changed, update nptl_lock_constants.pysym - and the pretty printers.) + * Simple reference count used by __condvar_cleanup_waiting and pthread_cond_destroy. + (If the format of __wrefs is changed, update the pretty printers.) For each of the two groups, we have: __g_refs: Futex waiter reference count. * LSB is true if waiters should run futex_wake when they remove the last reference. * Reference count used by waiters concurrently with signalers that have acquired the condvar-internal lock. + (If the format of __g_refs is changed, update nptl_lock_constants.pysym + and the pretty printers.) __g_signals: The number of signals that can still be consumed. * Used as a futex word by waiters. Used concurrently by waiters and signalers. @@ -409,9 +410,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, synchronize with the dummy read-modify-write in __condvar_quiesce_and_switch_g1 if we read from that. */ atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2); - /* Increase the waiter reference count. Relaxed MO is sufficient because - we only need to synchronize when decrementing the reference count. */ - unsigned int flags = atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8); + unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs); int private = __condvar_get_private (flags); /* Now that we are registered as a waiter, we can release the mutex. @@ -425,7 +424,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, if (__glibc_unlikely (err != 0)) { __condvar_cancel_waiting (cond, seq, g, private); - __condvar_confirm_wakeup (cond, private); __condvar_dec_grefs (cond, g, private); return err; } @@ -530,7 +528,6 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex, /* Confirm that we have been woken. We do that before acquiring the mutex to allow for execution of pthread_cond_destroy while having acquired the mutex. */ - __condvar_confirm_wakeup (cond, private); __condvar_dec_grefs (cond, g, private); /* Woken up; now re-acquire the mutex. 
From patchwork Sat Jan 16 20:49:49 2021
From: Malte Skarupke
To: libc-alpha@sourceware.org
Cc: triegel@redhat.com, malteskarupke@fastmail.fm
Subject: [PATCH 4/5] nptl: Make test-cond-printers check the number of waiters
Date: Sat, 16 Jan 2021 15:49:49 -0500
Message-Id: <20210116204950.16434-4-malteskarupke@web.de>
In-Reply-To: <20210116204950.16434-1-malteskarupke@web.de>
References: <20210116204950.16434-1-malteskarupke@web.de>

My last change altered the semantics of how the number of waiters on a
condition variable is determined.  The existing test only verified that
the printers print something; it didn't cover the case where a thread
is sleeping on the condition variable.  This patch changes the test to
check that the correct number of waiters is printed, as a double-check
on the previous patch.
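
For reference, the arithmetic behind the expected values (as set up in
the previous patch): PTHREAD_COND_GREFS_SHIFT is 1, and group
references are counted in units of 2 because the LSB of __g_refs[g] is
the wake-request flag.  One thread sleeping in pthread_cond_wait
therefore means __g_refs[g] == 2 for its group, and the printer reports
(2 >> 1) + (0 >> 1) == 1 at the 'Test about to signal' breakpoint, and
0 again at 'Test status (destroyed)'.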
---
 nptl/test-cond-printers.c  | 56 +++++++++++++++++++++++++++++++++-----
 nptl/test-cond-printers.py |  5 ++++
 2 files changed, 54 insertions(+), 7 deletions(-)

--
2.17.1

diff --git a/nptl/test-cond-printers.c b/nptl/test-cond-printers.c
index 4b6db831f9..603a7cccee 100644
--- a/nptl/test-cond-printers.c
+++ b/nptl/test-cond-printers.c
@@ -26,7 +26,14 @@
 #define PASS 0
 #define FAIL 1
 
-static int test_status_destroyed (pthread_cond_t *condvar);
+static int test_status (pthread_cond_t *condvar);
+
+typedef struct
+{
+  pthread_mutex_t *mutex;
+  pthread_cond_t *condvar;
+  int *wait_thread_asleep;
+} test_state;
 
 int
 main (void)
@@ -36,22 +43,57 @@ main (void)
   int result = FAIL;
 
   if (pthread_condattr_init (&attr) == 0
-      && test_status_destroyed (&condvar) == PASS)
+      && test_status (&condvar) == PASS)
     result = PASS;
 
   /* Else, one of the pthread_cond* functions failed.  */
 
   return result;
 }
 
+static void *
+wait (void *arg)
+{
+  test_state *state = (test_state *)arg;
+  void *result = PASS;
+  if (pthread_mutex_lock (state->mutex) != 0)
+    result = (void *)FAIL;
+  *state->wait_thread_asleep = 1;
+  if (pthread_cond_signal (state->condvar) != 0)
+    result = (void *)FAIL;
+  if (pthread_cond_wait (state->condvar, state->mutex) != 0)
+    result = (void *)FAIL;
+  if (pthread_mutex_unlock (state->mutex) != 0)
+    result = (void *)FAIL;
+  return result;
+}
+
 /* Initializes CONDVAR, then destroys it.  */
 static int
-test_status_destroyed (pthread_cond_t *condvar)
+test_status (pthread_cond_t *condvar)
 {
-  int result = FAIL;
+  int result = PASS;
 
-  if (pthread_cond_init (condvar, NULL) == 0
-      && pthread_cond_destroy (condvar) == 0)
-    result = PASS; /* Test status (destroyed).  */
+  pthread_mutex_t mutex;
+  result |= pthread_mutex_init (&mutex, NULL);
+  result |= pthread_cond_init (condvar, NULL);
+  int wait_thread_asleep = 0;
+  test_state state = { &mutex, condvar, &wait_thread_asleep };
+  result |= pthread_mutex_lock (&mutex);
+  pthread_t thread;
+  result |= pthread_create (&thread, NULL, wait, &state);
+  while (!wait_thread_asleep)
+    {
+      result |= pthread_cond_wait (condvar, &mutex);
+    }
+  result |= pthread_cond_signal (condvar); /* Test about to signal */
+  result |= pthread_mutex_unlock (&mutex);
+  result |= pthread_cond_destroy (condvar);
+  void *retval = NULL;
+  result |= pthread_join (thread, &retval); /* Test status (destroyed).  */
+  result |= pthread_mutex_destroy (&mutex);
+  result = result ? FAIL : PASS;
+  if (retval != NULL)
+    result = FAIL;
 
   return result;
 }
diff --git a/nptl/test-cond-printers.py b/nptl/test-cond-printers.py
index 38e2da4269..ab12218802 100644
--- a/nptl/test-cond-printers.py
+++ b/nptl/test-cond-printers.py
@@ -33,6 +33,11 @@ try:
     var = 'condvar'
     to_string = 'pthread_cond_t'
 
+    break_at(test_source, 'Test about to signal')
+    continue_cmd() # Go to test_status_destroyed
+    test_printer(var, to_string, {'Threads known to still execute a wait function': '1'})
+
     break_at(test_source, 'Test status (destroyed)')
     continue_cmd() # Go to test_status_destroyed
     test_printer(var, to_string, {'Threads known to still execute a wait function': '0'})

From patchwork Sat Jan 16 20:49:50 2021
From: Malte Skarupke
To: libc-alpha@sourceware.org
Cc: triegel@redhat.com, malteskarupke@fastmail.fm
Subject: [PATCH 5/5] nptl: Rename __wrefs to __flags because its meaning has
 changed
Date: Sat, 16 Jan 2021 15:49:50 -0500
Message-Id: <20210116204950.16434-5-malteskarupke@web.de>
In-Reply-To: <20210116204950.16434-1-malteskarupke@web.de>
References: <20210116204950.16434-1-malteskarupke@web.de>

After removing the increment/decrement of wrefs in pthread_cond_wait,
the variable no longer represents the number of waiters, so the name
"wrefs" is no longer accurate.  It is still used as a reference count
in one edge case, the interaction between pthread_cancel and
pthread_cond_destroy, but that edge case shouldn't be what the variable
is named after.  The name __flags is more appropriate: in the common
case the variable just stores the flags of the condition variable.
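
For orientation, the resulting bit layout, as a sketch of my reading of
the comment block updated below.  The TOY_* names and toy_cancel_refs
are made up for illustration; only __PTHREAD_COND_SHARED_MASK and
__PTHREAD_COND_CLOCK_MONOTONIC_MASK exist as named masks in glibc:

/* Bit layout of __flags (formerly __wrefs).  */
#define TOY_SHARED_MASK          1u /* Bit 0: process-shared condvar?  */
#define TOY_CLOCK_MONOTONIC_MASK 2u /* Bit 1: clock ID.  */
#define TOY_DESTROY_WAKE_FLAG    4u /* Bit 2: pthread_cond_destroy's
                                       wake-request flag / futex word.  */
#define TOY_CANCEL_REFS_SHIFT    3  /* Bits 3+: count of canceled waiters
                                       still in __condvar_cleanup_waiting.  */

static unsigned int
toy_cancel_refs (unsigned int flags)
{
  /* __condvar_cleanup_waiting adds 8 (1 << TOY_CANCEL_REFS_SHIFT), and
     pthread_cond_destroy waits until this count drops to zero.  */
  return flags >> TOY_CANCEL_REFS_SHIFT;
}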
---
 nptl/nptl-printers.py                   |  6 +++---
 nptl/pthread_cond_broadcast.c           |  2 +-
 nptl/pthread_cond_common.c              |  4 ++--
 nptl/pthread_cond_destroy.c             |  8 ++++----
 nptl/pthread_cond_init.c                |  4 ++--
 nptl/pthread_cond_signal.c              |  2 +-
 nptl/pthread_cond_wait.c                | 16 ++++++++--------
 nptl/tst-cond22.c                       |  4 ++--
 sysdeps/nptl/bits/thread-shared-types.h |  2 +-
 9 files changed, 24 insertions(+), 24 deletions(-)

--
2.17.1

diff --git a/nptl/nptl-printers.py b/nptl/nptl-printers.py
index 1a404befe5..0e37d8b4dd 100644
--- a/nptl/nptl-printers.py
+++ b/nptl/nptl-printers.py
@@ -312,7 +312,7 @@ class ConditionVariablePrinter(object):
         """
 
         data = cond['__data']
-        self.wrefs = data['__wrefs']
+        self.flags = data['__flags']
         self.grefs = data['__g_refs']
         self.values = []
 
@@ -359,12 +359,12 @@ class ConditionVariablePrinter(object):
     def read_attributes(self):
         """Read the condvar's attributes."""
 
-        if (self.wrefs & PTHREAD_COND_CLOCK_MONOTONIC_MASK) != 0:
+        if (self.flags & PTHREAD_COND_CLOCK_MONOTONIC_MASK) != 0:
             self.values.append(('Clock ID', 'CLOCK_MONOTONIC'))
         else:
             self.values.append(('Clock ID', 'CLOCK_REALTIME'))
 
-        if (self.wrefs & PTHREAD_COND_SHARED_MASK) != 0:
+        if (self.flags & PTHREAD_COND_SHARED_MASK) != 0:
             self.values.append(('Shared', 'Yes'))
         else:
             self.values.append(('Shared', 'No'))
diff --git a/nptl/pthread_cond_broadcast.c b/nptl/pthread_cond_broadcast.c
index e10432ce7c..cbb249b35b 100644
--- a/nptl/pthread_cond_broadcast.c
+++ b/nptl/pthread_cond_broadcast.c
@@ -44,7 +44,7 @@ __pthread_cond_broadcast (pthread_cond_t *cond)
   unsigned int grefs1 = atomic_load_relaxed (cond->__data.__g_refs + 1);
   if ((grefs0 >> 1) == 0 && (grefs1 >> 1) == 0)
     return 0;
-  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
+  unsigned int flags = atomic_load_relaxed (&cond->__data.__flags);
   int private = __condvar_get_private (flags);
 
   __condvar_acquire_lock (cond, private);
diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c
index 3251c7f0ec..22270e8805 100644
--- a/nptl/pthread_cond_common.c
+++ b/nptl/pthread_cond_common.c
@@ -20,7 +20,7 @@
 #include
 #include
 
-/* We need 3 least-significant bits on __wrefs for something else.  */
+/* We need 3 least-significant bits on __flags for something else.  */
 #define __PTHREAD_COND_MAX_GROUP_SIZE ((unsigned) 1 << 29)
 
 #if __HAVE_64B_ATOMICS == 1
@@ -318,7 +318,7 @@ __condvar_set_orig_size (pthread_cond_t *cond, unsigned int size)
     atomic_store_relaxed (&cond->__data.__g1_orig_size, (size << 2) | 2);
 }
 
-/* Returns FUTEX_SHARED or FUTEX_PRIVATE based on the provided __wrefs
+/* Returns FUTEX_SHARED or FUTEX_PRIVATE based on the provided __flags
    value.  */
 static int __attribute__ ((unused))
 __condvar_get_private (int flags)
diff --git a/nptl/pthread_cond_destroy.c b/nptl/pthread_cond_destroy.c
index 1c27385f89..3dfe4a48db 100644
--- a/nptl/pthread_cond_destroy.c
+++ b/nptl/pthread_cond_destroy.c
@@ -43,7 +43,7 @@ __pthread_cond_destroy (pthread_cond_t *cond)
 {
   LIBC_PROBE (cond_destroy, 1, cond);
 
-  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
+  unsigned int flags = atomic_load_relaxed (&cond->__data.__flags);
   int private = __condvar_get_private (flags);
   for (unsigned g = 0; g < 2; ++g)
     {
@@ -62,11 +62,11 @@ __pthread_cond_destroy (pthread_cond_t *cond)
 
   /* Same as above, except to synchronize with canceled threads.  This wake
      flag never gets cleared, so it's enough to set it once.  */
-  unsigned int wrefs = atomic_fetch_or_acquire (&cond->__data.__wrefs, 4) | 4;
+  unsigned int wrefs = atomic_fetch_or_acquire (&cond->__data.__flags, 4) | 4;
   while (wrefs >> 3 != 0)
     {
-      futex_wait_simple (&cond->__data.__wrefs, wrefs, private);
-      wrefs = atomic_load_acquire (&cond->__data.__wrefs);
+      futex_wait_simple (&cond->__data.__flags, wrefs, private);
+      wrefs = atomic_load_acquire (&cond->__data.__flags);
     }
   /* The memory the condvar occupies can now be reused.  */
   return 0;
diff --git a/nptl/pthread_cond_init.c b/nptl/pthread_cond_init.c
index 595b1b3528..3031d52a42 100644
--- a/nptl/pthread_cond_init.c
+++ b/nptl/pthread_cond_init.c
@@ -37,13 +37,13 @@ __pthread_cond_init (pthread_cond_t *cond, const pthread_condattr_t *cond_attr)
 
   /* Iff not equal to ~0l, this is a PTHREAD_PROCESS_PRIVATE condvar.  */
   if (icond_attr != NULL && (icond_attr->value & 1) != 0)
-    cond->__data.__wrefs |= __PTHREAD_COND_SHARED_MASK;
+    cond->__data.__flags |= __PTHREAD_COND_SHARED_MASK;
   int clockid = (icond_attr != NULL
                  ? ((icond_attr->value >> 1) & ((1 << COND_CLOCK_BITS) - 1))
                  : CLOCK_REALTIME);
   /* If 0, CLOCK_REALTIME is used; CLOCK_MONOTONIC otherwise.  */
   if (clockid != CLOCK_REALTIME)
-    cond->__data.__wrefs |= __PTHREAD_COND_CLOCK_MONOTONIC_MASK;
+    cond->__data.__flags |= __PTHREAD_COND_CLOCK_MONOTONIC_MASK;
 
   LIBC_PROBE (cond_init, 2, cond, cond_attr);
diff --git a/nptl/pthread_cond_signal.c b/nptl/pthread_cond_signal.c
index 0cd534cc40..979c9d72d5 100644
--- a/nptl/pthread_cond_signal.c
+++ b/nptl/pthread_cond_signal.c
@@ -43,7 +43,7 @@ __pthread_cond_signal (pthread_cond_t *cond)
   unsigned int grefs1 = atomic_load_relaxed (cond->__data.__g_refs + 1);
   if ((grefs0 >> 1) == 0 && (grefs1 >> 1) == 0)
     return 0;
-  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
+  unsigned int flags = atomic_load_relaxed (&cond->__data.__flags);
   int private = __condvar_get_private (flags);
 
   __condvar_acquire_lock (cond, private);
diff --git a/nptl/pthread_cond_wait.c b/nptl/pthread_cond_wait.c
index 0ee0247874..0993728e5d 100644
--- a/nptl/pthread_cond_wait.c
+++ b/nptl/pthread_cond_wait.c
@@ -164,9 +164,9 @@ __condvar_cleanup_waiting (void *arg)
      determine when all waiters have woken.  Since we will do more work in this
      function, we are using an extra channel to communicate to
      pthread_cond_destroy that it is not allowed to finish yet: We increment
-     the fourth bit on __wrefs.  Relaxed MO is enough.  The synchronization
+     the fourth bit on __flags.  Relaxed MO is enough.  The synchronization
      happens because __condvar_dec_grefs uses release MO.  */
-  atomic_fetch_add_relaxed (&cond->__data.__wrefs, 8);
+  atomic_fetch_add_relaxed (&cond->__data.__flags, 8);
   __condvar_dec_grefs (cond, g, cbuffer->private);
 
   __condvar_cancel_waiting (cond, cbuffer->wseq >> 1, g, cbuffer->private);
@@ -182,8 +182,8 @@ __condvar_cleanup_waiting (void *arg)
      are the last waiter (prior value of __wrefs was 1 << 3), then wake any
      threads waiting in pthread_cond_destroy.  Release MO to synchronize with
      these threads.  Don't bother clearing the wake-up request flag.  */
-  if ((atomic_fetch_add_release (&cond->__data.__wrefs, -8) >> 2) == 3)
-    futex_wake (&cond->__data.__wrefs, INT_MAX, cbuffer->private);
+  if ((atomic_fetch_add_release (&cond->__data.__flags, -8) >> 2) == 3)
+    futex_wake (&cond->__data.__flags, INT_MAX, cbuffer->private);
 
   /* XXX If locking the mutex fails, should we just stop execution?  This
      might be better than silently ignoring the error.  */
@@ -287,13 +287,13 @@ __condvar_cleanup_waiting (void *arg)
    __g1_orig_size: Initial size of G1
      * The two least-significant bits represent the condvar-internal lock.
      * Only accessed while having acquired the condvar-internal lock.
-   __wrefs: Flags and count of waiters who called pthread_cancel.
+   __flags: Flags and count of waiters who called pthread_cancel.
      * Bit 2 is true if waiters should run futex_wake when they remove the
        last reference.  pthread_cond_destroy uses this as futex word.
      * Bit 1 is the clock ID (0 == CLOCK_REALTIME, 1 == CLOCK_MONOTONIC).
      * Bit 0 is true iff this is a process-shared condvar.
      * Simple reference count used by __condvar_cleanup_waiting and
        pthread_cond_destroy.
-     (If the format of __wrefs is changed, update the pretty printers.)
+     (If the format of __flags is changed, update the pretty printers.)
    For each of the two groups, we have:
    __g_refs: Futex waiter reference count.
      * LSB is true if waiters should run futex_wake when they remove the
@@ -410,7 +410,7 @@ __pthread_cond_wait_common (pthread_cond_t *cond, pthread_mutex_t *mutex,
      synchronize with the dummy read-modify-write in
      __condvar_quiesce_and_switch_g1 if we read from that.  */
   atomic_fetch_add_acquire (cond->__data.__g_refs + g, 2);
-  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
+  unsigned int flags = atomic_load_relaxed (&cond->__data.__flags);
   int private = __condvar_get_private (flags);
 
   /* Now that we are registered as a waiter, we can release the mutex.
@@ -558,7 +558,7 @@ __pthread_cond_timedwait64 (pthread_cond_t *cond, pthread_mutex_t *mutex,
 
   /* Relaxed MO is suffice because clock ID bit is only modified
      in condition creation.  */
-  unsigned int flags = atomic_load_relaxed (&cond->__data.__wrefs);
+  unsigned int flags = atomic_load_relaxed (&cond->__data.__flags);
   clockid_t clockid = (flags & __PTHREAD_COND_CLOCK_MONOTONIC_MASK)
                      ? CLOCK_MONOTONIC : CLOCK_REALTIME;
   return __pthread_cond_wait_common (cond, mutex, clockid, abstime);
diff --git a/nptl/tst-cond22.c b/nptl/tst-cond22.c
index 64f19ea0a5..e1338ebf94 100644
--- a/nptl/tst-cond22.c
+++ b/nptl/tst-cond22.c
@@ -110,7 +110,7 @@ do_test (void)
             c.__data.__wseq, c.__data.__g1_start,
             c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0],
             c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1],
-            c.__data.__g1_orig_size, c.__data.__wrefs);
+            c.__data.__g1_orig_size, c.__data.__flags);
 
   if (pthread_create (&th, NULL, tf, (void *) 1l) != 0)
     {
@@ -153,7 +153,7 @@ do_test (void)
             c.__data.__wseq, c.__data.__g1_start,
             c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0],
             c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1],
-            c.__data.__g1_orig_size, c.__data.__wrefs);
+            c.__data.__g1_orig_size, c.__data.__flags);
 
   return status;
 }
diff --git a/sysdeps/nptl/bits/thread-shared-types.h b/sysdeps/nptl/bits/thread-shared-types.h
index fbbdd0bb36..84cedfcaa0 100644
--- a/sysdeps/nptl/bits/thread-shared-types.h
+++ b/sysdeps/nptl/bits/thread-shared-types.h
@@ -112,7 +112,7 @@ struct __pthread_cond_s
   unsigned int __g_refs[2] __LOCK_ALIGNMENT;
   unsigned int __g_size[2];
   unsigned int __g1_orig_size;
-  unsigned int __wrefs;
+  unsigned int __flags;
   unsigned int __g_signals[2];
 };