From patchwork Thu Jul 10 18:27:36 2014
X-Patchwork-Submitter: Roland McGrath
X-Patchwork-Id: 2001
From: Roland McGrath
To: "GNU C. Library"
Subject: [COMMITTED PATCH] Get rid of lll_robust_dead.
Message-Id: <20140710182736.AA5CE2C398A@topped-with-meat.com>
Date: Thu, 10 Jul 2014 11:27:36 -0700 (PDT)

As with lll_robust_trylock, there is no meaningful variation across
machines (actually no semantic variation at all, whereas lll_robust_trylock
had a bug on most machines) and the generic code's sole caller already
assumes exactly what it must do.  On i686 and x86_64, which had
hand-written assembly versions, this changed no generated code (except
for an assertion line number).  On other machines, the chance of any
actual change is even smaller.

Thanks,
Roland

        * nptl/pthread_create.c (start_thread): Use atomic_or and
        lll_futex_wake directly rather than lll_robust_dead.
        * sysdeps/unix/sysv/linux/aarch64/lowlevellock.h
        (lll_robust_dead): Macro removed.
        * sysdeps/unix/sysv/linux/alpha/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/arm/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/hppa/nptl/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/i386/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/ia64/nptl/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/m68k/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/microblaze/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/mips/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/powerpc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/s390/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sh/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/sparc/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/tile/lowlevellock.h: Likewise.
        * sysdeps/unix/sysv/linux/x86_64/lowlevellock.h: Likewise.

--- a/nptl/pthread_create.c
+++ b/nptl/pthread_create.c
@@ -390,7 +390,8 @@ start_thread (void *arg)
 # endif
           this->__list.__next = NULL;

-          lll_robust_dead (this->__lock, /* XYZ */ LLL_SHARED);
+          atomic_or (&this->__lock, FUTEX_OWNER_DIED);
+          lll_futex_wake (this->__lock, 1, /* XYZ */ LLL_SHARED);
         }
       while (robust != (void *) &pd->robust_head);
     }
--- a/sysdeps/unix/sysv/linux/aarch64/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/aarch64/lowlevellock.h
@@ -111,15 +111,6 @@
     __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/alpha/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/alpha/lowlevellock.h
@@ -113,15 +113,6 @@
     INTERNAL_SYSCALL_ERROR_P (__ret, __err)? -__ret : __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/arm/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/arm/lowlevellock.h
@@ -110,15 +110,6 @@
     __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/hppa/nptl/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/hppa/nptl/lowlevellock.h
@@ -144,15 +144,6 @@ typedef int lll_lock_t;
     __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_wake_unlock(futexp, nr_wake, nr_wake2, futexp2, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/i386/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/i386/lowlevellock.h
@@ -377,21 +377,6 @@ extern int __lll_timedlock_elision (int *futex, short *adapt_count,
     })


-#define lll_robust_dead(futex, private) \
-  (void) \
-    ({ int __ignore; \
-       register int _nr asm ("edx") = 1; \
-       __asm __volatile (LOCK_INSTR "orl %5, (%2)\n\t" \
-                         LLL_EBX_LOAD \
-                         LLL_ENTER_KERNEL \
-                         LLL_EBX_LOAD \
-                         : "=a" (__ignore) \
-                         : "0" (SYS_futex), LLL_EBX_REG (&(futex)), \
-                           "c" (__lll_private_flag (FUTEX_WAKE, private)), \
-                           "d" (_nr), "i" (FUTEX_OWNER_DIED), \
-                           "i" (offsetof (tcbhead_t, sysinfo))); \
-    })
-
 #define lll_islocked(futex) \
   (futex != LLL_LOCK_INITIALIZER)

--- a/sysdeps/unix/sysv/linux/ia64/nptl/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/ia64/nptl/lowlevellock.h
@@ -110,16 +110,6 @@
     _r10 == -1 ? -_retval : _retval; \
   })

-#define lll_robust_dead(futexv, private) \
-do \
-  { \
-    int *__futexp = &(futexv); \
-    atomic_or (__futexp, FUTEX_OWNER_DIED); \
-    DO_INLINE_SYSCALL(futex, 3, (long) __futexp, \
-                      __lll_private_flag (FUTEX_WAKE, private), 1); \
-  } \
-while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(ftx, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/m68k/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/m68k/lowlevellock.h
@@ -112,15 +112,6 @@
     __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/microblaze/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/microblaze/lowlevellock.h
@@ -112,15 +112,6 @@
     __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/mips/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/mips/lowlevellock.h
@@ -110,15 +110,6 @@
     INTERNAL_SYSCALL_ERROR_P (__ret, __err) ? -__ret : __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/powerpc/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/powerpc/lowlevellock.h
@@ -114,18 +114,6 @@
     INTERNAL_SYSCALL_ERROR_P (__ret, __err) ? -__ret : __ret; \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      INTERNAL_SYSCALL_DECL (__err); \
-      int *__futexp = &(futexv); \
- \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      INTERNAL_SYSCALL (futex, __err, 4, __futexp, \
-                        __lll_private_flag (FUTEX_WAKE, private), 1, 0); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/s390/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/s390/lowlevellock.h
@@ -107,16 +107,6 @@
                        (nr), 0); \
   })


-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
- \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
--- a/sysdeps/unix/sysv/linux/sh/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/sh/lowlevellock.h
@@ -301,21 +301,6 @@ extern int __lll_unlock_wake (int *__futex, int private) attribute_hidden;
     if (__result) \
       __lll_unlock_wake (__futex, private); })

-#define lll_robust_dead(futex, private) \
-  (void) ({ int __ignore, *__futex = &(futex); \
-            __asm __volatile ("\
-       .align 2\n\
-       mova 1f,r0\n\
-       mov r15,r1\n\
-       mov #-6,r15\n\
-    0: mov.l @%1,%0\n\
-       or %2,%0\n\
-       mov.l %0,@%1\n\
-    1: mov r1,r15"\
-            : "=&r" (__ignore) : "r" (__futex), "r" (FUTEX_OWNER_DIED) \
-            : "r0", "r1", "memory"); \
-            lll_futex_wake (__futex, 1, private); })
-
 # ifdef NEED_SYSCALL_INST_PAD
 # define SYSCALL_WITH_INST_PAD "\
        trapa #0x14; or r0,r0; or r0,r0; or r0,r0; or r0,r0; or r0,r0"
--- a/sysdeps/unix/sysv/linux/sparc/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/sparc/lowlevellock.h
@@ -132,15 +132,6 @@ extern void __cpu_relax (void);
     INTERNAL_SYSCALL_ERROR_P (__ret, __err); \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #ifdef __sparc32_atomic_do_lock
 /* Avoid FUTEX_WAKE_OP if supporting pre-v9 CPUs.  */
--- a/sysdeps/unix/sysv/linux/tile/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/tile/lowlevellock.h
@@ -109,15 +109,6 @@
                        (nr), 0); \
   })

-#define lll_robust_dead(futexv, private) \
-  do \
-    { \
-      int *__futexp = &(futexv); \
-      atomic_or (__futexp, FUTEX_OWNER_DIED); \
-      lll_futex_wake (__futexp, 1, private); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(futexp, nr_wake, nr_move, mutex, val, private) \
   ({ \
--- a/sysdeps/unix/sysv/linux/x86_64/lowlevellock.h
+++ b/sysdeps/unix/sysv/linux/x86_64/lowlevellock.h
@@ -378,20 +378,6 @@ extern int __lll_timedlock_elision (int *futex, short *adapt_count,
     } \
   while (0)

-#define lll_robust_dead(futex, private) \
-  do \
-    { \
-      int ignore; \
-      __asm __volatile (LOCK_INSTR "orl %3, (%2)\n\t" \
-                        "syscall" \
-                        : "=m" (futex), "=a" (ignore) \
-                        : "D" (&(futex)), "i" (FUTEX_OWNER_DIED), \
-                          "S" (__lll_private_flag (FUTEX_WAKE, private)), \
-                          "1" (__NR_futex), "d" (1) \
-                        : "cx", "r11", "cc", "memory"); \
-    } \
-  while (0)
-
 /* Returns non-zero if error happened, zero if success.  */
 #define lll_futex_requeue(ftx, nr_wake, nr_move, mutex, val, private) \
   ({ int __res; \
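
For context on what the removed macro did: every port's lll_robust_dead
performed the same two steps that the patch now writes out inline in
start_thread: OR FUTEX_OWNER_DIED into the robust mutex's lock word, then
issue a FUTEX_WAKE for one waiter so it can observe the flag and recover
the mutex.  The sketch below is an illustration of that protocol using
public interfaces only; the helper name mark_owner_died_and_wake and the
raw syscall (SYS_futex, ...) call are assumptions made for the sketch, not
glibc internals (glibc uses its internal atomic_or and lll_futex_wake
macros, as the diff shows).

/* Illustration only, not glibc code: the robust-mutex owner-died
   protocol that lll_robust_dead implemented.  */

#include <linux/futex.h>      /* FUTEX_WAKE, FUTEX_OWNER_DIED */
#include <sys/syscall.h>      /* SYS_futex */
#include <unistd.h>           /* syscall */

static void
mark_owner_died_and_wake (int *futex_word)
{
  /* Step 1: set the owner-died bit in the lock word so whoever acquires
     the mutex next learns that the previous owner exited without
     unlocking.  (GCC/Clang builtin standing in for glibc's atomic_or.)  */
  __atomic_fetch_or (futex_word, FUTEX_OWNER_DIED, __ATOMIC_SEQ_CST);

  /* Step 2: wake one waiter blocked on this futex so it can take over
     the mutex.  A non-private wake, matching the LLL_SHARED argument in
     the patched call.  */
  syscall (SYS_futex, futex_word, FUTEX_WAKE, 1, NULL, NULL, 0);
}

A waiter woken this way finds FUTEX_OWNER_DIED set in the lock word, which
is why its pthread_mutex_lock on the robust mutex returns EOWNERDEAD and
gives it the chance to call pthread_mutex_consistent before going on.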