From patchwork Wed Sep 21 14:11:05 2016
X-Patchwork-Submitter: Adhemerval Zanella
X-Patchwork-Id: 15845
From: Adhemerval Zanella
To: libc-alpha@sourceware.org
Subject: [PATCH 1/2] nptl: Remove __ASSUME_SET_ROBUST_LIST
Date: Wed, 21 Sep 2016 11:11:05 -0300
Message-Id: <1474467066-26814-1-git-send-email-adhemerval.zanella@linaro.org>

This patch removes the __ASSUME_SET_ROBUST_LIST usage on nptl generic
code.
The set_robust_list availability is defined by '__set_robust_list_avail'
which is now defined regardless.  Its initial value is set to -1 and
defined to a positive value if both __NR_set_robust_list is defined and
the syscall returns correctly.

A subsequent patch is intended to remove the Linux definitions of
__ASSUME_SET_ROBUST_LIST.

Tested on x86_64.

	* nptl/nptl-init.c (set_robust_list_not_avail): Remove definition.
	(__pthread_initialize_minimal_internal): Set
	__set_robust_list_avail to 1 if syscall returns correctly.
	(__set_robust_list_avail): Define regardless if
	__ASSUME_SET_ROBUST_LIST is defined or not.
	* nptl/pthreadP.h (__set_robust_list_avail): Likewise.
	* nptl/pthread_create.c (START_THREAD_DEFN): Remove
	__ASSUME_SET_ROBUST_LIST usage.
	* nptl/pthread_mutex_init.c (__pthread_mutex_init): Likewise.
---
 nptl/nptl-init.c          | 12 +++---------
 nptl/pthreadP.h           |  2 --
 nptl/pthread_create.c     |  8 ++------
 nptl/pthread_mutex_init.c |  2 --
 4 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/nptl/nptl-init.c b/nptl/nptl-init.c
index bdbdfed..6dd658a 100644
--- a/nptl/nptl-init.c
+++ b/nptl/nptl-init.c
@@ -48,14 +48,8 @@ int *__libc_multiple_threads_ptr attribute_hidden;
 size_t __static_tls_size;
 size_t __static_tls_align_m1;
 
-#ifndef __ASSUME_SET_ROBUST_LIST
 /* Negative if we do not have the system call and we can use it.  */
-int __set_robust_list_avail;
-# define set_robust_list_not_avail() \
-  __set_robust_list_avail = -1
-#else
-# define set_robust_list_not_avail() do { } while (0)
-#endif
+int __set_robust_list_avail = -1;
 
 #ifndef __ASSUME_FUTEX_CLOCK_REALTIME
 /* Nonzero if we do not have FUTEX_CLOCK_REALTIME.  */
@@ -335,9 +329,9 @@ __pthread_initialize_minimal_internal (void)
       INTERNAL_SYSCALL_DECL (err);
       int res = INTERNAL_SYSCALL (set_robust_list, err, 2, &pd->robust_head,
				   sizeof (struct robust_list_head));
-      if (INTERNAL_SYSCALL_ERROR_P (res, err))
+      if (!INTERNAL_SYSCALL_ERROR_P (res, err))
+	__set_robust_list_avail = 1;
 #endif
-      set_robust_list_not_avail ();
     }
 
 #ifdef __NR_futex
diff --git a/nptl/pthreadP.h b/nptl/pthreadP.h
index 6e0dd09..0ce65f3 100644
--- a/nptl/pthreadP.h
+++ b/nptl/pthreadP.h
@@ -199,10 +199,8 @@ hidden_proto (__pthread_keys)
 /* Number of threads running.  */
 extern unsigned int __nptl_nthreads attribute_hidden;
 
-#ifndef __ASSUME_SET_ROBUST_LIST
 /* Negative if we do not have the system call and we can use it.  */
 extern int __set_robust_list_avail attribute_hidden;
-#endif
 
 /* Thread Priority Protection.  */
 extern int __sched_fifo_min_prio attribute_hidden;
diff --git a/nptl/pthread_create.c b/nptl/pthread_create.c
index a834063..04c5e7f 100644
--- a/nptl/pthread_create.c
+++ b/nptl/pthread_create.c
@@ -271,18 +271,16 @@ START_THREAD_DEFN
   if (__glibc_unlikely (atomic_exchange_acq (&pd->setxid_futex, 0) == -2))
     futex_wake (&pd->setxid_futex, 1, FUTEX_PRIVATE);
 
-#ifdef __NR_set_robust_list
-# ifndef __ASSUME_SET_ROBUST_LIST
   if (__set_robust_list_avail >= 0)
-# endif
     {
+#ifdef __NR_set_robust_list
       INTERNAL_SYSCALL_DECL (err);
       /* This call should never fail because the initial call in init.c
	  succeeded.  */
       INTERNAL_SYSCALL (set_robust_list, err, 2, &pd->robust_head,
			 sizeof (struct robust_list_head));
-    }
 #endif
+    }
 
 #ifdef SIGCANCEL
   /* If the parent was running cancellation handlers while creating
@@ -388,7 +386,6 @@ START_THREAD_DEFN
      the breakpoint reports TD_THR_RUN state rather than TD_THR_ZOMBIE.  */
   atomic_bit_set (&pd->cancelhandling, EXITING_BIT);
 
-#ifndef __ASSUME_SET_ROBUST_LIST
   /* If this thread has any robust mutexes locked, handle them now.  */
 # ifdef __PTHREAD_MUTEX_HAVE_PREV
   void *robust = pd->robust_head.list;
@@ -419,7 +416,6 @@ START_THREAD_DEFN
	}
       while (robust != (void *) &pd->robust_head);
     }
-#endif
 
   /* Mark the memory of the stack as usable to the kernel.  We free
      everything except for the space used for the TCB itself.  */
diff --git a/nptl/pthread_mutex_init.c b/nptl/pthread_mutex_init.c
index 6e5acb6..6aef890 100644
--- a/nptl/pthread_mutex_init.c
+++ b/nptl/pthread_mutex_init.c
@@ -91,11 +91,9 @@ __pthread_mutex_init (pthread_mutex_t *mutex,
   if ((imutexattr->mutexkind & PTHREAD_MUTEXATTR_FLAG_ROBUST) != 0)
     {
-#ifndef __ASSUME_SET_ROBUST_LIST
       if ((imutexattr->mutexkind & PTHREAD_MUTEXATTR_FLAG_PSHARED) != 0
	   && __set_robust_list_avail < 0)
	 return ENOTSUP;
-#endif
 
       mutex->__data.__kind |= PTHREAD_MUTEX_ROBUST_NORMAL_NP;
     }