From patchwork Wed Nov 3 16:27:42 2021
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 47003
To: libc-alpha@sourceware.org
Subject: [PATCH 1/3] nptl: Extract <bits/atomic_wide_counter.h> from pthread_cond_common.c
Date: Wed, 03 Nov 2021 17:27:42 +0100
From: Florian Weimer
Cc: Jakub Jelinek, gcc-patches@gcc.gnu.org, Jason Merrill

And make it an installed header.  This addresses a few aliasing
violations (which do not seem to result in miscompilation due to the
use of atomics), and it also enables the use of wide counters in other
parts of the library.

The debug output in nptl/tst-cond22 has been adjusted to print the
32-bit values instead because this avoids a big-endian/little-endian
difference.
Reviewed-by: Adhemerval Zanella
---
 bits/atomic_wide_counter.h              |  35 ++++
 include/atomic_wide_counter.h           |  89 +++++++++++
 include/bits/atomic_wide_counter.h      |   1 +
 misc/Makefile                           |   3 +-
 misc/atomic_wide_counter.c              | 127 +++++++++++++++
 nptl/Makefile                           |  13 +-
 nptl/pthread_cond_common.c              | 204 ++++-------------------
 nptl/tst-cond22.c                       |  14 +-
 sysdeps/nptl/bits/thread-shared-types.h |  22 +--
 9 files changed, 310 insertions(+), 198 deletions(-)
 create mode 100644 bits/atomic_wide_counter.h
 create mode 100644 include/atomic_wide_counter.h
 create mode 100644 include/bits/atomic_wide_counter.h
 create mode 100644 misc/atomic_wide_counter.c

diff --git a/bits/atomic_wide_counter.h b/bits/atomic_wide_counter.h
new file mode 100644
index 0000000000..0687eb554e
--- /dev/null
+++ b/bits/atomic_wide_counter.h
@@ -0,0 +1,35 @@
+/* Monotonically increasing wide counters (at least 62 bits).
+   Copyright (C) 2016-2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _BITS_ATOMIC_WIDE_COUNTER_H
+#define _BITS_ATOMIC_WIDE_COUNTER_H
+
+/* Counter that is monotonically increasing (by less than 2**31 per
+   increment), with a single writer, and an arbitrary number of
+   readers.  */
+typedef union
+{
+  __extension__ unsigned long long int __value64;
+  struct
+  {
+    unsigned int __low;
+    unsigned int __high;
+  } __value32;
+} __atomic_wide_counter;
+
+#endif /* _BITS_ATOMIC_WIDE_COUNTER_H */
diff --git a/include/atomic_wide_counter.h b/include/atomic_wide_counter.h
new file mode 100644
index 0000000000..31f009d5e6
--- /dev/null
+++ b/include/atomic_wide_counter.h
@@ -0,0 +1,89 @@
+/* Monotonically increasing wide counters (at least 62 bits).
+   Copyright (C) 2016-2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#ifndef _ATOMIC_WIDE_COUNTER_H
+#define _ATOMIC_WIDE_COUNTER_H
+
+#include <atomic.h>
+#include <bits/atomic_wide_counter.h>
+
+#if __HAVE_64B_ATOMICS
+
+static inline uint64_t
+__atomic_wide_counter_load_relaxed (__atomic_wide_counter *c)
+{
+  return atomic_load_relaxed (&c->__value64);
+}
+
+static inline uint64_t
+__atomic_wide_counter_fetch_add_relaxed (__atomic_wide_counter *c,
+                                         unsigned int val)
+{
+  return atomic_fetch_add_relaxed (&c->__value64, val);
+}
+
+static inline uint64_t
+__atomic_wide_counter_fetch_add_acquire (__atomic_wide_counter *c,
+                                         unsigned int val)
+{
+  return atomic_fetch_add_acquire (&c->__value64, val);
+}
+
+static inline void
+__atomic_wide_counter_add_relaxed (__atomic_wide_counter *c,
+                                   unsigned int val)
+{
+  atomic_store_relaxed (&c->__value64,
+                        atomic_load_relaxed (&c->__value64) + val);
+}
+
+static uint64_t __attribute__ ((unused))
+__atomic_wide_counter_fetch_xor_release (__atomic_wide_counter *c,
+                                         unsigned int val)
+{
+  return atomic_fetch_xor_release (&c->__value64, val);
+}
+
+#else /* !__HAVE_64B_ATOMICS */
+
+uint64_t __atomic_wide_counter_load_relaxed (__atomic_wide_counter *c)
+  attribute_hidden;
+
+uint64_t __atomic_wide_counter_fetch_add_relaxed (__atomic_wide_counter *c,
+                                                  unsigned int op)
+  attribute_hidden;
+
+static inline uint64_t
+__atomic_wide_counter_fetch_add_acquire (__atomic_wide_counter *c,
+                                         unsigned int val)
+{
+  uint64_t r = __atomic_wide_counter_fetch_add_relaxed (c, val);
+  atomic_thread_fence_acquire ();
+  return r;
+}
+
+static inline void
+__atomic_wide_counter_add_relaxed (__atomic_wide_counter *c,
+                                   unsigned int val)
+{
+  __atomic_wide_counter_fetch_add_relaxed (c, val);
+}
+
+#endif /* !__HAVE_64B_ATOMICS */
+
+#endif /* _ATOMIC_WIDE_COUNTER_H */
diff --git a/include/bits/atomic_wide_counter.h b/include/bits/atomic_wide_counter.h
new file mode 100644
index 0000000000..8fb09a5291
--- /dev/null
+++ b/include/bits/atomic_wide_counter.h
@@ -0,0 +1 @@
+#include_next <bits/atomic_wide_counter.h>
diff --git a/misc/Makefile b/misc/Makefile
index 1083ba3bfc..3b66cb9f6a 100644
--- a/misc/Makefile
+++ b/misc/Makefile
@@ -73,7 +73,8 @@ routines := brk sbrk sstk ioctl \
 	   fgetxattr flistxattr fremovexattr fsetxattr getxattr \
 	   listxattr lgetxattr llistxattr lremovexattr lsetxattr \
 	   removexattr setxattr getauxval ifunc-impl-list makedev \
-	   allocate_once fd_to_filename single_threaded unwind-link
+	   allocate_once fd_to_filename single_threaded unwind-link \
+	   atomic_wide_counter
 
 generated += tst-error1.mtrace tst-error1-mem.out \
 	     tst-allocate_once.mtrace tst-allocate_once-mem.out
diff --git a/misc/atomic_wide_counter.c b/misc/atomic_wide_counter.c
new file mode 100644
index 0000000000..56d8981925
--- /dev/null
+++ b/misc/atomic_wide_counter.c
@@ -0,0 +1,127 @@
+/* Monotonically increasing wide counters (at least 62 bits).
+   Copyright (C) 2016-2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+#include <atomic_wide_counter.h>
+
+#if !__HAVE_64B_ATOMICS
+
+/* Values we add or xor are less than or equal to 1<<31, so we only
+   have to make overflow-and-addition atomic wrt. to concurrent load
+   operations and xor operations.  To do that, we split each counter
+   into two 32b values of which we reserve the MSB of each to
+   represent an overflow from the lower-order half to the higher-order
+   half.
+
+   In the common case, the state is (higher-order / lower-order half, and . is
+   basically concatenation of the bits):
+   0.h / 0.l = h.l
+
+   When we add a value of x that overflows (i.e., 0.l + x == 1.L), we run the
+   following steps S1-S4 (the values these represent are on the right-hand
+   side):
+   S1:  0.h / 1.L == (h+1).L
+   S2:  1.(h+1) / 1.L == (h+1).L
+   S3:  1.(h+1) / 0.L == (h+1).L
+   S4:  0.(h+1) / 0.L == (h+1).L
+   If the LSB of the higher-order half is set, readers will ignore the
+   overflow bit in the lower-order half.
+
+   To get an atomic snapshot in load operations, we exploit that the
+   higher-order half is monotonically increasing; if we load a value V from
+   it, then read the lower-order half, and then read the higher-order half
+   again and see the same value V, we know that both halves have existed in
+   the sequence of values the full counter had.  This is similar to the
+   validated reads in the time-based STMs in GCC's libitm (e.g.,
+   method_ml_wt).
+
+   One benefit of this scheme is that this makes load operations
+   obstruction-free because unlike if we would just lock the counter, readers
+   can almost always interpret a snapshot of each halves.  Readers can be
+   forced to read a new snapshot when the read is concurrent with an overflow.
+   However, overflows will happen infrequently, so load operations are
+   practically lock-free.  */
+
+uint64_t
+__atomic_wide_counter_fetch_add_relaxed (__atomic_wide_counter *c,
+                                         unsigned int op)
+{
+  /* S1.  Note that this is an atomic read-modify-write so it extends the
+     release sequence of release MO store at S3.  */
+  unsigned int l = atomic_fetch_add_relaxed (&c->__value32.__low, op);
+  unsigned int h = atomic_load_relaxed (&c->__value32.__high);
+  uint64_t result = ((uint64_t) h << 31) | l;
+  l += op;
+  if ((l >> 31) > 0)
+    {
+      /* Overflow.  Need to increment higher-order half.  Note that all
+         add operations are ordered in happens-before.  */
+      h++;
+      /* S2.  Release MO to synchronize with the loads of the higher-order half
+         in the load operation.  See __condvar_load_64_relaxed.  */
+      atomic_store_release (&c->__value32.__high,
+                            h | ((unsigned int) 1 << 31));
+      l ^= (unsigned int) 1 << 31;
+      /* S3.  See __condvar_load_64_relaxed.  */
+      atomic_store_release (&c->__value32.__low, l);
+      /* S4.  Likewise.  */
+      atomic_store_release (&c->__value32.__high, h);
+    }
+  return result;
+}
+
+uint64_t
+__atomic_wide_counter_load_relaxed (__atomic_wide_counter *c)
+{
+  unsigned int h, l, h2;
+  do
+    {
+      /* This load and the second one below to the same location read from the
+         stores in the overflow handling of the add operation or the
+         initializing stores (which is a simple special case because
+         initialization always completely happens before further use).
+         Because no two stores to the higher-order half write the same value,
+         the loop ensures that if we continue to use the snapshot, this load
+         and the second one read from the same store operation.  All candidate
+         store operations have release MO.
+         If we read from S2 in the first load, then we will see the value of
+         S1 on the next load (because we synchronize with S2), or a value
+         later in modification order.  We correctly ignore the lower-half's
+         overflow bit in this case.  If we read from S4, then we will see the
+         value of S3 in the next load (or a later value), which does not have
+         the overflow bit set anymore.  */
+      h = atomic_load_acquire (&c->__value32.__high);
+      /* This will read from the release sequence of S3 (i.e, either the S3
+         store or the read-modify-writes at S1 following S3 in modification
+         order).  Thus, the read synchronizes with S3, and the following load
+         of the higher-order half will read from the matching S2 (or a later
+         value).
+         Thus, if we read a lower-half value here that already overflowed and
+         belongs to an increased higher-order half value, we will see the
+         latter and h and h2 will not be equal.  */
+      l = atomic_load_acquire (&c->__value32.__low);
+      /* See above.  */
+      h2 = atomic_load_relaxed (&c->__value32.__high);
+    }
+  while (h != h2);
+  if (((l >> 31) > 0) && ((h >> 31) > 0))
+    l ^= (unsigned int) 1 << 31;
+  return ((uint64_t) (h & ~((unsigned int) 1 << 31)) << 31) + l;
+}
+
+#endif /* !__HAVE_64B_ATOMICS */
diff --git a/nptl/Makefile b/nptl/Makefile
index ff4d590f11..6310aecaad 100644
--- a/nptl/Makefile
+++ b/nptl/Makefile
@@ -22,8 +22,14 @@ subdir := nptl
 
 include ../Makeconfig
 
-headers := pthread.h semaphore.h bits/semaphore.h \
-	   bits/struct_mutex.h bits/struct_rwlock.h
+headers := \
+  bits/atomic_wide_counter.h \
+  bits/semaphore.h \
+  bits/struct_mutex.h \
+  bits/struct_rwlock.h \
+  pthread.h \
+  semaphore.h \
+  # headers
 
 extra-libs := libpthread
 extra-libs-others := $(extra-libs)
@@ -270,7 +276,7 @@ tests = tst-attr2 tst-attr3 tst-default-attr \
 	tst-mutexpi1 tst-mutexpi2 tst-mutexpi3 tst-mutexpi4 \
 	tst-mutexpi5 tst-mutexpi5a tst-mutexpi6 tst-mutexpi7 tst-mutexpi7a \
 	tst-mutexpi9 tst-mutexpi10 \
-	tst-cond22 tst-cond26 \
+	tst-cond26 \
 	tst-robustpi1 tst-robustpi2 tst-robustpi3 tst-robustpi4 tst-robustpi5 \
 	tst-robustpi6 tst-robustpi7 tst-robustpi9 \
 	tst-rwlock2 tst-rwlock2a tst-rwlock2b tst-rwlock3 \
@@ -319,6 +325,7 @@ tests-internal := tst-robustpi8 tst-rwlock19 tst-rwlock20 \
 	tst-barrier5 tst-signal7 tst-mutex8 tst-mutex8-static \
 	tst-mutexpi8 tst-mutexpi8-static \
 	tst-setgetname \
+	tst-cond22 \
 
 xtests = tst-setuid1 tst-setuid1-static tst-setuid2 \
 	tst-mutexpp1 tst-mutexpp6 tst-mutexpp10 tst-setgroups \
diff --git a/nptl/pthread_cond_common.c b/nptl/pthread_cond_common.c
index c35b9ef03a..fb56f93b6e 100644
--- a/nptl/pthread_cond_common.c
+++ b/nptl/pthread_cond_common.c
@@ -17,79 +17,52 @@
    <https://www.gnu.org/licenses/>.  */
 
 #include <atomic.h>
+#include <atomic_wide_counter.h>
 #include <stdint.h>
 #include <pthread.h>
 
-/* We need 3 least-significant bits on __wrefs for something else.  */
+/* We need 3 least-significant bits on __wrefs for something else.
+   This also matches __atomic_wide_counter requirements: The highest
+   value we add is __PTHREAD_COND_MAX_GROUP_SIZE << 2 to __g1_start
+   (the two extra bits are for the lock in the two LSBs of
+   __g1_start).  */
 #define __PTHREAD_COND_MAX_GROUP_SIZE ((unsigned) 1 << 29)
 
-#if __HAVE_64B_ATOMICS == 1
-
-static uint64_t __attribute__ ((unused))
+static inline uint64_t
 __condvar_load_wseq_relaxed (pthread_cond_t *cond)
 {
-  return atomic_load_relaxed (&cond->__data.__wseq);
+  return __atomic_wide_counter_load_relaxed (&cond->__data.__wseq);
 }
 
-static uint64_t __attribute__ ((unused))
+static inline uint64_t
 __condvar_fetch_add_wseq_acquire (pthread_cond_t *cond, unsigned int val)
 {
-  return atomic_fetch_add_acquire (&cond->__data.__wseq, val);
+  return __atomic_wide_counter_fetch_add_acquire (&cond->__data.__wseq, val);
 }
 
-static uint64_t __attribute__ ((unused))
-__condvar_fetch_xor_wseq_release (pthread_cond_t *cond, unsigned int val)
+static inline uint64_t
+__condvar_load_g1_start_relaxed (pthread_cond_t *cond)
 {
-  return atomic_fetch_xor_release (&cond->__data.__wseq, val);
+  return __atomic_wide_counter_load_relaxed (&cond->__data.__g1_start);
 }
 
-static uint64_t __attribute__ ((unused))
-__condvar_load_g1_start_relaxed (pthread_cond_t *cond)
+static inline void
+__condvar_add_g1_start_relaxed (pthread_cond_t *cond, unsigned int val)
 {
-  return atomic_load_relaxed (&cond->__data.__g1_start);
+  __atomic_wide_counter_add_relaxed (&cond->__data.__g1_start, val);
 }
 
-static void __attribute__ ((unused))
-__condvar_add_g1_start_relaxed (pthread_cond_t *cond, unsigned int val)
+#if __HAVE_64B_ATOMICS == 1
+
+static inline uint64_t
+__condvar_fetch_xor_wseq_release (pthread_cond_t *cond, unsigned int val)
 {
-  atomic_store_relaxed (&cond->__data.__g1_start,
-                        atomic_load_relaxed (&cond->__data.__g1_start) + val);
+  return atomic_fetch_xor_release (&cond->__data.__wseq.__value64, val);
 }
 
-#else
-
-/* We use two 64b counters: __wseq and __g1_start.  They are monotonically
-   increasing and single-writer-multiple-readers counters, so we can implement
-   load, fetch-and-add, and fetch-and-xor operations even when we just have
-   32b atomics.  Values we add or xor are less than or equal to 1<<31 (*),
-   so we only have to make overflow-and-addition atomic wrt. to concurrent
-   load operations and xor operations.  To do that, we split each counter into
-   two 32b values of which we reserve the MSB of each to represent an
-   overflow from the lower-order half to the higher-order half.
-
-   In the common case, the state is (higher-order / lower-order half, and . is
-   basically concatenation of the bits):
-   0.h / 0.l = h.l
-
-   When we add a value of x that overflows (i.e., 0.l + x == 1.L), we run the
-   following steps S1-S4 (the values these represent are on the right-hand
-   side):
-   S1:  0.h / 1.L == (h+1).L
-   S2:  1.(h+1) / 1.L == (h+1).L
-   S3:  1.(h+1) / 0.L == (h+1).L
-   S4:  0.(h+1) / 0.L == (h+1).L
-   If the LSB of the higher-order half is set, readers will ignore the
-   overflow bit in the lower-order half.
-
-   To get an atomic snapshot in load operations, we exploit that the
-   higher-order half is monotonically increasing; if we load a value V from
-   it, then read the lower-order half, and then read the higher-order half
-   again and see the same value V, we know that both halves have existed in
-   the sequence of values the full counter had.  This is similar to the
-   validated reads in the time-based STMs in GCC's libitm (e.g.,
-   method_ml_wt).
-
-   The xor operation needs to be an atomic read-modify-write.  The write
+#else /* !__HAVE_64B_ATOMICS */
+
+/* The xor operation needs to be an atomic read-modify-write.  The write
    itself is not an issue as it affects just the lower-order half but not bits
    used in the add operation.  To make the full fetch-and-xor atomic, we
    exploit that concurrently, the value can increase by at most 1<<31 (*): The
@@ -97,117 +70,18 @@ __condvar_add_g1_start_relaxed (pthread_cond_t *cond, unsigned int val)
    than __PTHREAD_COND_MAX_GROUP_SIZE waiters can enter concurrently and thus
    increment __wseq.  Therefore, if the xor operation observes a value of
    __wseq, then the value it applies the modification to later on can be
-   derived (see below).
-
-   One benefit of this scheme is that this makes load operations
-   obstruction-free because unlike if we would just lock the counter, readers
-   can almost always interpret a snapshot of each halves.  Readers can be
-   forced to read a new snapshot when the read is concurrent with an overflow.
-   However, overflows will happen infrequently, so load operations are
-   practically lock-free.
-
-   (*) The highest value we add is __PTHREAD_COND_MAX_GROUP_SIZE << 2 to
-   __g1_start (the two extra bits are for the lock in the two LSBs of
-   __g1_start).  */
-
-typedef struct
-{
-  unsigned int low;
-  unsigned int high;
-} _condvar_lohi;
-
-static uint64_t
-__condvar_fetch_add_64_relaxed (_condvar_lohi *lh, unsigned int op)
-{
-  /* S1.  Note that this is an atomic read-modify-write so it extends the
-     release sequence of release MO store at S3.  */
-  unsigned int l = atomic_fetch_add_relaxed (&lh->low, op);
-  unsigned int h = atomic_load_relaxed (&lh->high);
-  uint64_t result = ((uint64_t) h << 31) | l;
-  l += op;
-  if ((l >> 31) > 0)
-    {
-      /* Overflow.  Need to increment higher-order half.  Note that all
-         add operations are ordered in happens-before.  */
-      h++;
-      /* S2.  Release MO to synchronize with the loads of the higher-order half
-         in the load operation.  See __condvar_load_64_relaxed.  */
-      atomic_store_release (&lh->high, h | ((unsigned int) 1 << 31));
-      l ^= (unsigned int) 1 << 31;
-      /* S3.  See __condvar_load_64_relaxed.  */
-      atomic_store_release (&lh->low, l);
-      /* S4.  Likewise.  */
-      atomic_store_release (&lh->high, h);
-    }
-  return result;
-}
-
-static uint64_t
-__condvar_load_64_relaxed (_condvar_lohi *lh)
-{
-  unsigned int h, l, h2;
-  do
-    {
-      /* This load and the second one below to the same location read from the
-         stores in the overflow handling of the add operation or the
-         initializing stores (which is a simple special case because
-         initialization always completely happens before further use).
-         Because no two stores to the higher-order half write the same value,
-         the loop ensures that if we continue to use the snapshot, this load
-         and the second one read from the same store operation.  All candidate
-         store operations have release MO.
-         If we read from S2 in the first load, then we will see the value of
-         S1 on the next load (because we synchronize with S2), or a value
-         later in modification order.  We correctly ignore the lower-half's
-         overflow bit in this case.  If we read from S4, then we will see the
-         value of S3 in the next load (or a later value), which does not have
-         the overflow bit set anymore.  */
-      h = atomic_load_acquire (&lh->high);
-      /* This will read from the release sequence of S3 (i.e, either the S3
-         store or the read-modify-writes at S1 following S3 in modification
-         order).  Thus, the read synchronizes with S3, and the following load
-         of the higher-order half will read from the matching S2 (or a later
-         value).
-         Thus, if we read a lower-half value here that already overflowed and
-         belongs to an increased higher-order half value, we will see the
-         latter and h and h2 will not be equal.  */
-      l = atomic_load_acquire (&lh->low);
-      /* See above.  */
-      h2 = atomic_load_relaxed (&lh->high);
-    }
-  while (h != h2);
-  if (((l >> 31) > 0) && ((h >> 31) > 0))
-    l ^= (unsigned int) 1 << 31;
-  return ((uint64_t) (h & ~((unsigned int) 1 << 31)) << 31) + l;
-}
-
-static uint64_t __attribute__ ((unused))
-__condvar_load_wseq_relaxed (pthread_cond_t *cond)
-{
-  return __condvar_load_64_relaxed ((_condvar_lohi *) &cond->__data.__wseq32);
-}
-
-static uint64_t __attribute__ ((unused))
-__condvar_fetch_add_wseq_acquire (pthread_cond_t *cond, unsigned int val)
-{
-  uint64_t r = __condvar_fetch_add_64_relaxed
-    ((_condvar_lohi *) &cond->__data.__wseq32, val);
-  atomic_thread_fence_acquire ();
-  return r;
-}
+   derived.  */
 
 static uint64_t __attribute__ ((unused))
 __condvar_fetch_xor_wseq_release (pthread_cond_t *cond, unsigned int val)
 {
-  _condvar_lohi *lh = (_condvar_lohi *) &cond->__data.__wseq32;
   /* First, get the current value.  See __condvar_load_64_relaxed.  */
   unsigned int h, l, h2;
   do
     {
-      h = atomic_load_acquire (&lh->high);
-      l = atomic_load_acquire (&lh->low);
-      h2 = atomic_load_relaxed (&lh->high);
+      h = atomic_load_acquire (&cond->__data.__wseq.__value32.__high);
+      l = atomic_load_acquire (&cond->__data.__wseq.__value32.__low);
+      h2 = atomic_load_relaxed (&cond->__data.__wseq.__value32.__high);
    }
  while (h != h2);
  if (((l >> 31) > 0) && ((h >> 31) == 0))
@@ -219,8 +93,9 @@ __condvar_fetch_xor_wseq_release (pthread_cond_t *cond, unsigned int val)
      earlier in modification order than the following fetch-xor.
      This uses release MO to make the full operation have release semantics
      (all other operations access the lower-order half).  */
-  unsigned int l2 = atomic_fetch_xor_release (&lh->low, val)
-    & ~((unsigned int) 1 << 31);
+  unsigned int l2
+    = (atomic_fetch_xor_release (&cond->__data.__wseq.__value32.__low, val)
+       & ~((unsigned int) 1 << 31));
   if (l2 < l)
     /* The lower-order half overflowed in the meantime.  This happened exactly
       once due to the limit on concurrent waiters (see above).  */
@@ -228,22 +103,7 @@ __condvar_fetch_xor_wseq_release (pthread_cond_t *cond, unsigned int val)
   return ((uint64_t) h << 31) + l2;
 }
 
-static uint64_t __attribute__ ((unused))
-__condvar_load_g1_start_relaxed (pthread_cond_t *cond)
-{
-  return __condvar_load_64_relaxed
-    ((_condvar_lohi *) &cond->__data.__g1_start32);
-}
-
-static void __attribute__ ((unused))
-__condvar_add_g1_start_relaxed (pthread_cond_t *cond, unsigned int val)
-{
-  ignore_value (__condvar_fetch_add_64_relaxed
-    ((_condvar_lohi *) &cond->__data.__g1_start32, val));
-}
-
-#endif /* !__HAVE_64B_ATOMICS */
-
+#endif /* !__HAVE_64B_ATOMICS */
 
 /* The lock that signalers use.  See pthread_cond_wait_common for uses.
    The lock is our normal three-state lock: not acquired (0) / acquired (1) /
diff --git a/nptl/tst-cond22.c b/nptl/tst-cond22.c
index 64f19ea0a5..1336e9c79d 100644
--- a/nptl/tst-cond22.c
+++ b/nptl/tst-cond22.c
@@ -106,8 +106,11 @@ do_test (void)
       status = 1;
     }
 
-  printf ("cond = { %llu, %llu, %u/%u/%u, %u/%u/%u, %u, %u }\n",
-	  c.__data.__wseq, c.__data.__g1_start,
+  printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u/%u, %u/%u/%u, %u, %u }\n",
+	  c.__data.__wseq.__value32.__high,
+	  c.__data.__wseq.__value32.__low,
+	  c.__data.__g1_start.__value32.__high,
+	  c.__data.__g1_start.__value32.__low,
 	  c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0],
 	  c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1],
 	  c.__data.__g1_orig_size, c.__data.__wrefs);
@@ -149,8 +152,11 @@ do_test (void)
       status = 1;
     }
 
-  printf ("cond = { %llu, %llu, %u/%u/%u, %u/%u/%u, %u, %u }\n",
-	  c.__data.__wseq, c.__data.__g1_start,
+  printf ("cond = { 0x%x:%x, 0x%x:%x, %u/%u/%u, %u/%u/%u, %u, %u }\n",
+	  c.__data.__wseq.__value32.__high,
+	  c.__data.__wseq.__value32.__low,
+	  c.__data.__g1_start.__value32.__high,
+	  c.__data.__g1_start.__value32.__low,
 	  c.__data.__g_signals[0], c.__data.__g_refs[0], c.__data.__g_size[0],
 	  c.__data.__g_signals[1], c.__data.__g_refs[1], c.__data.__g_size[1],
 	  c.__data.__g1_orig_size,
	  c.__data.__wrefs);
diff --git a/sysdeps/nptl/bits/thread-shared-types.h b/sysdeps/nptl/bits/thread-shared-types.h
index 44bf1e358d..b82a79a43e 100644
--- a/sysdeps/nptl/bits/thread-shared-types.h
+++ b/sysdeps/nptl/bits/thread-shared-types.h
@@ -43,6 +43,8 @@
 #include <bits/pthreadtypes-arch.h>
 
+#include <bits/atomic_wide_counter.h>
+
 /* Common definition of pthread_mutex_t.  */
 
@@ -91,24 +93,8 @@ typedef struct __pthread_internal_slist
 
 struct __pthread_cond_s
 {
-  __extension__ union
-  {
-    __extension__ unsigned long long int __wseq;
-    struct
-    {
-      unsigned int __low;
-      unsigned int __high;
-    } __wseq32;
-  };
-  __extension__ union
-  {
-    __extension__ unsigned long long int __g1_start;
-    struct
-    {
-      unsigned int __low;
-      unsigned int __high;
-    } __g1_start32;
-  };
+  __atomic_wide_counter __wseq;
+  __atomic_wide_counter __g1_start;
   unsigned int __g_refs[2] __LOCK_ALIGNMENT;
   unsigned int __g_size[2];
   unsigned int __g1_orig_size;

From patchwork Wed Nov 3 16:27:49 2021
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 47006
To: libc-alpha@sourceware.org
Subject: [PATCH 2/3] elf: Introduce GLRO (dl_libc_freeres), called from __libc_freeres
Date: Wed, 03 Nov 2021 17:27:49 +0100
From: Florian Weimer
Cc: Jakub Jelinek, gcc-patches@gcc.gnu.org, Jason Merrill

---
 elf/Makefile               |  2 +-
 elf/dl-libc_freeres.c      | 24 ++++++++++++++++++++++++
 elf/rtld.c                 |  1 +
 malloc/set-freeres.c       |  5 +++++
 sysdeps/generic/ldsodefs.h |  7 +++++++
 5 files changed, 38 insertions(+), 1 deletion(-)
 create mode 100644 elf/dl-libc_freeres.c

Reviewed-by: Adhemerval Zanella

diff --git a/elf/Makefile b/elf/Makefile
index cb9bcfb799..1c768bdf47 100644
--- a/elf/Makefile
+++ b/elf/Makefile
@@ -68,7 +68,7 @@ elide-routines.os = $(all-dl-routines) dl-support enbl-secure dl-origin \
 rtld-routines = rtld $(all-dl-routines) dl-sysdep dl-environ dl-minimal \
   dl-error-minimal dl-conflict dl-hwcaps dl-hwcaps_split dl-hwcaps-subdirs \
   dl-usage dl-diagnostics dl-diagnostics-kernel dl-diagnostics-cpu \
-  dl-mutex
+  dl-mutex dl-libc_freeres
 all-rtld-routines = $(rtld-routines) $(sysdep-rtld-routines)
 
 CFLAGS-dl-runtime.c += -fexceptions -fasynchronous-unwind-tables
diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c
new file mode 100644
index 0000000000..68f305a6f9
--- /dev/null
+++ b/elf/dl-libc_freeres.c
@@ -0,0 +1,24 @@
+/* Deallocating malloc'ed memory from the dynamic loader.
+   Copyright (C) 2021 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include + +void +__rtld_libc_freeres (void) +{ +} diff --git a/elf/rtld.c b/elf/rtld.c index be2d5d8e74..847141e21d 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -378,6 +378,7 @@ struct rtld_global_ro _rtld_global_ro attribute_relro = ._dl_catch_error = _rtld_catch_error, ._dl_error_free = _dl_error_free, ._dl_tls_get_addr_soft = _dl_tls_get_addr_soft, + ._dl_libc_freeres = __rtld_libc_freeres, #ifdef HAVE_DL_DISCOVER_OSVERSION ._dl_discover_osversion = _dl_discover_osversion #endif diff --git a/malloc/set-freeres.c b/malloc/set-freeres.c index 5c19a2725c..856ff7831f 100644 --- a/malloc/set-freeres.c +++ b/malloc/set-freeres.c @@ -21,6 +21,7 @@ #include #include #include +#include #include "../nss/nsswitch.h" #include "../libio/libioP.h" @@ -67,6 +68,10 @@ __libc_freeres (void) call_function_static_weak (__libc_dlerror_result_free); +#ifdef SHARED + GLRO (dl_libc_freeres) (); +#endif + for (p = symbol_set_first_element (__libc_freeres_ptrs); !symbol_set_end_p (__libc_freeres_ptrs, p); ++p) free (*p); diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index 1318c36dce..c26860430c 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -712,6 +712,10 @@ struct rtld_global_ro namespace. */ void (*_dl_error_free) (void *); void *(*_dl_tls_get_addr_soft) (struct link_map *); + + /* Called from __libc_freeres to deallocate malloc'ed memory. */ + void (*_dl_libc_freeres) (void); + #ifdef HAVE_DL_DISCOVER_OSVERSION int (*_dl_discover_osversion) (void); #endif @@ -1416,6 +1420,9 @@ __rtld_mutex_init (void) } #endif /* !PTHREAD_IN_LIBC */ +/* Implementation of GLRO (dl_libc_freeres). 
*/ +void __rtld_libc_freeres (void) attribute_hidden; + void __thread_gscope_wait (void) attribute_hidden; # define THREAD_GSCOPE_WAIT() __thread_gscope_wait ()

From patchwork Wed Nov 3 16:28:02 2021
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 47008
To: libc-alpha@sourceware.org
Subject: [PATCH 3/3] elf: Add _dl_find_eh_frame function
Date: Wed, 03 Nov 2021 17:28:02 +0100
From: Florian Weimer

This function is similar to __gnu_Unwind_Find_exidx as used on arm. It can be used to speed up the libgcc unwinder. 
--- NEWS | 4 + bits/dlfcn_eh_frame.h | 33 + dlfcn/Makefile | 2 +- dlfcn/dlfcn.h | 2 + elf/Makefile | 31 +- elf/Versions | 3 + elf/dl-close.c | 4 + elf/dl-find_eh_frame.c | 864 ++++++++++++++++++ elf/dl-find_eh_frame.h | 90 ++ elf/dl-find_eh_frame_slow.h | 55 ++ elf/dl-libc_freeres.c | 2 + elf/dl-open.c | 5 + elf/rtld.c | 7 + elf/tst-dl_find_eh_frame-mod1.c | 10 + elf/tst-dl_find_eh_frame-mod2.c | 10 + elf/tst-dl_find_eh_frame-mod3.c | 10 + elf/tst-dl_find_eh_frame-mod4.c | 10 + elf/tst-dl_find_eh_frame-mod5.c | 11 + elf/tst-dl_find_eh_frame-mod6.c | 11 + elf/tst-dl_find_eh_frame-mod7.c | 10 + elf/tst-dl_find_eh_frame-mod8.c | 10 + elf/tst-dl_find_eh_frame-mod9.c | 10 + elf/tst-dl_find_eh_frame-threads.c | 237 +++++ elf/tst-dl_find_eh_frame.c | 179 ++++ include/atomic_wide_counter.h | 14 + include/bits/dlfcn_eh_frame.h | 1 + include/link.h | 3 + manual/Makefile | 2 +- manual/dynlink.texi | 69 ++ manual/libdl.texi | 10 - manual/probes.texi | 2 +- manual/threads.texi | 2 +- sysdeps/i386/bits/dlfcn_eh_frame.h | 34 + sysdeps/mach/hurd/i386/ld.abilist | 1 + sysdeps/nios2/bits/dlfcn_eh_frame.h | 34 + sysdeps/unix/sysv/linux/aarch64/ld.abilist | 1 + sysdeps/unix/sysv/linux/alpha/ld.abilist | 1 + sysdeps/unix/sysv/linux/arc/ld.abilist | 1 + sysdeps/unix/sysv/linux/arm/be/ld.abilist | 1 + sysdeps/unix/sysv/linux/arm/le/ld.abilist | 1 + sysdeps/unix/sysv/linux/csky/ld.abilist | 1 + sysdeps/unix/sysv/linux/hppa/ld.abilist | 1 + sysdeps/unix/sysv/linux/i386/ld.abilist | 1 + sysdeps/unix/sysv/linux/ia64/ld.abilist | 1 + .../unix/sysv/linux/m68k/coldfire/ld.abilist | 1 + .../unix/sysv/linux/m68k/m680x0/ld.abilist | 1 + sysdeps/unix/sysv/linux/microblaze/ld.abilist | 1 + .../unix/sysv/linux/mips/mips32/ld.abilist | 1 + .../sysv/linux/mips/mips64/n32/ld.abilist | 1 + .../sysv/linux/mips/mips64/n64/ld.abilist | 1 + sysdeps/unix/sysv/linux/nios2/ld.abilist | 1 + .../sysv/linux/powerpc/powerpc32/ld.abilist | 1 + .../linux/powerpc/powerpc64/be/ld.abilist | 1 + 
.../linux/powerpc/powerpc64/le/ld.abilist | 1 + sysdeps/unix/sysv/linux/riscv/rv32/ld.abilist | 1 + sysdeps/unix/sysv/linux/riscv/rv64/ld.abilist | 1 + .../unix/sysv/linux/s390/s390-32/ld.abilist | 1 + .../unix/sysv/linux/s390/s390-64/ld.abilist | 1 + sysdeps/unix/sysv/linux/sh/be/ld.abilist | 1 + sysdeps/unix/sysv/linux/sh/le/ld.abilist | 1 + .../unix/sysv/linux/sparc/sparc32/ld.abilist | 1 + .../unix/sysv/linux/sparc/sparc64/ld.abilist | 1 + sysdeps/unix/sysv/linux/x86_64/64/ld.abilist | 1 + sysdeps/unix/sysv/linux/x86_64/x32/ld.abilist | 1 + 64 files changed, 1795 insertions(+), 16 deletions(-) create mode 100644 bits/dlfcn_eh_frame.h create mode 100644 elf/dl-find_eh_frame.c create mode 100644 elf/dl-find_eh_frame.h create mode 100644 elf/dl-find_eh_frame_slow.h create mode 100644 elf/tst-dl_find_eh_frame-mod1.c create mode 100644 elf/tst-dl_find_eh_frame-mod2.c create mode 100644 elf/tst-dl_find_eh_frame-mod3.c create mode 100644 elf/tst-dl_find_eh_frame-mod4.c create mode 100644 elf/tst-dl_find_eh_frame-mod5.c create mode 100644 elf/tst-dl_find_eh_frame-mod6.c create mode 100644 elf/tst-dl_find_eh_frame-mod7.c create mode 100644 elf/tst-dl_find_eh_frame-mod8.c create mode 100644 elf/tst-dl_find_eh_frame-mod9.c create mode 100644 elf/tst-dl_find_eh_frame-threads.c create mode 100644 elf/tst-dl_find_eh_frame.c create mode 100644 include/bits/dlfcn_eh_frame.h create mode 100644 manual/dynlink.texi delete mode 100644 manual/libdl.texi create mode 100644 sysdeps/i386/bits/dlfcn_eh_frame.h create mode 100644 sysdeps/nios2/bits/dlfcn_eh_frame.h diff --git a/NEWS b/NEWS index 82b7016aef..68c9c21458 100644 --- a/NEWS +++ b/NEWS @@ -64,6 +64,10 @@ Major new features: to be used by compilers for optimizing usage of 'memcmp' when its return value is only used for its boolean status. +* The function _dl_find_eh_frame has been added. In-process unwinders + can use it to efficiently locate unwinding information for a code + address. 
+ Deprecated and removed features, and other changes affecting compatibility: * The r_version update in the debugger interface makes the glibc binary diff --git a/bits/dlfcn_eh_frame.h b/bits/dlfcn_eh_frame.h new file mode 100644 index 0000000000..fe4c6d6ad7 --- /dev/null +++ b/bits/dlfcn_eh_frame.h @@ -0,0 +1,33 @@ +/* System dependent definitions for finding unwind information using ld.so. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _DLFCN_H +# error "Never use directly; include instead." +#endif + +/* This implementation does not use a DBASE pointer argument in + _dl_find_eh_frame. */ +#define DL_FIND_EH_FRAME_DBASE 0 + +__BEGIN_DECLS +/* If PC points into an object that has a PT_GNU_EH_FRAME segment, + return the pointer to the start of that segment in memory. If no + corresponding object exists or the object has no such segment, + returns NULL. 
*/ +void *_dl_find_eh_frame (void *__pc) __THROW; +__END_DECLS diff --git a/dlfcn/Makefile b/dlfcn/Makefile index 6bbfbb8344..fd6e2a30c5 100644 --- a/dlfcn/Makefile +++ b/dlfcn/Makefile @@ -19,7 +19,7 @@ subdir := dlfcn include ../Makeconfig -headers := bits/dlfcn.h dlfcn.h +headers := bits/dlfcn.h bits/dlfcn_eh_frame.h dlfcn.h extra-libs := libdl libdl-routines := libdl-compat routines = \ diff --git a/dlfcn/dlfcn.h b/dlfcn/dlfcn.h index 4a3b870a48..d5355657c7 100644 --- a/dlfcn/dlfcn.h +++ b/dlfcn/dlfcn.h @@ -28,6 +28,8 @@ #ifdef __USE_GNU +#include + /* If the first argument of `dlsym' or `dlvsym' is set to RTLD_NEXT the run-time address of the symbol called NAME in the next shared object is returned. The "next" relation is defined by the order diff --git a/elf/Makefile b/elf/Makefile index 1c768bdf47..49e35dd4ce 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -36,7 +36,7 @@ dl-routines = $(addprefix dl-,load lookup object reloc deps \ exception sort-maps lookup-direct \ call-libc-early-init write \ thread_gscope_wait tls_init_tp \ - debug-symbols) + debug-symbols find_eh_frame) ifeq (yes,$(use-ldconfig)) dl-routines += dl-cache endif @@ -230,7 +230,8 @@ tests-internal += loadtest unload unload2 circleload1 \ neededtest neededtest2 neededtest3 neededtest4 \ tst-tls3 tst-tls6 tst-tls7 tst-tls8 tst-dlmopen2 \ tst-ptrguard1 tst-stackguard1 \ - tst-create_format1 tst-tls-surplus tst-dl-hwcaps_split + tst-create_format1 tst-tls-surplus tst-dl-hwcaps_split \ + tst-dl_find_eh_frame tst-dl_find_eh_frame-threads tests-container += tst-pldd tst-dlopen-tlsmodid-container \ tst-dlopen-self-container tst-preload-pthread-libc test-srcs = tst-pathopt @@ -365,6 +366,11 @@ modules-names = testobj1 testobj2 testobj3 testobj4 testobj5 testobj6 \ tst-tls20mod-bad tst-tls21mod tst-dlmopen-dlerror-mod \ tst-auxvalmod \ tst-dlmopen-gethostbyname-mod tst-ro-dynamic-mod \ + tst-dl_find_eh_frame-mod1 tst-dl_find_eh_frame-mod2 \ + tst-dl_find_eh_frame-mod3 tst-dl_find_eh_frame-mod4 \ + 
tst-dl_find_eh_frame-mod5 tst-dl_find_eh_frame-mod6 \ + tst-dl_find_eh_frame-mod7 tst-dl_find_eh_frame-mod8 \ + tst-dl_find_eh_frame-mod9 \ # Most modules build with _ISOMAC defined, but those filtered out # depend on internal headers. @@ -1957,3 +1963,24 @@ $(objpfx)tst-ro-dynamic-mod.so: $(objpfx)tst-ro-dynamic-mod.os \ $(LINK.o) -nostdlib -nostartfiles -shared -o $@ \ -Wl,--script=tst-ro-dynamic-mod.map \ $(objpfx)tst-ro-dynamic-mod.os + +$(objpfx)tst-dl_find_eh_frame.out: \ + $(objpfx)tst-dl_find_eh_frame-mod1.so $(objpfx)tst-dl_find_eh_frame-mod2.so +CFLAGS-tst-dl_find_eh_frame.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod1.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod2.c += -funwind-tables +LDFLAGS-tst-dl_find_eh_frame-mod2.so += -Wl,--enable-new-dtags,-z,nodelete +$(objpfx)tst-dl_find_eh_frame-threads: $(shared-thread-library) +$(objpfx)tst-dl_find_eh_frame-threads.out: \ + $(objpfx)tst-dl_find_eh_frame-mod1.so $(objpfx)tst-dl_find_eh_frame-mod2.so \ + $(objpfx)tst-dl_find_eh_frame-mod3.so $(objpfx)tst-dl_find_eh_frame-mod4.so \ + $(objpfx)tst-dl_find_eh_frame-mod5.so $(objpfx)tst-dl_find_eh_frame-mod6.so \ + $(objpfx)tst-dl_find_eh_frame-mod7.so $(objpfx)tst-dl_find_eh_frame-mod8.so \ + $(objpfx)tst-dl_find_eh_frame-mod9.so +CFLAGS-tst-dl_find_eh_frame-mod3.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod4.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod5.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod6.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod7.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod8.c += -funwind-tables +CFLAGS-tst-dl_find_eh_frame-mod9.c += -funwind-tables diff --git a/elf/Versions b/elf/Versions index 775aab62af..770a082886 100644 --- a/elf/Versions +++ b/elf/Versions @@ -48,6 +48,9 @@ ld { # stack canary __stack_chk_guard; } + GLIBC_2.35 { + _dl_find_eh_frame; + } GLIBC_PRIVATE { # Those are in the dynamic linker, but used by libc.so. 
__libc_enable_secure; diff --git a/elf/dl-close.c b/elf/dl-close.c index 4f5cfcc1c3..1249e964ee 100644 --- a/elf/dl-close.c +++ b/elf/dl-close.c @@ -32,6 +32,7 @@ #include #include #include +#include #include @@ -718,6 +719,9 @@ _dl_close_worker (struct link_map *map, bool force) if (imap->l_next != NULL) imap->l_next->l_prev = imap->l_prev; + /* Update the data used by _dl_find_eh_frame. */ + _dl_find_eh_frame_dlclose (imap); + free (imap->l_versions); if (imap->l_origin != (char *) -1) free ((char *) imap->l_origin); diff --git a/elf/dl-find_eh_frame.c b/elf/dl-find_eh_frame.c new file mode 100644 index 0000000000..c7313c122d --- /dev/null +++ b/elf/dl-find_eh_frame.c @@ -0,0 +1,864 @@ +/* Locating DWARF unwind information using the dynamic loader. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include + +#include + +/* Data for the main executable. There is usually a large gap between + the main executable and initially loaded shared objects. Record + the main executable separately, to increase the chance that the + range for the non-closeable mappings below covers only the shared + objects (and not also the gap between main executable and shared + objects). 
*/ +static uintptr_t _dl_eh_main_map_start attribute_relro; +static struct dl_eh_frame_info _dl_eh_main_info attribute_relro; + +/* Data for initially loaded shared objects that cannot be unloaded. + The mapping base addresses are stored in address order in the + _dl_eh_nodelete_mappings_bases array (containing + _dl_eh_nodelete_mappings_size elements). The EH data for a base + address is stored in the parallel _dl_eh_nodelete_mappings_infos. + These arrays are not modified after initialization. */ +static uintptr_t _dl_eh_nodelete_mappings_end attribute_relro; +static size_t _dl_eh_nodelete_mappings_size attribute_relro; +static uintptr_t *_dl_eh_nodelete_mappings_bases attribute_relro; +static struct dl_eh_frame_info *_dl_eh_nodelete_mappings_infos + attribute_relro; + +/* Mappings created by dlopen can go away with dlclose, so a dynamic + data structure with some synchronization is needed. + Individual segments are similar to the _dl_eh_nodelete_mappings + above (two sorted arrays of the same length for bases/infos). The + previous segment contains lower addresses and is at most half as + long. Checking the address of the base address of the first + element during a lookup can therefore approximate a binary search + over all segments, even though the data is not stored in one + contiguous array. + + During updates, the segments are overwritten in place, and a + software transactional memory construct (involving the + _dl_eh_loaded_mappings_version variable) is used to detect + concurrent modification and retry as necessary. The memory + allocations are never deallocated, but slots used for objects that + have been dlclose'd can be reused by dlopen. The memory can live + in the regular C malloc heap. + + The segments are populated from the start of the list, with the + mappings with the highest address. Only if this segment is full, + previous segments are used for mappings at lower addresses. 
The + remaining segments are populated as needed, but after resizing, + some of the initial segments (at the end of the linked list) can be + empty (with size 0). + + Adding new elements to this data structure is another source of + quadratic behavior for dlopen. If the other causes of quadratic + behavior are eliminated, a more complicated data structure will be + needed. */ +struct dl_eh_mappings_segment +{ + /* The previous segment has lower base addresses. */ + struct dl_eh_mappings_segment *previous; + + /* Used by __libc_freeres to deallocate malloc'ed memory. */ + void *to_free; + + /* Count of array elements in use and allocated. */ + size_t size; + size_t allocated; + uintptr_t bases[]; /* infos array follows. */ +}; + +/* To achieve async-signal-safety, two copies of the data structure + are used, so that a signal handler can still use this data even if + dlopen or dlclose modify the other copy. The MSB in + _dl_eh_loaded_mappings_version determines which array element is the + currently active region. */ +static struct dl_eh_mappings_segment *_dl_eh_loaded_mappings[2]; + +/* Returns the co-allocated struct dl_eh_frame_info array inside + *SEG. */ +static inline struct dl_eh_frame_info * +_dl_eh_mappings_segment_infos (struct dl_eh_mappings_segment *seg) +{ + return (struct dl_eh_frame_info *) &seg->bases[seg->allocated]; +} + +/* Returns the number of actually used elements in all segments + starting at SEG. */ +static inline size_t +_dl_eh_mappings_segment_count_used (struct dl_eh_mappings_segment *seg) +{ + size_t count = 0; + for (; seg != NULL && seg->size > 0; seg = seg->previous) + { + struct dl_eh_frame_info *infos = _dl_eh_mappings_segment_infos (seg); + for (size_t i = 0; i < seg->size; ++i) + /* Exclude elements which have been dlclose'd. */ + count += infos[i].size > 0; + } + return count; +} + +/* Compute the total number of available allocated segments linked + from SEG. 
*/ +static inline size_t +_dl_eh_mappings_segment_count_allocated (struct dl_eh_mappings_segment *seg) +{ + size_t count = 0; + for (; seg != NULL; seg = seg->previous) + count += seg->allocated; + return count; +} + +/* This is essentially an arbitrary value. dlopen allocates plenty of + memory anyway, so over-allocating a bit does not hurt. Not having + many small-ish segments helps to avoid many small binary searches. + Not using a power of 2 means that we do not waste an extra page + just for the malloc header if a mapped allocation is used in the + glibc allocator. */ +enum { dl_eh_mappings_initial_segment_size = 63 }; + +/* Allocate an empty segment. This is used for the first ever + allocation. */ +static struct dl_eh_mappings_segment * +_dl_eh_mappings_segment_allocate_unpadded (size_t size) +{ + if (size < dl_eh_mappings_initial_segment_size) + size = dl_eh_mappings_initial_segment_size; + /* No overflow checks here because the size is a mapping count, and + struct link_map is larger than what we allocate here. */ + size_t to_allocate = (sizeof (struct dl_eh_mappings_segment) + + size * (sizeof (uintptr_t) + + sizeof (struct dl_eh_frame_info))); + struct dl_eh_mappings_segment *result = malloc (to_allocate); + if (result != NULL) + { + result->previous = NULL; + result->to_free = NULL; /* Minimal malloc memory cannot be freed. */ + result->size = 0; + result->allocated = size; + } + return result; +} + +/* Allocate an empty segment that is at least SIZE large. PREVIOUS + points to the chain of previously allocated segments and can be + NULL. */ +static struct dl_eh_mappings_segment * +_dl_eh_mappings_segment_allocate (size_t size, + struct dl_eh_mappings_segment * previous) +{ + /* Exponential sizing policies, so that lookup approximates a binary + search. 
*/ + { + size_t minimum_growth; + if (previous == NULL) + minimum_growth = dl_eh_mappings_initial_segment_size; + else + minimum_growth = 2* previous->allocated; + if (size < minimum_growth) + size = minimum_growth; + } + enum { cache_line_size_estimate = 128 }; + /* No overflow checks here because the size is a mapping count, and + struct link_map is larger than what we allocate here. */ + size_t to_allocate = (sizeof (struct dl_eh_mappings_segment) + + size * (sizeof (uintptr_t) + + sizeof (struct dl_eh_frame_info)) + + 2 * cache_line_size_estimate); + char *ptr = malloc (to_allocate); + if (ptr == NULL) + return NULL; + char *original_ptr = ptr; + /* Start and end at a (conservative) 128-byte cache line boundary. + Do not use memalign for compatibility with partially interposing + malloc implementations. */ + char *end = PTR_ALIGN_DOWN (ptr + to_allocate, cache_line_size_estimate); + ptr = PTR_ALIGN_UP (ptr, cache_line_size_estimate); + struct dl_eh_mappings_segment *result + = (struct dl_eh_mappings_segment *) ptr; + result->previous = previous; + result->to_free = original_ptr; + result->size = 0; + /* We may have obtained slightly more space if malloc happened + to provide an over-aligned pointer. */ + result->allocated = (((uintptr_t) (end - ptr) + - sizeof (struct dl_eh_mappings_segment)) + / (sizeof (uintptr_t) + + sizeof (struct dl_eh_frame_info))); + assert (result->allocated >= size); + return result; +} + +/* Monotonic counter for software transactional memory. The lowest + bit indicates which element of the _dl_eh_loaded_mappings contains + up-to-date data. */ +static __atomic_wide_counter _dl_eh_loaded_mappings_version; + +/* TM version at the start of the read operation. */ +static inline uint64_t +_dl_eh_read_start_version (void) +{ + /* Acquire MO load synchronizes with the fences at the beginning and + end of the TM update region. 
*/ + return __atomic_wide_counter_load_acquire (&_dl_eh_loaded_mappings_version); +} + +/* Optimized variant of _dl_eh_read_start_version which can be called when + the loader is write-locked. */ +static inline uint64_t +_dl_eh_read_version_locked (void) +{ + return __atomic_wide_counter_load_relaxed (&_dl_eh_loaded_mappings_version); +} + +/* Update the version to reflect that an update is happening. This + does not change the bit that controls the active segment chain. + Returns the index of the currently active segment chain. */ +static inline unsigned int +_dl_eh_mappings_begin_update (void) +{ + unsigned int v + = __atomic_wide_counter_fetch_add_relaxed (&_dl_eh_loaded_mappings_version, + 2); + /* Subsequent stores to the TM data must not be reordered before the + store above with the version update. */ + atomic_thread_fence_release (); + return v & 1; +} + +/* Installs the just-updated version as the active version. */ +static inline void +_dl_eh_mappings_end_update (void) +{ + /* The previous writes to the TM data must not be reordered after + the version update below. */ + atomic_thread_fence_release (); + __atomic_wide_counter_fetch_add_relaxed (&_dl_eh_loaded_mappings_version, + 1); +} +/* Completes an in-place update without switching versions. */ +static inline void +_dl_eh_mappings_end_update_no_switch (void) +{ + /* The previous writes to the TM data must not be reordered after + the version update below. */ + atomic_thread_fence_release (); + __atomic_wide_counter_fetch_add_relaxed (&_dl_eh_loaded_mappings_version, + 2); +} + +/* Return true if the read was successful, given the start + version. */ +static inline bool +_dl_eh_read_success (uint64_t start_version) +{ + return _dl_eh_read_start_version () == start_version; +} + +/* Returns the active segment identified by the specified start + version. 
*/ +static struct dl_eh_mappings_segment * +_dl_eh_mappings_active_segment (uint64_t start_version) +{ + return _dl_eh_loaded_mappings[start_version & 1]; +} + +/* Searches PC among the sorted array [FIRST, FIRST + SIZE). Returns + the index of the first element that is not less than PC, or SIZE if + there is no such element. */ +static inline size_t +_dl_eh_find_lower_bound (uintptr_t pc, const uintptr_t *first1, + size_t size) +{ + const uintptr_t *first = first1; + while (size > 0) + { + size_t half = size >> 1; + const uintptr_t *middle = first + half; + if (*middle < pc) + { + first = middle + 1; + size -= half + 1; + } + else + size = half; + } + return first - first1; +} + +void * +_dl_find_eh_frame (void *pc1 +#if DL_FIND_EH_FRAME_DBASE + , void **dbase +#endif + ) +{ + uintptr_t pc = (uintptr_t) pc1; + + if (_dl_eh_main_info.size == 0) + { + /* Not initialized. No locking is needed here because this can + only be called from audit modules, which cannot create + threads. */ +#if DL_FIND_EH_FRAME_DBASE + return _dl_find_eh_frame_slow (pc1, dbase); +#else + return _dl_find_eh_frame_slow (pc1); +#endif + } + + /* Main executable. */ + if (pc >= _dl_eh_main_map_start + && (pc - _dl_eh_main_map_start) < _dl_eh_main_info.size) + { +#if DL_FIND_EH_FRAME_DBASE + *dbase = _dl_eh_main_info.dbase; +#endif + return _dl_eh_main_info.eh_frame; + } + + /* Other initially loaded objects. */ + if (pc >= *_dl_eh_nodelete_mappings_bases + && pc < _dl_eh_nodelete_mappings_end) + { + size_t idx = _dl_eh_find_lower_bound (pc, + _dl_eh_nodelete_mappings_bases, + _dl_eh_nodelete_mappings_size); + const struct dl_eh_frame_info *info + = _dl_eh_nodelete_mappings_infos + idx; + bool match; + if (idx < _dl_eh_nodelete_mappings_size + && pc == _dl_eh_nodelete_mappings_bases[idx]) + match = true; + else + { + /* PC might be in the previous mapping. 
*/ + --idx; + --info; + match = pc - _dl_eh_nodelete_mappings_bases[idx] < info->size; + } + if (match) + { +#if DL_FIND_EH_FRAME_DBASE + *dbase = info->dbase; +#endif + return info->eh_frame; + } + /* Fall through to the full search. The kernel may have mapped + the initial mappings with gaps that are later filled by + dlopen with other mappings. */ + } + + /* Handle audit modules, dlopen, dlopen objects. This uses software + transactional memory, with a retry loop in case the version + changes during execution. */ + while (true) + { + retry: + ; + uint64_t start_version = _dl_eh_read_start_version (); + + /* The read through seg->previous assumes that the CPU + recognizes the load dependency, so that no invalid size + values are read. Furthermore, the code assumes that no + out-of-thin-air value for seg->size is observed. Together, + this ensures that the observed seg->size value is always less + than seg->allocated, so that _dl_eh_mappings_index does not + read out-of-bounds. (This avoids intermediate TM version + verification. A concurrent version update will lead to + invalid lookup results, but not to out-of-bounds memory + access.) + + Either seg == NULL or seg->size == 0 terminates the segment + list. _dl_eh_frame_frame_update does not bother to clear the + size on earlier unused segments. */ + for (struct dl_eh_mappings_segment *seg + = _dl_eh_mappings_active_segment (start_version); + seg != NULL && seg->size > 0; seg = seg->previous) + if (pc >= seg->bases[0]) + { + /* PC may lie within this segment. If it is less than the + segment start address, it can only lie in a previous + segment, due to the base address sorting. */ + size_t idx = _dl_eh_find_lower_bound (pc, seg->bases, seg->size); + const struct dl_eh_frame_info *info + = _dl_eh_mappings_segment_infos (seg) + idx; + bool match; + if (idx < seg->size && pc == seg->bases[idx]) + /* Check for dlclose. */ + match = info->size > 0; + else + { + /* The match, if any, must be in the previous mapping. 
*/ + --idx; + --info; + match = pc - seg->bases[idx] < info->size; + } + + if (match) + { + /* Found the right mapping. Copy out the data prior to + checking if the read transaction was successful. */ + void *eh_frame_copy = info->eh_frame; +#if DL_FIND_EH_FRAME_DBASE + void *dbase_copy = info->dbase; +#endif + if (_dl_eh_read_success (start_version)) + { +#if DL_FIND_EH_FRAME_DBASE + *dbase = dbase_copy; +#endif + return eh_frame_copy; + } + else + /* Read transaction failure. */ + goto retry; + } + else + { + /* PC is not covered by this mapping. */ + if (_dl_eh_read_success (start_version)) + return NULL; + else + /* Read transaction failure. */ + goto retry; + } + } /* if: PC might lie within the current seg. */ + + /* PC is not covered by any segment. */ + if (_dl_eh_read_success (start_version)) + return NULL; + } /* Transaction retry loop. */ +} + +/* _dl_eh_process_initial is called twice. First to compute the array + sizes from the initially loaded mappings. Second to fill in the + bases and infos arrays with the (still unsorted) data. Returns the + number of loaded (non-nodelete) mappings. */ +static size_t +_dl_eh_process_initial (void) +{ + struct link_map *main_map = GL(dl_ns)[LM_ID_BASE]._ns_loaded; + + size_t nodelete = 0; + if (!main_map->l_contiguous) + { + struct dl_eh_frame_info info; + _dl_get_eh_frame (main_map, &info); + + /* PT_LOAD segments for a non-contiguous main map are added to the + non-closeable mappings. */ + for (const ElfW(Phdr) *ph = main_map->l_phdr, + *ph_end = main_map->l_phdr + main_map->l_phnum; + ph < ph_end; ++ph) + if (ph->p_type == PT_LOAD) + { + if (_dl_eh_nodelete_mappings_bases != NULL) + { + /* Second pass only. 
*/ + _dl_eh_nodelete_mappings_bases[nodelete] + = ph->p_vaddr + main_map->l_addr; + _dl_eh_nodelete_mappings_infos[nodelete].size = ph->p_memsz; + _dl_eh_nodelete_mappings_infos[nodelete].eh_frame + = info.eh_frame; +#if DL_FIND_EH_FRAME_DBASE + _dl_eh_nodelete_mappings_infos[nodelete].dbase = info.dbase; +#endif + } + ++nodelete; + } + } + + size_t loaded = 0; + for (Lmid_t ns = 0; ns < GL(dl_nns); ++ns) + for (struct link_map *l = GL(dl_ns)[ns]._ns_loaded; l != NULL; + l = l->l_next) + /* Skip the main map processed above, and proxy maps. */ + if (l != main_map && l == l->l_real) + { + /* lt_library link maps are implicitly NODELETE. */ + if (l->l_type == lt_library || l->l_nodelete_active) + { + if (_dl_eh_nodelete_mappings_bases != NULL) + { + /* Second pass only. */ + _dl_eh_nodelete_mappings_bases[nodelete] = l->l_map_start; + _dl_get_eh_frame + (l, &_dl_eh_nodelete_mappings_infos[nodelete]); + } + ++nodelete; + } + else if (l->l_type == lt_loaded) + { + if (_dl_eh_loaded_mappings[0] != NULL) + { + /* Second pass only. */ + _dl_eh_loaded_mappings[0]->bases[loaded] = l->l_map_start; + _dl_get_eh_frame (l, (_dl_eh_mappings_segment_infos + (_dl_eh_loaded_mappings[0]) + + loaded)); + } + ++loaded; + } + } + + _dl_eh_nodelete_mappings_size = nodelete; + return loaded; +} + +/* Selection sort based on mapping base address. The BASES and INFOS + arrays are updated at the same time. */ +void +_dl_eh_sort_mappings (uintptr_t *bases, struct dl_eh_frame_info *infos, + size_t size) +{ + if (size < 2) + return; + + for (size_t i = 0; i < size - 1; ++i) + { + /* Find minimum. */ + size_t min_idx = i; + size_t min_val = bases[i]; + for (size_t j = i + 1; j < size; ++j) + if (bases[j] < min_val) + { + min_idx = j; + min_val = bases[j]; + } + + /* Swap into place. 
*/ + bases[min_idx] = bases[i]; + bases[i] = min_val; + struct dl_eh_frame_info tmp = infos[min_idx]; + infos[min_idx] = infos[i]; + infos[i] = tmp; + } +} + +void +_dl_find_eh_frame_init (void) +{ + /* Cover the main mapping. */ + { + struct link_map *main_map = GL(dl_ns)[LM_ID_BASE]._ns_loaded; + + if (main_map->l_contiguous) + { + _dl_eh_main_map_start = main_map->l_map_start; + _dl_get_eh_frame (main_map, &_dl_eh_main_info); + } + else + { + /* Non-contiguous main maps are handled in + _dl_eh_process_initial. Mark as initialized, but not + covering any valid PC. */ + _dl_eh_main_map_start = -1; + _dl_eh_main_info.size = 1; + } + } + + /* Allocate the data structures. */ + size_t loaded_size = _dl_eh_process_initial (); + _dl_eh_nodelete_mappings_bases + = malloc (_dl_eh_nodelete_mappings_size + * sizeof (*_dl_eh_nodelete_mappings_bases)); + _dl_eh_nodelete_mappings_infos + = malloc (_dl_eh_nodelete_mappings_size + * sizeof (*_dl_eh_nodelete_mappings_infos)); + if (loaded_size > 0) + _dl_eh_loaded_mappings[0] = (_dl_eh_mappings_segment_allocate_unpadded + (loaded_size)); + if (_dl_eh_nodelete_mappings_bases == NULL + || _dl_eh_nodelete_mappings_infos == NULL + || (loaded_size > 0 && _dl_eh_loaded_mappings[0] == NULL)) + _dl_fatal_printf ("\ +Fatal glibc error: cannot allocate memory for DWARF EH frame data\n"); + /* Fill in the data with the second call. */ + _dl_eh_nodelete_mappings_size = 0; + _dl_eh_process_initial (); + + /* Sort both arrays. 
*/ + if (_dl_eh_nodelete_mappings_size > 0) + { + _dl_eh_sort_mappings (_dl_eh_nodelete_mappings_bases, + _dl_eh_nodelete_mappings_infos, + _dl_eh_nodelete_mappings_size); + size_t last_idx = _dl_eh_nodelete_mappings_size - 1; + _dl_eh_nodelete_mappings_end + = (_dl_eh_nodelete_mappings_bases[last_idx] + + _dl_eh_nodelete_mappings_infos[last_idx].size); + } + if (loaded_size > 0) + _dl_eh_sort_mappings + (_dl_eh_loaded_mappings[0]->bases, + _dl_eh_mappings_segment_infos (_dl_eh_loaded_mappings[0]), + _dl_eh_loaded_mappings[0]->size); +} + +static void +_dl_eh_frame_link_map_sort (struct link_map **loaded, size_t size) +{ + /* Selection sort based on map_start. */ + if (size < 2) + return; + for (size_t i = 0; i < size - 1; ++i) + { + /* Find minimum. */ + size_t min_idx = i; + ElfW(Addr) min_val = loaded[i]->l_map_start; + for (size_t j = i + 1; j < size; ++j) + if (loaded[j]->l_map_start < min_val) + { + min_idx = j; + min_val = loaded[j]->l_map_start; + } + + /* Swap into place. */ + struct link_map *tmp = loaded[min_idx]; + loaded[min_idx] = loaded[i]; + loaded[i] = tmp; + } +} + +/* Initializes the segment for writing. Returns the target write + index (plus 1) in this segment. The index is chosen so that a + partially filled segment still has data at index 0. */ +static inline size_t +_dl_eh_frame_update_init_seg (struct dl_eh_mappings_segment *seg, + size_t remaining_to_add) +{ + if (remaining_to_add < seg->allocated) + /* Partially filled segment. */ + seg->size = remaining_to_add; + else + seg->size = seg->allocated; + return seg->size; +} + +/* Invoked from _dl_find_eh_frame_update after sorting. 
*/ +static bool +_dl_find_eh_frame_update_1 (struct link_map **loaded, size_t count) +{ + int active_idx = _dl_eh_mappings_begin_update (); + + struct dl_eh_mappings_segment *current_seg + = _dl_eh_loaded_mappings[active_idx]; + size_t current_used = _dl_eh_mappings_segment_count_used (current_seg); + + struct dl_eh_mappings_segment *target_seg + = _dl_eh_loaded_mappings[!active_idx]; + size_t remaining_to_add = current_used + count; + + /* Ensure that the new segment chain has enough space. */ + { + size_t new_allocated + = _dl_eh_mappings_segment_count_allocated (target_seg); + if (new_allocated < remaining_to_add) + { + size_t more = remaining_to_add - new_allocated; + target_seg = _dl_eh_mappings_segment_allocate (more, target_seg); + if (target_seg == NULL) + /* Out of memory. */ + return false; + /* The barrier ensures that a concurrent TM read or fork does + not see a partially initialized segment. */ + atomic_store_release (&_dl_eh_loaded_mappings[!active_idx], target_seg); + } + } + size_t target_seg_index1 = _dl_eh_frame_update_init_seg (target_seg, + remaining_to_add); + + /* Merge the current_seg segment list with the loaded array into the + target_set. Merging occurs backwards, in decreasing l_map_start + order. */ + size_t loaded_index1 = count; + size_t current_seg_index1; + if (current_seg == NULL) + current_seg_index1 = 0; + else + current_seg_index1 = current_seg->size; + while (true) + { + if (current_seg_index1 == 0) + { + /* Switch to the previous segment. */ + if (current_seg != NULL) + current_seg = current_seg->previous; + if (current_seg != NULL) + { + current_seg_index1 = current_seg->size; + if (current_seg_index1 == 0) + /* No more data in previous segments. */ + current_seg = NULL; + } + } + + if (current_seg != NULL + && (_dl_eh_mappings_segment_infos (current_seg) + [current_seg_index1 - 1]).size == 0) + { + /* This mapping has been dlclose'd. Do not copy it. 
*/ + --current_seg_index1; + continue; + } + + if (loaded_index1 == 0 && current_seg == NULL) + /* No more data in either source. */ + break; + + /* Make room for another mapping. */ + assert (remaining_to_add > 0); + if (target_seg_index1 == 0) + { + /* Switch segments and set the size of the segment. */ + target_seg = target_seg->previous; + target_seg_index1 = _dl_eh_frame_update_init_seg (target_seg, + remaining_to_add); + } + + /* Determine where to store the data. */ + uintptr_t *pbase = &target_seg->bases[target_seg_index1 - 1]; + struct dl_eh_frame_info *pinfo + = _dl_eh_mappings_segment_infos (target_seg) + target_seg_index1 - 1; + + if (loaded_index1 == 0 + || (current_seg != NULL + && (loaded[loaded_index1 - 1]->l_map_start + < current_seg->bases[current_seg_index1 - 1]))) + { + /* Prefer mapping in current_seg. */ + assert (current_seg_index1 > 0); + *pbase = current_seg->bases[current_seg_index1 - 1]; + *pinfo = (_dl_eh_mappings_segment_infos (current_seg) + [current_seg_index1 - 1]); + --current_seg_index1; + } + else + { + /* Prefer newly loaded linkmap. */ + assert (loaded_index1 > 0); + struct link_map *l = loaded[loaded_index1 - 1]; + *pbase = l->l_map_start; + _dl_get_eh_frame (l, pinfo); + l->l_eh_frame_processed = 1; + --loaded_index1; + } + + /* Consume space in target segment. */ + --target_seg_index1; + + --remaining_to_add; + } + + /* Everything has been added. */ + assert (remaining_to_add == 0); + + /* The segment must have been filled up to the beginning. */ + assert (target_seg_index1 == 0); + + /* Prevent searching further into unused segments. */ + if (target_seg->previous != NULL) + target_seg->previous->size = 0; + + _dl_eh_mappings_end_update (); + return true; +} + +bool +_dl_find_eh_frame_update (struct link_map *new_map) +{ + /* Copy the newly-loaded link maps into an array for sorting. 
*/ + size_t count = 0; + for (struct link_map *l = new_map; l != NULL; l = l->l_next) + count += !l->l_eh_frame_processed; + struct link_map **map_array = malloc (count * sizeof (*map_array)); + if (map_array == NULL) + return false; + { + size_t i = 0; + for (struct link_map *l = new_map; l != NULL; l = l->l_next) + if (!l->l_eh_frame_processed) + map_array[i++] = l; + } + if (count == 0) + return true; + + _dl_eh_frame_link_map_sort (map_array, count); + bool ok = _dl_find_eh_frame_update_1 (map_array, count); + free (map_array); + return ok; +} + +void +_dl_find_eh_frame_dlclose (struct link_map *map) +{ + uint64_t start_version = _dl_eh_read_version_locked (); + uintptr_t map_start = map->l_map_start; + + + /* Directly patch the size information in the mapping to mark it as + unused. See the parallel lookup logic in _dl_find_eh_frame. Do + not check for previous dlclose at the same mapping address + because that cannot happen (there would have to be an + intermediate dlopen, which drops size-zero mappings). */ + for (struct dl_eh_mappings_segment *seg + = _dl_eh_mappings_active_segment (start_version); + seg != NULL && seg->size > 0; seg = seg->previous) + if (map_start >= seg->bases[0]) + { + size_t idx = _dl_eh_find_lower_bound (map_start, seg->bases, seg->size); + struct dl_eh_frame_info *info + = _dl_eh_mappings_segment_infos (seg) + idx; + if (idx == seg->size || map_start != seg->bases[idx]) + /* Ignore missing link maps because of potential shutdown + issues around __libc_freeres. */ + return; + + /* The update happens in-place, but given that we do not use + atomic accesses on the read side, update the version around + the update to trigger re-validation in concurrent + readers. */ + _dl_eh_mappings_begin_update (); + + /* Mark as closed. 
*/ + info->size = 0; + + _dl_eh_mappings_end_update_no_switch (); + } +} + +void +_dl_find_eh_frame_freeres (void) +{ + for (int idx = 0; idx < 2; ++idx) + { + for (struct dl_eh_mappings_segment *seg = _dl_eh_loaded_mappings[idx]; + seg != NULL; ) + { + struct dl_eh_mappings_segment *previous = seg->previous; + free (seg->to_free); + seg = previous; + } + /* Stop searching in shared objects. */ + _dl_eh_loaded_mappings[idx] = 0; + } +} diff --git a/elf/dl-find_eh_frame.h b/elf/dl-find_eh_frame.h new file mode 100644 index 0000000000..4bde9b14db --- /dev/null +++ b/elf/dl-find_eh_frame.h @@ -0,0 +1,90 @@ +/* Declarations for finding DWARF EH frame information. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef _DL_FIND_EH_FRAME_H +#define _DL_FIND_EH_FRAME_H + +#include +#include +#include +#include +#include + +/* Exception handling information for a mapping (shared object or load + segment). The base address is stored separately to increase cache + utilization. */ +struct dl_eh_frame_info +{ + /* Size of the mapping. dlclose sets this to zero without removing + the array element. */ + uintptr_t size; + void *eh_frame; /* Corresponding PT_GNU_EH_FRAME. 
*/ +#if DL_FIND_EH_FRAME_DBASE + void *dbase; +#endif + + /* Note: During the initialization phase, size is used for the base + address and eh_frame for a pointer to the link map. + _dl_eh_mappings_segment_finish computes the final data for a + segment after sorting by base address. */ +}; + +/* Extract the exception handling data from a link map and write it + to *INFO. If no such data is available, INFO->eh_frame will be + NULL. */ +static void __attribute__ ((unused)) +_dl_get_eh_frame (const struct link_map *l, struct dl_eh_frame_info *info) +{ + info->size = l->l_map_end - l->l_map_start; +#if DL_FIND_EH_FRAME_DBASE + info->dbase = (void *) l->l_info[DT_PLTGOT]; +#endif + + for (const ElfW(Phdr) *ph = l->l_phdr, *ph_end = l->l_phdr + l->l_phnum; + ph < ph_end; ++ph) + if (ph->p_type == PT_GNU_EH_FRAME) + { + info->eh_frame = (void *) (ph->p_vaddr + l->l_addr); + return; + } + + /* Object has no PT_GNU_EH_FRAME. */ + info->eh_frame = NULL; +} + + +/* Called by the dynamic linker to set up the data structures for the + initially loaded objects. This creates a few persistent + allocations, so it should be called with the minimal malloc. */ +void _dl_find_eh_frame_init (void) attribute_hidden; + +/* Called by dlopen/dlmopen to add new objects to the DWARF EH frame + data structures. NEW_MAP is the dlopen'ed link map. Link maps on + the l_next list are added if l_eh_frame_processed is 0. Needs to + be protected by loader write lock. Returns true on success, false + on malloc failure. */ +bool _dl_find_eh_frame_update (struct link_map *new_map) attribute_hidden; + +/* Called by dlclose to remove the link map from the DWARF EH frame + data structures. Needs to be protected by loader write lock. */ +void _dl_find_eh_frame_dlclose (struct link_map *l) attribute_hidden; + +/* Called from __libc_freeres to deallocate malloc'ed memory. 
*/ +void _dl_find_eh_frame_freeres (void) attribute_hidden; + +#endif /* _DL_FIND_EH_FRAME_H */ diff --git a/elf/dl-find_eh_frame_slow.h b/elf/dl-find_eh_frame_slow.h new file mode 100644 index 0000000000..355351ce28 --- /dev/null +++ b/elf/dl-find_eh_frame_slow.h @@ -0,0 +1,55 @@ +/* Locating DWARF unwind information using the dynamic loader. Slow version. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include + +#include + +/* This function is similar to _dl_find_eh_frame, but traverses the link + maps directly. It is used from audit modules before + _dl_find_eh_frame_init has been called, and for testing. */ +static void * +_dl_find_eh_frame_slow (void *pc +#if DL_FIND_EH_FRAME_DBASE + , void **dbase +#endif + ) +{ + ElfW(Addr) addr = (ElfW(Addr)) pc; + for (Lmid_t ns = 0; ns < GL(dl_nns); ++ns) + for (struct link_map *l = GL(dl_ns)[ns]._ns_loaded; l != NULL; + l = l->l_next) + if (addr >= l->l_map_start && addr < l->l_map_end + && (l->l_contiguous || _dl_addr_inside_object (l, addr))) + { + assert (ns == l->l_ns); + struct dl_eh_frame_info info; + _dl_get_eh_frame (l, &info); +#if DL_FIND_EH_FRAME_DBASE + *dbase = info.dbase; +#endif + return info.eh_frame; + } + + /* Object not found. 
*/ + return NULL; +} diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c index 68f305a6f9..7822730e7d 100644 --- a/elf/dl-libc_freeres.c +++ b/elf/dl-libc_freeres.c @@ -17,8 +17,10 @@ . */ #include +#include void __rtld_libc_freeres (void) { + _dl_find_eh_frame_freeres (); } diff --git a/elf/dl-open.c b/elf/dl-open.c index 6ea5dd2457..c889cc5c54 100644 --- a/elf/dl-open.c +++ b/elf/dl-open.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -749,6 +750,10 @@ dl_open_worker_begin (void *a) objects. */ update_scopes (new); + if (!_dl_find_eh_frame_update (new)) + _dl_signal_error (ENOMEM, new->l_libname->name, NULL, + N_ ("cannot allocate EH frame data")); + /* FIXME: It is unclear whether the order here is correct. Shouldn't new objects be made available for binding (and thus execution) only after there TLS data has been set up fully? diff --git a/elf/rtld.c b/elf/rtld.c index 847141e21d..c379da29cc 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -50,6 +50,7 @@ #include #include #include +#include #include @@ -2331,6 +2332,9 @@ dl_main (const ElfW(Phdr) *phdr, rtld_timer_stop (&relocate_time, start); } + /* Set up the EH frame lookup structures. */ + _dl_find_eh_frame_init (); + /* The library defining malloc has already been relocated due to prelinking. Resolve the malloc symbols for the dynamic loader. */ @@ -2439,6 +2443,9 @@ dl_main (const ElfW(Phdr) *phdr, re-relocation, we might call a user-supplied function (e.g. calloc from _dl_relocate_object) that uses TLS data. */ + /* Set up the EH frame lookup structures. */ + _dl_find_eh_frame_init (); + /* The malloc implementation has been relocated, so resolving its symbols (and potentially calling IFUNC resolvers) is safe at this point. 
*/ diff --git a/elf/tst-dl_find_eh_frame-mod1.c b/elf/tst-dl_find_eh_frame-mod1.c new file mode 100644 index 0000000000..d33ef56efd --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod1.c @@ -0,0 +1,10 @@ +char mod1_data; + +void +mod1_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod2.c b/elf/tst-dl_find_eh_frame-mod2.c new file mode 100644 index 0000000000..7feccf9f57 --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod2.c @@ -0,0 +1,10 @@ +char mod2_data; + +void +mod2_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod3.c b/elf/tst-dl_find_eh_frame-mod3.c new file mode 100644 index 0000000000..c1fc20ff9c --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod3.c @@ -0,0 +1,10 @@ +char mod3_data[4096]; + +void +mod3_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod4.c b/elf/tst-dl_find_eh_frame-mod4.c new file mode 100644 index 0000000000..27934e6011 --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod4.c @@ -0,0 +1,10 @@ +char mod4_data; + +void +mod4_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod5.c b/elf/tst-dl_find_eh_frame-mod5.c new file mode 100644 index 0000000000..3bdbda8ccd --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod5.c @@ -0,0 +1,11 @@ +/* Slightly larger to get different layouts. */ +char mod5_data[4096]; + +void +mod5_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. 
*/ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod6.c b/elf/tst-dl_find_eh_frame-mod6.c new file mode 100644 index 0000000000..f78acffb9e --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod6.c @@ -0,0 +1,11 @@ +/* Large to get different layouts. */ +char mod6_data[4096]; + +void +mod6_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod7.c b/elf/tst-dl_find_eh_frame-mod7.c new file mode 100644 index 0000000000..71353880da --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod7.c @@ -0,0 +1,10 @@ +char mod7_data; + +void +mod7_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod8.c b/elf/tst-dl_find_eh_frame-mod8.c new file mode 100644 index 0000000000..41f8f1ea09 --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod8.c @@ -0,0 +1,10 @@ +char mod8_data; + +void +mod8_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-mod9.c b/elf/tst-dl_find_eh_frame-mod9.c new file mode 100644 index 0000000000..dc2e7a20cb --- /dev/null +++ b/elf/tst-dl_find_eh_frame-mod9.c @@ -0,0 +1,10 @@ +char mod9_data; + +void +mod9_function (void (*f) (void)) +{ + /* Make sure this is not a tail call and unwind information is + therefore needed. */ + f (); + f (); +} diff --git a/elf/tst-dl_find_eh_frame-threads.c b/elf/tst-dl_find_eh_frame-threads.c new file mode 100644 index 0000000000..355241a354 --- /dev/null +++ b/elf/tst-dl_find_eh_frame-threads.c @@ -0,0 +1,237 @@ +/* _dl_find_eh_frame test with parallelism. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. 
+ + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +struct find_result +{ + void *eh_frame; +#if DL_FIND_EH_FRAME_DBASE + void *dbase; +#endif +}; + +/* _dl_find_eh_frame with uniform calling convention. */ +static struct find_result +find (void *pc) +{ + struct find_result result; +#if DL_FIND_EH_FRAME_DBASE + result.eh_frame = _dl_find_eh_frame (pc, &result.dbase); +#else + result.eh_frame = _dl_find_eh_frame (pc); +#endif + return result; +} + +/* Returns the soname for the test object NUMBER. */ +static char * +soname (int number) +{ + return xasprintf ("tst-dl_find_eh_frame-mod%d.so", number); +} + +/* Returns the data symbol name for the test object NUMBER. */ +static char * +symbol (int number) +{ + return xasprintf ("mod%d_data", number); +} + +struct verify_data +{ + char *soname; + struct link_map *link_map; + void *address; /* Address in the shared object. */ + void *map_start; /* Minimum covered address. */ + void *map_end; /* Maximum covered address. */ + struct dl_eh_frame_info info; + pthread_t thr; +}; + +/* Compare _dl_find_eh_frame result with struct dl_eh_frame_info. 
*/ +static void +check (struct find_result actual, struct verify_data *expected) +{ + if (actual.eh_frame != expected->info.eh_frame) + { + support_record_failure (); + printf ("%s: error: %s EH frame is %p, expected %p\n", + __FILE__, expected->soname, actual.eh_frame, + expected->info.eh_frame); + } + if (actual.eh_frame == NULL) + /* No result to check. */ + return; +#if DL_FIND_EH_FRAME_DBASE + if (actual.dbase != expected->info.dbase) + { + support_record_failure (); + printf ("%s: error: %s data base is %p, expected %p\n", + __FILE__, expected->soname, actual.dbase, expected->info.dbase); + } +#endif +} + +/* Request process termination after 0.3 seconds. */ +static bool exit_requested; +static void * +exit_thread (void *ignored) +{ + usleep (3 * 100 * 1000); + __atomic_store_n (&exit_requested, true, __ATOMIC_RELAXED); + return NULL; +} + +static void * +verify_thread (void *closure) +{ + struct verify_data *data = closure; + + while (!__atomic_load_n (&exit_requested, __ATOMIC_RELAXED)) + { + check (find (data->address), data); + check (find (data->map_start), data); + check (find (data->map_end), data); + } + + return NULL; +} + +/* Sets up the verification data, dlopen'ing shared object NUMBER, and + launches a verification thread. */ +static void +start_verify (int number, struct verify_data *data) +{ + data->soname = soname (number); + data->link_map = xdlopen (data->soname, RTLD_NOW); + _dl_get_eh_frame (data->link_map, &data->info); + char *sym = symbol (number); + data->address = xdlsym (data->link_map, sym); + data->map_start = (void *) data->link_map->l_map_start; + data->map_end = (void *) (data->link_map->l_map_end - 1); + free (sym); + data->thr = xpthread_create (NULL, verify_thread, data); +} + + +static int +do_test (void) +{ + struct verify_data data_mod2; + struct verify_data data_mod4; + struct verify_data data_mod7; + + /* Load the modules with gaps. 
*/ + { + void *mod1 = xdlopen ("tst-dl_find_eh_frame-mod1.so", RTLD_NOW); + start_verify (2, &data_mod2); + void *mod3 = xdlopen ("tst-dl_find_eh_frame-mod3.so", RTLD_NOW); + start_verify (4, &data_mod4); + void *mod5 = xdlopen ("tst-dl_find_eh_frame-mod5.so", RTLD_NOW); + void *mod6 = xdlopen ("tst-dl_find_eh_frame-mod6.so", RTLD_NOW); + start_verify (7, &data_mod7); + xdlclose (mod6); + xdlclose (mod5); + xdlclose (mod3); + xdlclose (mod1); + } + + /* Objects that are continuously opened and closed. */ + struct temp_object + { + char *soname; + char *symbol; + struct link_map *link_map; + void *address; + void *eh_frame; + } temp_objects[] = + { + { soname (1), symbol (1), }, + { soname (3), symbol (3), }, + { soname (5), symbol (5), }, + { soname (6), symbol (6), }, + { soname (8), symbol (8), }, + { soname (9), symbol (9), }, + }; + + pthread_t exit_thr = xpthread_create (NULL, exit_thread, NULL); + + struct drand48_data state; + srand48_r (1, &state); + while (!__atomic_load_n (&exit_requested, __ATOMIC_RELAXED)) + { + long int idx; + lrand48_r (&state, &idx); + idx %= array_length (temp_objects); + if (temp_objects[idx].link_map == NULL) + { + temp_objects[idx].link_map = xdlopen (temp_objects[idx].soname, + RTLD_NOW); + temp_objects[idx].address = xdlsym (temp_objects[idx].link_map, + temp_objects[idx].symbol); + temp_objects[idx].eh_frame + = find (temp_objects[idx].address).eh_frame; + } + else + { + xdlclose (temp_objects[idx].link_map); + temp_objects[idx].link_map = NULL; + void *eh_frame = find (temp_objects[idx].address).eh_frame; + if (eh_frame != NULL) + { + support_record_failure (); + printf ("%s: error: %s EH frame is %p after dlclose, was %p\n", + __FILE__, temp_objects[idx].soname, eh_frame, + temp_objects[idx].eh_frame); + } + } + } + + xpthread_join (data_mod2.thr); + xpthread_join (data_mod4.thr); + xpthread_join (data_mod7.thr); + xpthread_join (exit_thr); + + for (size_t i = 0; i < array_length (temp_objects); ++i) + { + free 
(temp_objects[i].soname); + free (temp_objects[i].symbol); + if (temp_objects[i].link_map != NULL) + xdlclose (temp_objects[i].link_map); + } + + free (data_mod2.soname); + free (data_mod4.soname); + xdlclose (data_mod4.link_map); + free (data_mod7.soname); + xdlclose (data_mod7.link_map); + + return 0; +} + +#include diff --git a/elf/tst-dl_find_eh_frame.c b/elf/tst-dl_find_eh_frame.c new file mode 100644 index 0000000000..a532db6cda --- /dev/null +++ b/elf/tst-dl_find_eh_frame.c @@ -0,0 +1,179 @@ +/* Basic tests for _dl_find_eh_frame. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include +#include +#include + +/* Use data objects for testing, so that it is not necessary to decode + function descriptors on architectures that have them. */ +static char main_program_data; + +struct find_result +{ + void *eh_frame; +#if DL_FIND_EH_FRAME_DBASE + void *dbase; +#endif +}; + +/* _dl_find_eh_frame with uniform calling convention. */ +static struct find_result +find (void *pc) +{ + struct find_result result; +#if DL_FIND_EH_FRAME_DBASE + result.eh_frame = _dl_find_eh_frame (pc, &result.dbase); +#else + result.eh_frame = _dl_find_eh_frame (pc); +#endif + return result; +} + +/* Compare _dl_find_eh_frame result with struct dl_eh_frame_info. 
*/ +static void +check (struct find_result actual, struct dl_eh_frame_info expected, int line) +{ + if (actual.eh_frame != expected.eh_frame) + { + support_record_failure (); + printf ("%s:%d: error: EH frame is %p, expected %p\n", + __FILE__, line, actual.eh_frame, expected.eh_frame); + } + if (actual.eh_frame == NULL) + /* No result to check. */ + return; +#if DL_FIND_EH_FRAME_DBASE + if (actual.dbase != expected.dbase) + { + support_record_failure (); + printf ("%s:%d: error: data base is %p, expected %p\n", + __FILE__, line, actual.dbase, expected.dbase); + } +#endif +} + +/* Check that unwind data for the main executable and the dynamic + linker can be found. */ +static void +check_initial (void) +{ + /* Avoid direct reference, which could lead to copy relocations. */ + struct r_debug *debug = xdlsym (NULL, "_r_debug"); + TEST_VERIFY_EXIT (debug != NULL); + char **tzname = xdlsym (NULL, "tzname"); + + /* The main executable has an unnamed link map. */ + struct link_map *main_map = (struct link_map *) debug->r_map; + TEST_COMPARE_STRING (main_map->l_name, ""); + + /* The link map of the dynamic linker. */ + struct link_map *rtld_map = xdlopen (LD_SO, RTLD_LAZY | RTLD_NOLOAD); + TEST_VERIFY_EXIT (rtld_map != NULL); + + /* The link map of libc.so. */ + struct link_map *libc_map = xdlopen (LIBC_SO, RTLD_LAZY | RTLD_NOLOAD); + TEST_VERIFY_EXIT (libc_map != NULL); + + struct dl_eh_frame_info expected; + + /* Data in the main program. */ + _dl_get_eh_frame (main_map, &expected); + check (find (&main_program_data), expected, __LINE__); + /* Corner cases for the mapping. */ + check (find ((void *) main_map->l_map_start), expected, __LINE__); + check (find ((void *) (main_map->l_map_end - 1)), expected, __LINE__); + + /* Data in the dynamic loader. 
*/ + _dl_get_eh_frame (rtld_map, &expected); + check (find (debug), expected, __LINE__); + check (find ((void *) rtld_map->l_map_start), expected, __LINE__); + check (find ((void *) (rtld_map->l_map_end - 1)), expected, __LINE__); + + /* Data in libc. */ + _dl_get_eh_frame (libc_map, &expected); + check (find (tzname), expected, __LINE__); + check (find ((void *) libc_map->l_map_start), expected, __LINE__); + check (find ((void *) (libc_map->l_map_end - 1)), expected, __LINE__); +} + +static int +do_test (void) +{ + printf ("info: main program unwind data: %p\n", + find (&main_program_data).eh_frame); + + check_initial (); + + /* dlopen-based test. First an object that can be dlclosed. */ + struct link_map *mod1 = xdlopen ("tst-dl_find_eh_frame-mod1.so", RTLD_NOW); + void *mod1_data = xdlsym (mod1, "mod1_data"); + void *map_start = (void *) mod1->l_map_start; + void *map_end = (void *) (mod1->l_map_end - 1); + check_initial (); + + struct dl_eh_frame_info expected; + _dl_get_eh_frame (mod1, &expected); + check (find (mod1_data), expected, __LINE__); + check (find (map_start), expected, __LINE__); + check (find (map_end), expected, __LINE__); + + /* Unloading must make the unwinding data unavailable. */ + xdlclose (mod1); + check_initial (); + expected.eh_frame = NULL; + check (find (mod1_data), expected, __LINE__); + check (find (map_start), expected, __LINE__); + check (find (map_end), expected, __LINE__); + + /* Now try a NODELETE load. */ + struct link_map *mod2 = xdlopen ("tst-dl_find_eh_frame-mod2.so", RTLD_NOW); + void *mod2_data = xdlsym (mod2, "mod2_data"); + map_start = (void *) mod2->l_map_start; + map_end = (void *) (mod2->l_map_end - 1); + check_initial (); + _dl_get_eh_frame (mod2, &expected); + check (find (mod2_data), expected, __LINE__); + check (find (map_start), expected, __LINE__); + check (find (map_end), expected, __LINE__); + dlclose (mod2); /* Does nothing due to NODELETE. 
*/ + check_initial (); + check (find (mod2_data), expected, __LINE__); + check (find (map_start), expected, __LINE__); + check (find (map_end), expected, __LINE__); + + /* Now load again the first module. */ + mod1 = xdlopen ("tst-dl_find_eh_frame-mod1.so", RTLD_NOW); + mod1_data = xdlsym (mod1, "mod1_data"); + map_start = (void *) mod1->l_map_start; + map_end = (void *) (mod1->l_map_end - 1); + check_initial (); + _dl_get_eh_frame (mod1, &expected); + check (find (mod1_data), expected, __LINE__); + check (find (map_start), expected, __LINE__); + check (find (map_end), expected, __LINE__); + + return 0; +} + +#include diff --git a/include/atomic_wide_counter.h b/include/atomic_wide_counter.h index 31f009d5e6..d1c40cd85f 100644 --- a/include/atomic_wide_counter.h +++ b/include/atomic_wide_counter.h @@ -30,6 +30,12 @@ __atomic_wide_counter_load_relaxed (__atomic_wide_counter *c) return atomic_load_relaxed (&c->__value64); } +static inline uint64_t +__atomic_wide_counter_load_acquire (__atomic_wide_counter *c) +{ + return atomic_load_acquire (&c->__value64); +} + static inline uint64_t __atomic_wide_counter_fetch_add_relaxed (__atomic_wide_counter *c, unsigned int val) @@ -64,6 +70,14 @@ __atomic_wide_counter_fetch_xor_release (__atomic_wide_counter *c, uint64_t __atomic_wide_counter_load_relaxed (__atomic_wide_counter *c) attribute_hidden; +static inline uint64_t +__atomic_wide_counter_load_acquire (__atomic_wide_counter *c) +{ + uint64_t r = __atomic_wide_counter_load_relaxed (c); + atomic_thread_fence_acquire (); + return r; +} + uint64_t __atomic_wide_counter_fetch_add_relaxed (__atomic_wide_counter *c, unsigned int op) attribute_hidden; diff --git a/include/bits/dlfcn_eh_frame.h b/include/bits/dlfcn_eh_frame.h new file mode 100644 index 0000000000..3f694c45bc --- /dev/null +++ b/include/bits/dlfcn_eh_frame.h @@ -0,0 +1 @@ +#include_next diff --git a/include/link.h b/include/link.h index c1c382ccfa..fa2ecc2f4a 100644 --- a/include/link.h +++ b/include/link.h @@ 
-211,6 +211,9 @@ struct link_map freed, ie. not allocated with the dummy malloc in ld.so. */ unsigned int l_ld_readonly:1; /* Nonzero if dynamic section is readonly. */ + unsigned int l_eh_frame_processed:1; /* Zero if _dl_eh_frame_update + needs to process this + lt_library map. */ /* NODELETE status of the map. Only valid for maps of type lt_loaded. Lazy binding sets l_nodelete_active directly, diff --git a/manual/Makefile b/manual/Makefile index e83444341e..31678681ef 100644 --- a/manual/Makefile +++ b/manual/Makefile @@ -39,7 +39,7 @@ chapters = $(addsuffix .texi, \ pipe socket terminal syslog math arith time \ resource setjmp signal startup process ipc job \ nss users sysinfo conf crypt debug threads \ - probes tunables) + dynlink probes tunables) appendices = lang.texi header.texi install.texi maint.texi platform.texi \ contrib.texi licenses = freemanuals.texi lgpl-2.1.texi fdl-1.3.texi diff --git a/manual/dynlink.texi b/manual/dynlink.texi new file mode 100644 index 0000000000..8969a8029d --- /dev/null +++ b/manual/dynlink.texi @@ -0,0 +1,69 @@ +@node Dynamic Linker +@c @node Dynamic Linker, Internal Probes, Threads, Top +@c %MENU% Loading programs and shared objects. +@chapter Dynamic Linker +@cindex dynamic linker +@cindex dynamic loader + +The @dfn{dynamic linker} is responsible for loading dynamically linked +programs and their dependencies (in the form of shared objects). The +dynamic linker in @theglibc{} also supports loading shared objects (such +as plugins) later at run time. + +Dynamic linkers are sometimes called @dfn{dynamic loaders}. + +@menu +* Dynamic Linker Introspection:: Interfaces for querying mapping information. +@end menu + +@node Dynamic Linker Introspection +@section Dynamic Linker Introspection + +@Theglibc{} provides various functions for querying information from the +dynamic linker. 
+ +@deftypefun {void *} _dl_find_eh_frame (void *@var{pc}) +@standards{GNU, dlfcn.h} +@safety{@mtsafe{}@assafe{}@acsafe{}} +This function returns a pointer to the unwinding information for the +object that contains the program code @var{pc}. If the platform uses +DWARF unwinding information, this is the in-memory address of the +@code{PT_GNU_EH_FRAME} segment. + +In case @var{pc} resides in an object that lacks unwinding information, +the function returns @code{NULL}. If no object matches @var{pc}, +@code{NULL} is returned as well. + +@code{_dl_find_eh_frame} itself is thread-safe. However, if the +application invokes @code{dlclose} for the object that contains @var{pc} +concurrently with @code{_dl_find_eh_frame} or after the call returns, +accessing the unwinding data for that object is not safe. Therefore, +the application needs to ensure by other means (e.g., by convention) +that @var{pc} remains a valid code address while the unwinding +information is processed. + +This function is a GNU extension. +@end deftypefun + +@deftypevr Macro int DL_FIND_EH_FRAME_DBASE +@standards{GNU, dlfcn.h} +On most targets, this macro is defined as @code{0}. If it is defined to +@code{1}, the @code{_dl_find_eh_frame} function expects a second +argument of type @code{void **}. In this case, a pointer to a +@code{void *} object must be passed, and if @code{_dl_find_eh_frame} +finds any unwinding information, it writes the base address for +@code{DW_EH_PE_datarel} DWARF encodings to this location. + +This macro is a GNU extension.
+@end deftypevr + +@c FIXME these are undocumented: +@c dladdr +@c dladdr1 +@c dlclose +@c dlerror +@c dlinfo +@c dlmopen +@c dlopen +@c dlsym +@c dlvsym diff --git a/manual/libdl.texi b/manual/libdl.texi deleted file mode 100644 index e3fe0452d9..0000000000 --- a/manual/libdl.texi +++ /dev/null @@ -1,10 +0,0 @@ -@c FIXME these are undocumented: -@c dladdr -@c dladdr1 -@c dlclose -@c dlerror -@c dlinfo -@c dlmopen -@c dlopen -@c dlsym -@c dlvsym diff --git a/manual/probes.texi b/manual/probes.texi index 4aae76b819..ee019e6517 100644 --- a/manual/probes.texi +++ b/manual/probes.texi @@ -1,5 +1,5 @@ @node Internal Probes -@c @node Internal Probes, Tunables, Threads, Top +@c @node Internal Probes, Tunables, Dynamic Linker, Top @c %MENU% Probes to monitor libc internal behavior @chapter Internal probes diff --git a/manual/threads.texi b/manual/threads.texi index 06b6b277a1..7f166bfa87 100644 --- a/manual/threads.texi +++ b/manual/threads.texi @@ -1,5 +1,5 @@ @node Threads -@c @node Threads, Internal Probes, Debugging Support, Top +@c @node Threads, Dynamic Linker, Debugging Support, Top @c %MENU% Functions, constants, and data types for working with threads @chapter Threads @cindex threads diff --git a/sysdeps/i386/bits/dlfcn_eh_frame.h b/sysdeps/i386/bits/dlfcn_eh_frame.h new file mode 100644 index 0000000000..98f6b37029 --- /dev/null +++ b/sysdeps/i386/bits/dlfcn_eh_frame.h @@ -0,0 +1,34 @@ +/* i386 definitions for finding unwind information using ld.so. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version.
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#ifndef _DLFCN_H +# error "Never use <bits/dlfcn_eh_frame.h> directly; include <dlfcn.h> instead." +#endif + +/* This implementation uses a DBASE pointer argument in + _dl_find_eh_frame. */ +#define DL_FIND_EH_FRAME_DBASE 1 + +__BEGIN_DECLS +/* If PC points into an object that has a PT_GNU_EH_FRAME segment, + return the pointer to the start of that segment in memory, and + *DBASE is updated with the base address for DW_EH_PE_datarel DWARF + encodings. If no corresponding object exists or the object has no + such segment, returns NULL. */ +void *_dl_find_eh_frame (void *__pc, void **__dbase) __THROW __nonnull ((2)); +__END_DECLS diff --git a/sysdeps/mach/hurd/i386/ld.abilist b/sysdeps/mach/hurd/i386/ld.abilist index 7e20c5e7ce..786cd93810 100644 --- a/sysdeps/mach/hurd/i386/ld.abilist +++ b/sysdeps/mach/hurd/i386/ld.abilist @@ -16,3 +16,4 @@ GLIBC_2.2.6 _r_debug D 0x14 GLIBC_2.2.6 abort F GLIBC_2.3 ___tls_get_addr F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/nios2/bits/dlfcn_eh_frame.h b/sysdeps/nios2/bits/dlfcn_eh_frame.h new file mode 100644 index 0000000000..3acc741c6e --- /dev/null +++ b/sysdeps/nios2/bits/dlfcn_eh_frame.h @@ -0,0 +1,34 @@ +/* nios2 definitions for finding unwind information using ld.so. + Copyright (C) 2021 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version.
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#ifndef _DLFCN_H +# error "Never use <bits/dlfcn_eh_frame.h> directly; include <dlfcn.h> instead." +#endif + +/* This implementation uses a DBASE pointer argument in + _dl_find_eh_frame. */ +#define DL_FIND_EH_FRAME_DBASE 1 + +__BEGIN_DECLS +/* If PC points into an object that has a PT_GNU_EH_FRAME segment, + return the pointer to the start of that segment in memory, and + *DBASE is updated with the base address for DW_EH_PE_datarel DWARF + encodings. If no corresponding object exists or the object has no + such segment, returns NULL. */ +void *_dl_find_eh_frame (void *__pc, void **__dbase) __THROW __nonnull ((2)); +__END_DECLS diff --git a/sysdeps/unix/sysv/linux/aarch64/ld.abilist b/sysdeps/unix/sysv/linux/aarch64/ld.abilist index 80b2fe6725..4655d8d00f 100644 --- a/sysdeps/unix/sysv/linux/aarch64/ld.abilist +++ b/sysdeps/unix/sysv/linux/aarch64/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.17 __stack_chk_guard D 0x8 GLIBC_2.17 __tls_get_addr F GLIBC_2.17 _dl_mcount F GLIBC_2.17 _r_debug D 0x28 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/alpha/ld.abilist b/sysdeps/unix/sysv/linux/alpha/ld.abilist index 98a03f611f..21f873600a 100644 --- a/sysdeps/unix/sysv/linux/alpha/ld.abilist +++ b/sysdeps/unix/sysv/linux/alpha/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.0 _r_debug D 0x28 GLIBC_2.1 __libc_stack_end D 0x8 GLIBC_2.1 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x8 diff --git a/sysdeps/unix/sysv/linux/arc/ld.abilist b/sysdeps/unix/sysv/linux/arc/ld.abilist index 048f17c848..b1b719ca61 100644 --- a/sysdeps/unix/sysv/linux/arc/ld.abilist +++
b/sysdeps/unix/sysv/linux/arc/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.32 __stack_chk_guard D 0x4 GLIBC_2.32 __tls_get_addr F GLIBC_2.32 _dl_mcount F GLIBC_2.32 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/arm/be/ld.abilist b/sysdeps/unix/sysv/linux/arm/be/ld.abilist index cc8825c3bc..973f414c35 100644 --- a/sysdeps/unix/sysv/linux/arm/be/ld.abilist +++ b/sysdeps/unix/sysv/linux/arm/be/ld.abilist @@ -1,3 +1,4 @@ +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __libc_stack_end D 0x4 GLIBC_2.4 __stack_chk_guard D 0x4 GLIBC_2.4 __tls_get_addr F diff --git a/sysdeps/unix/sysv/linux/arm/le/ld.abilist b/sysdeps/unix/sysv/linux/arm/le/ld.abilist index cc8825c3bc..973f414c35 100644 --- a/sysdeps/unix/sysv/linux/arm/le/ld.abilist +++ b/sysdeps/unix/sysv/linux/arm/le/ld.abilist @@ -1,3 +1,4 @@ +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __libc_stack_end D 0x4 GLIBC_2.4 __stack_chk_guard D 0x4 GLIBC_2.4 __tls_get_addr F diff --git a/sysdeps/unix/sysv/linux/csky/ld.abilist b/sysdeps/unix/sysv/linux/csky/ld.abilist index 564ac09737..bba19877b0 100644 --- a/sysdeps/unix/sysv/linux/csky/ld.abilist +++ b/sysdeps/unix/sysv/linux/csky/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.29 __stack_chk_guard D 0x4 GLIBC_2.29 __tls_get_addr F GLIBC_2.29 _dl_mcount F GLIBC_2.29 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/hppa/ld.abilist b/sysdeps/unix/sysv/linux/hppa/ld.abilist index d155a59843..dcee0ece2a 100644 --- a/sysdeps/unix/sysv/linux/hppa/ld.abilist +++ b/sysdeps/unix/sysv/linux/hppa/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.2 __libc_stack_end D 0x4 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x14 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/i386/ld.abilist b/sysdeps/unix/sysv/linux/i386/ld.abilist index 0478e22071..0c4c02d18e 100644 --- a/sysdeps/unix/sysv/linux/i386/ld.abilist +++ b/sysdeps/unix/sysv/linux/i386/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.1 
__libc_stack_end D 0x4 GLIBC_2.1 _dl_mcount F GLIBC_2.3 ___tls_get_addr F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/ia64/ld.abilist b/sysdeps/unix/sysv/linux/ia64/ld.abilist index 33f91199bf..6ae0cb97b4 100644 --- a/sysdeps/unix/sysv/linux/ia64/ld.abilist +++ b/sysdeps/unix/sysv/linux/ia64/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.2 __libc_stack_end D 0x8 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x28 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/m68k/coldfire/ld.abilist b/sysdeps/unix/sysv/linux/m68k/coldfire/ld.abilist index cc8825c3bc..973f414c35 100644 --- a/sysdeps/unix/sysv/linux/m68k/coldfire/ld.abilist +++ b/sysdeps/unix/sysv/linux/m68k/coldfire/ld.abilist @@ -1,3 +1,4 @@ +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __libc_stack_end D 0x4 GLIBC_2.4 __stack_chk_guard D 0x4 GLIBC_2.4 __tls_get_addr F diff --git a/sysdeps/unix/sysv/linux/m68k/m680x0/ld.abilist b/sysdeps/unix/sysv/linux/m68k/m680x0/ld.abilist index 3ba474c27f..1719c1bff0 100644 --- a/sysdeps/unix/sysv/linux/m68k/m680x0/ld.abilist +++ b/sysdeps/unix/sysv/linux/m68k/m680x0/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.0 _r_debug D 0x14 GLIBC_2.1 __libc_stack_end D 0x4 GLIBC_2.1 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/microblaze/ld.abilist b/sysdeps/unix/sysv/linux/microblaze/ld.abilist index a4933c3541..b915864bd6 100644 --- a/sysdeps/unix/sysv/linux/microblaze/ld.abilist +++ b/sysdeps/unix/sysv/linux/microblaze/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.18 __stack_chk_guard D 0x4 GLIBC_2.18 __tls_get_addr F GLIBC_2.18 _dl_mcount F GLIBC_2.18 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/mips/mips32/ld.abilist b/sysdeps/unix/sysv/linux/mips/mips32/ld.abilist index be09641a48..6f85418bc8 100644 --- a/sysdeps/unix/sysv/linux/mips/mips32/ld.abilist +++ 
b/sysdeps/unix/sysv/linux/mips/mips32/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.0 _r_debug D 0x14 GLIBC_2.2 __libc_stack_end D 0x4 GLIBC_2.2 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/mips/mips64/n32/ld.abilist b/sysdeps/unix/sysv/linux/mips/mips64/n32/ld.abilist index be09641a48..6f85418bc8 100644 --- a/sysdeps/unix/sysv/linux/mips/mips64/n32/ld.abilist +++ b/sysdeps/unix/sysv/linux/mips/mips64/n32/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.0 _r_debug D 0x14 GLIBC_2.2 __libc_stack_end D 0x4 GLIBC_2.2 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/mips/mips64/n64/ld.abilist b/sysdeps/unix/sysv/linux/mips/mips64/n64/ld.abilist index 1ea36e13f2..b2621aad3b 100644 --- a/sysdeps/unix/sysv/linux/mips/mips64/n64/ld.abilist +++ b/sysdeps/unix/sysv/linux/mips/mips64/n64/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.0 _r_debug D 0x28 GLIBC_2.2 __libc_stack_end D 0x8 GLIBC_2.2 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x8 diff --git a/sysdeps/unix/sysv/linux/nios2/ld.abilist b/sysdeps/unix/sysv/linux/nios2/ld.abilist index 52178802dd..7a8bd00445 100644 --- a/sysdeps/unix/sysv/linux/nios2/ld.abilist +++ b/sysdeps/unix/sysv/linux/nios2/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.21 __stack_chk_guard D 0x4 GLIBC_2.21 __tls_get_addr F GLIBC_2.21 _dl_mcount F GLIBC_2.21 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist b/sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist index 4bbfba7a61..f3a533cbdb 100644 --- a/sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/ld.abilist @@ -4,3 +4,4 @@ GLIBC_2.1 _dl_mcount F GLIBC_2.22 __tls_get_addr_opt F GLIBC_2.23 __parse_hwcap_and_convert_at_platform F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame 
F diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/be/ld.abilist b/sysdeps/unix/sysv/linux/powerpc/powerpc64/be/ld.abilist index 283fb4510b..63ab18b70f 100644 --- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/be/ld.abilist +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/be/ld.abilist @@ -4,3 +4,4 @@ GLIBC_2.3 __libc_stack_end D 0x8 GLIBC_2.3 __tls_get_addr F GLIBC_2.3 _dl_mcount F GLIBC_2.3 _r_debug D 0x28 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/le/ld.abilist b/sysdeps/unix/sysv/linux/powerpc/powerpc64/le/ld.abilist index b1f313c7cd..1fec480d9d 100644 --- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/le/ld.abilist +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/le/ld.abilist @@ -4,3 +4,4 @@ GLIBC_2.17 _dl_mcount F GLIBC_2.17 _r_debug D 0x28 GLIBC_2.22 __tls_get_addr_opt F GLIBC_2.23 __parse_hwcap_and_convert_at_platform F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/riscv/rv32/ld.abilist b/sysdeps/unix/sysv/linux/riscv/rv32/ld.abilist index 94ca64c43d..7cec190630 100644 --- a/sysdeps/unix/sysv/linux/riscv/rv32/ld.abilist +++ b/sysdeps/unix/sysv/linux/riscv/rv32/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.33 __stack_chk_guard D 0x4 GLIBC_2.33 __tls_get_addr F GLIBC_2.33 _dl_mcount F GLIBC_2.33 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/riscv/rv64/ld.abilist b/sysdeps/unix/sysv/linux/riscv/rv64/ld.abilist index 845f356c3c..81795b588d 100644 --- a/sysdeps/unix/sysv/linux/riscv/rv64/ld.abilist +++ b/sysdeps/unix/sysv/linux/riscv/rv64/ld.abilist @@ -3,3 +3,4 @@ GLIBC_2.27 __stack_chk_guard D 0x8 GLIBC_2.27 __tls_get_addr F GLIBC_2.27 _dl_mcount F GLIBC_2.27 _r_debug D 0x28 +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/s390/s390-32/ld.abilist b/sysdeps/unix/sysv/linux/s390/s390-32/ld.abilist index b56f005beb..34d9165bdb 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-32/ld.abilist +++ b/sysdeps/unix/sysv/linux/s390/s390-32/ld.abilist @@ -2,3 
+2,4 @@ GLIBC_2.0 _r_debug D 0x14 GLIBC_2.1 __libc_stack_end D 0x4 GLIBC_2.1 _dl_mcount F GLIBC_2.3 __tls_get_offset F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/s390/s390-64/ld.abilist b/sysdeps/unix/sysv/linux/s390/s390-64/ld.abilist index 6f788a086d..1175537ca3 100644 --- a/sysdeps/unix/sysv/linux/s390/s390-64/ld.abilist +++ b/sysdeps/unix/sysv/linux/s390/s390-64/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.2 __libc_stack_end D 0x8 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x28 GLIBC_2.3 __tls_get_offset F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/sh/be/ld.abilist b/sysdeps/unix/sysv/linux/sh/be/ld.abilist index d155a59843..dcee0ece2a 100644 --- a/sysdeps/unix/sysv/linux/sh/be/ld.abilist +++ b/sysdeps/unix/sysv/linux/sh/be/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.2 __libc_stack_end D 0x4 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x14 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/sh/le/ld.abilist b/sysdeps/unix/sysv/linux/sh/le/ld.abilist index d155a59843..dcee0ece2a 100644 --- a/sysdeps/unix/sysv/linux/sh/le/ld.abilist +++ b/sysdeps/unix/sysv/linux/sh/le/ld.abilist @@ -2,4 +2,5 @@ GLIBC_2.2 __libc_stack_end D 0x4 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x14 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F GLIBC_2.4 __stack_chk_guard D 0x4 diff --git a/sysdeps/unix/sysv/linux/sparc/sparc32/ld.abilist b/sysdeps/unix/sysv/linux/sparc/sparc32/ld.abilist index 0c6610e3c2..21bd7308c0 100644 --- a/sysdeps/unix/sysv/linux/sparc/sparc32/ld.abilist +++ b/sysdeps/unix/sysv/linux/sparc/sparc32/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.0 _r_debug D 0x14 GLIBC_2.1 __libc_stack_end D 0x4 GLIBC_2.1 _dl_mcount F GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/sparc/sparc64/ld.abilist b/sysdeps/unix/sysv/linux/sparc/sparc64/ld.abilist index 33f91199bf..6ae0cb97b4 100644 --- 
a/sysdeps/unix/sysv/linux/sparc/sparc64/ld.abilist +++ b/sysdeps/unix/sysv/linux/sparc/sparc64/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.2 __libc_stack_end D 0x8 GLIBC_2.2 _dl_mcount F GLIBC_2.2 _r_debug D 0x28 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/x86_64/64/ld.abilist b/sysdeps/unix/sysv/linux/x86_64/64/ld.abilist index d3cdf7611e..8a8a9e4bb3 100644 --- a/sysdeps/unix/sysv/linux/x86_64/64/ld.abilist +++ b/sysdeps/unix/sysv/linux/x86_64/64/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.2.5 __libc_stack_end D 0x8 GLIBC_2.2.5 _dl_mcount F GLIBC_2.2.5 _r_debug D 0x28 GLIBC_2.3 __tls_get_addr F +GLIBC_2.35 _dl_find_eh_frame F diff --git a/sysdeps/unix/sysv/linux/x86_64/x32/ld.abilist b/sysdeps/unix/sysv/linux/x86_64/x32/ld.abilist index c70bccf782..99bd4f5197 100644 --- a/sysdeps/unix/sysv/linux/x86_64/x32/ld.abilist +++ b/sysdeps/unix/sysv/linux/x86_64/x32/ld.abilist @@ -2,3 +2,4 @@ GLIBC_2.16 __libc_stack_end D 0x4 GLIBC_2.16 __tls_get_addr F GLIBC_2.16 _dl_mcount F GLIBC_2.16 _r_debug D 0x14 +GLIBC_2.35 _dl_find_eh_frame F