From patchwork Sun Feb  2 21:12:49 2025
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 105874
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v4 01/14] elf: Default to ENOENT error in _dl_map_new_object
Date: Sun, 02 Feb 2025 22:12:49 +0100
The errno leak was originally observed as elf/tst-rtld-does-not-exist,
elf/tst-rtld-dash-dash test failures on an AArch64 system without
protection keys support: the ENOSPC error from the pkey_alloc system
call leaked into the printed error message, causing the test failure.
Without the pkey_alloc call, errno is 0.  Setting ENOENT
unconditionally still changes the error message, so update the test
expectations accordingly.

Reviewed-by: Adhemerval Zanella
---
 elf/dl-load.c                  | 5 +++++
 elf/tst-rtld-dash-dash.sh      | 2 +-
 elf/tst-rtld-does-not-exist.sh | 2 +-
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/elf/dl-load.c b/elf/dl-load.c
index 4998652adf..e328d678b4 100644
--- a/elf/dl-load.c
+++ b/elf/dl-load.c
@@ -1939,6 +1939,11 @@ _dl_map_new_object (struct link_map *loader, const char *name,
   /* Will be true if we found a DSO which is of the other ELF class.  */
   bool found_other_class = false;

+  /* Default errno to ENOENT.  The code may not end up doing any
+     system calls before setting fd to -1 because there are no search
+     paths.  */
+  __set_errno (ENOENT);
+
 #ifdef SHARED
   /* Give the auditing libraries a chance to change the name before we
      try anything.  */
diff --git a/elf/tst-rtld-dash-dash.sh b/elf/tst-rtld-dash-dash.sh
index 5b00110d38..cf31552168 100644
--- a/elf/tst-rtld-dash-dash.sh
+++ b/elf/tst-rtld-dash-dash.sh
@@ -31,7 +31,7 @@ echo "output (with expected error):"
 cat "$tmp_out"

 if test $status -eq 127 \
-   && grep -q "^--program-does-not-exist: error while loading shared libraries: --program-does-not-exist: cannot open shared object file$" "$tmp_out" \
+   && grep -q "^--program-does-not-exist: error while loading shared libraries: --program-does-not-exist: cannot open shared object file: No such file or directory$" "$tmp_out" \
    && test "$(wc -l < "$tmp_out")" -eq 1 ; then
   status=0
 else
diff --git a/elf/tst-rtld-does-not-exist.sh b/elf/tst-rtld-does-not-exist.sh
index 9f404aecbf..418cfb445e 100644
--- a/elf/tst-rtld-does-not-exist.sh
+++ b/elf/tst-rtld-does-not-exist.sh
@@ -31,7 +31,7 @@ echo "output (with expected error):"
 cat "$tmp_out"

 if test $status -eq 127 \
-   && grep -q "^program-does-not-exist: error while loading shared libraries: program-does-not-exist: cannot open shared object file$" "$tmp_out" \
+   && grep -q "^program-does-not-exist: error while loading shared libraries: program-does-not-exist: cannot open shared object file: No such file or directory$" "$tmp_out" \
    && test "$(wc -l < "$tmp_out")" -eq 1 ; then
   status=0
 else
From patchwork Sun Feb  2 21:12:55 2025
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 105877
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v4 02/14] aarch64: Enable internal use of memory protection keys
Message-ID: <54aac713a1f1c3d25c15424e0a1245e5477aea95.1738530302.git.fweimer@redhat.com>
Date: Sun, 02 Feb 2025 22:12:55 +0100

This adds hidden prototypes to align with commit 7e21a65c58cc91b3ba
("misc: Enable internal use of memory protection keys").

Reviewed-by: Adhemerval Zanella
---
 sysdeps/unix/sysv/linux/aarch64/pkey_get.c | 4 +++-
 sysdeps/unix/sysv/linux/aarch64/pkey_set.c | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/sysdeps/unix/sysv/linux/aarch64/pkey_get.c b/sysdeps/unix/sysv/linux/aarch64/pkey_get.c
index 2dd9d8165e..73921e21e3 100644
--- a/sysdeps/unix/sysv/linux/aarch64/pkey_get.c
+++ b/sysdeps/unix/sysv/linux/aarch64/pkey_get.c
@@ -21,7 +21,7 @@
 #include

 int
-pkey_get (int key)
+__pkey_get (int key)
 {
   if (key < 0 || key > 15)
     {
@@ -71,3 +71,5 @@ pkey_get (int key)
   return PKEY_DISABLE_ACCESS;
 }
+libc_hidden_def (__pkey_get)
+weak_alias (__pkey_get, pkey_get)
diff --git a/sysdeps/unix/sysv/linux/aarch64/pkey_set.c b/sysdeps/unix/sysv/linux/aarch64/pkey_set.c
index a521cc00da..45a4992997 100644
--- a/sysdeps/unix/sysv/linux/aarch64/pkey_set.c
+++ b/sysdeps/unix/sysv/linux/aarch64/pkey_set.c
@@ -24,7 +24,7 @@
    PKEY_DISABLE_WRITE | PKEY_DISABLE_EXECUTE | PKEY_DISABLE_READ)

 int
-pkey_set (int key, unsigned int restrictions)
+__pkey_set (int key, unsigned int restrictions)
 {
   if (key < 0 || key > 15 || restrictions > MAX_PKEY_RIGHTS)
     {
@@ -111,3 +111,5 @@ pkey_set (int key, unsigned int restrictions)
   pkey_write (por_el0);
   return 0;
 }
+libc_hidden_def (__pkey_set)
+weak_alias (__pkey_set, pkey_set)
From patchwork Sun Feb  2 21:13:01 2025
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 105881
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v4 03/14] Do not export <alloc_buffer.h> functions from libc
Message-ID: <7c5ab9f9daddd62085ced18e145598778d370175.1738530302.git.fweimer@redhat.com>
Date: Sun, 02 Feb 2025 22:13:01 +0100

With the merge of NSS service modules into libc, external users are
gone except in tests.  To enable tests that use these functions (and
the unit tests in malloc/tst-alloc_buffer.c), add a copy of these
functions to libsupport.

For that to work, do not call __snprintf in
__libc_alloc_buffer_create_failure, which is not very useful and
unavailable outside libc.  All parameters to this function are now
unused, so remove them.

This also enables future use of this functionality from ld.so (using
a separately built copy).
Reviewed-by: Adhemerval Zanella
---
 include/alloc_buffer.h               | 27 ++++++---------------------
 malloc/Makefile                      |  6 ++++--
 malloc/Versions                      |  7 -------
 malloc/alloc_buffer_alloc_array.c    |  1 -
 malloc/alloc_buffer_allocate.c       |  1 -
 malloc/alloc_buffer_copy_bytes.c     |  1 -
 malloc/alloc_buffer_copy_string.c    |  1 -
 malloc/alloc_buffer_create_failure.c |  9 ++-------
 malloc/tst-alloc_buffer.c            |  4 ++++
 nss/Makefile                         |  4 ++--
 support/Makefile                     |  1 +
 support/support-alloc_buffer.c       | 26 ++++++++++++++++++++++++++
 12 files changed, 45 insertions(+), 43 deletions(-)
 create mode 100644 support/support-alloc_buffer.c

diff --git a/include/alloc_buffer.h b/include/alloc_buffer.h
index 54b94e66b8..6c6201d385 100644
--- a/include/alloc_buffer.h
+++ b/include/alloc_buffer.h
@@ -113,10 +113,7 @@ enum
 };

 /* Internal function.  Terminate the process using __libc_fatal.  */
-void __libc_alloc_buffer_create_failure (void *start, size_t size);
-#ifndef _ISOMAC
-libc_hidden_proto (__libc_alloc_buffer_create_failure)
-#endif
+void __libc_alloc_buffer_create_failure (void) attribute_hidden;

 /* Create a new allocation buffer.  The byte range from START to
    START + SIZE - 1 must be valid, and the allocation buffer allocates
@@ -128,16 +125,13 @@ alloc_buffer_create (void *start, size_t size)
   uintptr_t current = (uintptr_t) start;
   uintptr_t end = (uintptr_t) start + size;
   if (end < current)
-    __libc_alloc_buffer_create_failure (start, size);
+    __libc_alloc_buffer_create_failure ();
   return (struct alloc_buffer) { current, end };
 }

 /* Internal function.  See alloc_buffer_allocate below.  */
 struct alloc_buffer __libc_alloc_buffer_allocate (size_t size, void **pptr)
-  __attribute__ ((nonnull (2)));
-#ifndef _ISOMAC
-libc_hidden_proto (__libc_alloc_buffer_allocate)
-#endif
+  attribute_hidden __attribute__ ((nonnull (2)));

 /* Allocate a buffer of SIZE bytes using malloc.  The returned buffer
    is in a failed state if malloc fails.  *PPTR points to the start of
@@ -338,10 +332,7 @@ __alloc_buffer_next (struct alloc_buffer *buf, size_t align)
 void *
 __libc_alloc_buffer_alloc_array (struct alloc_buffer *buf, size_t size,
                                  size_t align, size_t count)
-  __attribute__ ((nonnull (1)));
-#ifndef _ISOMAC
-libc_hidden_proto (__libc_alloc_buffer_alloc_array)
-#endif
+  attribute_hidden __attribute__ ((nonnull (1)));

 /* Obtain a TYPE * pointer to an array of COUNT objects in BUF of
    TYPE.  Consume these bytes from the buffer.  Return NULL and mark
@@ -357,10 +348,7 @@ libc_hidden_proto (__libc_alloc_buffer_alloc_array)
 /* Internal function.  See alloc_buffer_copy_bytes below.  */
 struct alloc_buffer __libc_alloc_buffer_copy_bytes (struct alloc_buffer,
                                                     const void *, size_t)
-  __attribute__ ((nonnull (2)));
-#ifndef _ISOMAC
-libc_hidden_proto (__libc_alloc_buffer_copy_bytes)
-#endif
+  attribute_hidden __attribute__ ((nonnull (2)));

 /* Copy SIZE bytes starting at SRC into the buffer.  If there is not
    enough room in the buffer, the buffer is marked as failed.  No
@@ -374,10 +362,7 @@ alloc_buffer_copy_bytes (struct alloc_buffer *buf, const void *src, size_t size)
 /* Internal function.  See alloc_buffer_copy_string below.  */
 struct alloc_buffer __libc_alloc_buffer_copy_string (struct alloc_buffer,
                                                      const char *)
-  __attribute__ ((nonnull (2)));
-#ifndef _ISOMAC
-libc_hidden_proto (__libc_alloc_buffer_copy_string)
-#endif
+  attribute_hidden __attribute__ ((nonnull (2)));

 /* Copy the string at SRC into the buffer, including its null
    terminator.  If there is not enough room in the buffer, the buffer
diff --git a/malloc/Makefile b/malloc/Makefile
index e2b2c1ae1b..bd530d7f72 100644
--- a/malloc/Makefile
+++ b/malloc/Makefile
@@ -30,7 +30,6 @@ tests := \
   tst-aligned-alloc-random \
   tst-aligned-alloc-random-thread \
   tst-aligned-alloc-random-thread-cross \
-  tst-alloc_buffer \
   tst-calloc \
   tst-free-errno \
   tst-interpose-nothread \
@@ -83,7 +82,10 @@ tests += \
   # tests
 endif

-tests-internal := tst-scratch_buffer
+tests-internal := \
+  tst-alloc_buffer \
+  tst-scratch_buffer \
+  # tests-internal

 # The dynarray framework is only available inside glibc.
 tests-internal += \
diff --git a/malloc/Versions b/malloc/Versions
index c763395c6d..011b6a5a85 100644
--- a/malloc/Versions
+++ b/malloc/Versions
@@ -88,13 +88,6 @@ libc {
     __libc_dynarray_finalize;
     __libc_dynarray_resize;
     __libc_dynarray_resize_clear;
-
-    # struct alloc_buffer support
-    __libc_alloc_buffer_alloc_array;
-    __libc_alloc_buffer_allocate;
-    __libc_alloc_buffer_copy_bytes;
-    __libc_alloc_buffer_copy_string;
-    __libc_alloc_buffer_create_failure;
   }
 }
diff --git a/malloc/alloc_buffer_alloc_array.c b/malloc/alloc_buffer_alloc_array.c
index 165033004d..60f693e843 100644
--- a/malloc/alloc_buffer_alloc_array.c
+++ b/malloc/alloc_buffer_alloc_array.c
@@ -43,4 +43,3 @@ __libc_alloc_buffer_alloc_array (struct alloc_buffer *buf, size_t element_size,
       return NULL;
     }
 }
-libc_hidden_def (__libc_alloc_buffer_alloc_array)
diff --git a/malloc/alloc_buffer_allocate.c b/malloc/alloc_buffer_allocate.c
index f3f6fd7761..5ebd389664 100644
--- a/malloc/alloc_buffer_allocate.c
+++ b/malloc/alloc_buffer_allocate.c
@@ -33,4 +33,3 @@ __libc_alloc_buffer_allocate (size_t size, void **pptr)
   else
     return alloc_buffer_create (*pptr, size);
 }
-libc_hidden_def (__libc_alloc_buffer_allocate)
diff --git a/malloc/alloc_buffer_copy_bytes.c b/malloc/alloc_buffer_copy_bytes.c
index 77f95374dc..79ce636f55 100644
--- a/malloc/alloc_buffer_copy_bytes.c
+++ b/malloc/alloc_buffer_copy_bytes.c
@@ -31,4 +31,3 @@ __libc_alloc_buffer_copy_bytes (struct alloc_buffer buf,
   memcpy (ptr, src, len);
   return buf;
 }
-libc_hidden_def (__libc_alloc_buffer_copy_bytes)
diff --git a/malloc/alloc_buffer_copy_string.c b/malloc/alloc_buffer_copy_string.c
index 16068c7f60..b5c734ea6a 100644
--- a/malloc/alloc_buffer_copy_string.c
+++ b/malloc/alloc_buffer_copy_string.c
@@ -27,4 +27,3 @@ __libc_alloc_buffer_copy_string (struct alloc_buffer buf, const char *src)
 {
   return __libc_alloc_buffer_copy_bytes (buf, src, strlen (src) + 1);
 }
-libc_hidden_def (__libc_alloc_buffer_copy_string)
diff --git a/malloc/alloc_buffer_create_failure.c b/malloc/alloc_buffer_create_failure.c
index cbafedc36d..42c58a3aa2 100644
--- a/malloc/alloc_buffer_create_failure.c
+++ b/malloc/alloc_buffer_create_failure.c
@@ -20,12 +20,7 @@
 #include

 void
-__libc_alloc_buffer_create_failure (void *start, size_t size)
+__libc_alloc_buffer_create_failure (void)
 {
-  char buf[200];
-  __snprintf (buf, sizeof (buf), "Fatal glibc error: "
-              "invalid allocation buffer of size %zu\n",
-              size);
-  __libc_fatal (buf);
+  __libc_fatal ("Fatal glibc error: invalid allocation buffer\n");
 }
-libc_hidden_def (__libc_alloc_buffer_create_failure)
diff --git a/malloc/tst-alloc_buffer.c b/malloc/tst-alloc_buffer.c
index 7c2a15ac90..f1ca4a1e32 100644
--- a/malloc/tst-alloc_buffer.c
+++ b/malloc/tst-alloc_buffer.c
@@ -16,6 +16,10 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */

+/* Note: This test exercises the (identical) copy of the
+   <alloc_buffer.h> functions in libsupport, not libc.so, because the
+   latter has hidden visibility and cannot be tested from the
+   outside.  */
+
 #include
 #include
 #include
diff --git a/nss/Makefile b/nss/Makefile
index 3ee51f309e..91d1bf2c4d 100644
--- a/nss/Makefile
+++ b/nss/Makefile
@@ -475,9 +475,9 @@ libof-nss_test1 = extramodules
 libof-nss_test2 = extramodules
 libof-nss_test_errno = extramodules
 libof-nss_test_gai_hv2_canonname = extramodules
-$(objpfx)/libnss_test1.so: $(objpfx)nss_test1.os $(link-libc-deps)
+$(objpfx)/libnss_test1.so: $(objpfx)nss_test1.os $(libsupport) $(link-libc-deps)
 	$(build-module)
-$(objpfx)/libnss_test2.so: $(objpfx)nss_test2.os $(link-libc-deps)
+$(objpfx)/libnss_test2.so: $(objpfx)nss_test2.os $(libsupport) $(link-libc-deps)
 	$(build-module)
 $(objpfx)/libnss_test_errno.so: $(objpfx)nss_test_errno.os $(link-libc-deps)
 	$(build-module)
diff --git a/support/Makefile b/support/Makefile
index 59a9974539..c55553c0cc 100644
--- a/support/Makefile
+++ b/support/Makefile
@@ -41,6 +41,7 @@ libsupport-routines = \
   resolv_response_context_free \
   resolv_test \
   set_fortify_handler \
+  support-alloc_buffer \
   support-open-dev-null-range \
   support_become_root \
   support_can_chroot \
diff --git a/support/support-alloc_buffer.c b/support/support-alloc_buffer.c
new file mode 100644
index 0000000000..71ffb703c5
--- /dev/null
+++ b/support/support-alloc_buffer.c
@@ -0,0 +1,26 @@
+/* Make <alloc_buffer.h> available to tests.
+   Copyright (C) 2025 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+/* The implementation in libc.so has hidden visibility and is
+   therefore not usable.  */
+
+#include
+#include
+#include
+#include
+#include
From patchwork Sun Feb  2 21:13:07 2025
X-Patchwork-Submitter: Florian Weimer
X-Patchwork-Id: 105875
libc-alpha@sourceware.org Subject: [PATCH v4 04/14] support: Add for protection flags probing In-Reply-To: Message-ID: <812676821365b1df51b816e39b3460e7d97c62f2.1738530302.git.fweimer@redhat.com> References: X-From-Line: 812676821365b1df51b816e39b3460e7d97c62f2 Mon Sep 17 00:00:00 2001 Date: Sun, 02 Feb 2025 22:13:07 +0100 User-Agent: Gnus/5.13 (Gnus v5.13) MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: oDBu2rOI1HIEz5T_i-_It__KZKSg7kntVO_mlZli6W8_1738530790 X-Mimecast-Originator: redhat.com X-Spam-Status: No, score=-11.3 required=5.0 tests=BAYES_00, DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, GIT_PATCH_0, KAM_SHORT, RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H5, RCVD_IN_MSPIKE_WL, SPF_HELO_NONE, SPF_NONE, TXREP autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: libc-alpha-bounces~patchwork=sourceware.org@sourceware.org --- support/Makefile | 2 + support/memprobe.h | 43 ++++++ support/support_memprobe.c | 251 +++++++++++++++++++++++++++++++++ support/tst-support_memprobe.c | 111 +++++++++++++++ 4 files changed, 407 insertions(+) create mode 100644 support/memprobe.h create mode 100644 support/support_memprobe.c create mode 100644 support/tst-support_memprobe.c diff --git a/support/Makefile b/support/Makefile index c55553c0cc..23394837d3 100644 --- a/support/Makefile +++ b/support/Makefile @@ -67,6 +67,7 @@ libsupport-routines = \ support_format_netent \ support_fuse \ support_isolate_in_subprocess \ + support_memprobe \ support_mutex_pi_monotonic \ support_need_proc \ support_open_and_compare_file_bytes \ @@ -332,6 +333,7 @@ tests = \ tst-support_descriptors \ tst-support_format_dns_packet \ tst-support_fuse \ 
+ tst-support_memprobe \ tst-support_quote_blob \ tst-support_quote_blob_wide \ tst-support_quote_string \ diff --git a/support/memprobe.h b/support/memprobe.h new file mode 100644 index 0000000000..4a680de4a7 --- /dev/null +++ b/support/memprobe.h @@ -0,0 +1,43 @@ +/* Probing memory for protection state. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#ifndef SUPPORT_MEMPROBE_H +#define SUPPORT_MEMPROBE_H + +/* Probe access status of memory ranges. These functions record a + failure (but do not terminate the process) if the memory range does + not match the expected protection flags. */ + +#include <stddef.h> + +/* Asserts that SIZE bytes at ADDRESS are inaccessible. CONTEXT + is used for reporting errors. */ +void support_memprobe_noaccess (const char *context, const void *address, + size_t size); + +/* Asserts that SIZE bytes at ADDRESS are read-only. CONTEXT is used + for reporting errors. */ +void support_memprobe_readonly (const char *context, const void *address, + size_t size); + +/* Asserts that SIZE bytes at ADDRESS are readable and writable. + CONTEXT is used for reporting errors.
*/ +void support_memprobe_readwrite (const char *context, const void *address, + size_t size); + +#endif /* SUPPORT_MEMPROBE_H */ diff --git a/support/support_memprobe.c b/support/support_memprobe.c new file mode 100644 index 0000000000..ad559abc2c --- /dev/null +++ b/support/support_memprobe.c @@ -0,0 +1,251 @@ +/* Probing memory for protection state. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +/* The implementation uses vfork for probing. As a result, it can be + used for testing page protections controlled by memory protection + keys, despite their problematic interaction with signal handlers + (bug 22396). */ + +#include + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#ifdef __linux__ +# include <sys/prctl.h> +#endif + +/* Make a more complete attempt to disable core dumps, even in the + presence of core catchers that ignore RLIMIT_CORE. Used after + vfork. */ +static void +disable_coredumps (void) +{ +#ifdef __linux__ + prctl (PR_SET_DUMPABLE, 0 /* SUID_DUMP_DISABLE */, 0, 0); +#endif + struct rlimit rl = {}; + setrlimit (RLIMIT_CORE, &rl); +} + +/* Restores all signals to SIG_DFL and unblocks them.
*/ +static void +memprobe_sig_dfl_unblock (void) +{ + for (int sig = 1; sig < _NSIG; ++sig) + /* Ignore errors for those signals whose handler cannot be changed. */ + (void) signal (sig, SIG_DFL); + sigset_t sigallset; + sigfillset (&sigallset); + sigprocmask (SIG_UNBLOCK, &sigallset, NULL); +} + +/* Performs a 4-byte probe at the address aligned down. The internal + glibc atomics do not necessarily support one-byte access. + Accessing more bytes with a no-op write results in the same page + fault effects because of the alignment. */ +static inline void +write_probe_at (volatile char *address) +{ + /* Used as an argument to force the compiler to emit an actual no-op + atomic instruction. */ + static volatile uint32_t zero = 0; + uint32_t *ptr = (uint32_t *) ((uintptr_t) address & ~(uintptr_t) 3); + atomic_fetch_add_relaxed (ptr, zero); +} + +/* Attempt to read or write the entire range in one go. If DO_WRITE, + perform a no-op write with an atomic add of a zero second operand, + otherwise just a read. */ +static void +memprobe_expect_access (const char *context, volatile char *address, + size_t size, volatile size_t *pindex, bool do_write) +{ + pid_t pid = vfork (); + TEST_VERIFY_EXIT (pid >= 0); + if (pid == 0) + { + memprobe_sig_dfl_unblock (); + disable_coredumps (); + /* *pindex is a volatile access, so the parent process can read + the correct index after an unexpected fault. */ + if (do_write) + for (*pindex = 0; *pindex < size; *pindex += 4) + write_probe_at (address + *pindex); + else + for (*pindex = 0; *pindex < size; *pindex += 1) + address[*pindex]; /* Triggers volatile read. */ + _exit (0); + } + int status; + xwaitpid (pid, &status, 0); + if (*pindex < size) + { + support_record_failure (); + printf ("error: %s: unexpected %s fault at address %p" + " (%zu bytes after %p, wait status %d)\n", + context, do_write ?
"write" : "read", address + *pindex, + *pindex, address, status); + } + else + { + TEST_VERIFY (WIFEXITED (status)); + TEST_COMPARE (WEXITSTATUS (status), 0); + } +} + +/* Probe one byte for lack of access. Attempt a write for DO_WRITE, + otherwise a read. Returns false on failure. */ +static bool +memprobe_expect_noaccess_1 (const char *context, volatile char *address, + size_t size, size_t index, bool do_write) +{ + pid_t pid = vfork (); + TEST_VERIFY_EXIT (pid >= 0); + if (pid == 0) + { + memprobe_sig_dfl_unblock (); + disable_coredumps (); + if (do_write) + write_probe_at (address + index); + else + address[index]; /* Triggers volatile read. */ + _exit (0); /* Should not be executed due to fault. */ + } + + int status; + xwaitpid (pid, &status, 0); + if (WIFSIGNALED (status)) + { + /* Accept SIGSEGV or SIGBUS. */ + if (WTERMSIG (status) != SIGSEGV) + TEST_COMPARE (WTERMSIG (status), SIGBUS); + } + else + { + support_record_failure (); + printf ("error: %s: unexpected %s success at address %p" + " (%zu bytes after %p, wait status %d)\n", + context, do_write ? "write" : "read", address + index, + index, address, status); + return false; + } + return true; +} + +/* Probe each byte individually because we expect a fault. + + The implementation skips over bytes on the same page, so it assumes + that the subpage_prot system call is not used. */ +static void +memprobe_expect_noaccess (const char *context, volatile char *address, + size_t size, bool do_write) +{ + if (size == 0) + return; + + if (!memprobe_expect_noaccess_1 (context, address, size, 0, do_write)) + return; + + /* Round up to the next page. */ + long int page_size = sysconf (_SC_PAGE_SIZE); + TEST_VERIFY_EXIT (page_size > 0); + size_t index; + { + uintptr_t next_page = roundup ((uintptr_t) address, page_size); + if (next_page < (uintptr_t) address + || next_page >= (uintptr_t) address + size) + /* Wrap around or after the end of the region. 
*/ + return; + index = next_page - (uintptr_t) address; + } + + /* Probe in page increments. */ + while (true) + { + if (!memprobe_expect_noaccess_1 (context, address, size, index, + do_write)) + break; + size_t next_index = index + page_size; + if (next_index < index || next_index >= size) + /* Wrap around or after the end of the region. */ + break; + index = next_index; + } +} + +static void +memprobe_range (const char *context, volatile char *address, size_t size, + bool expect_read, bool expect_write) +{ + /* Do not rely on the sharing nature of vfork because it could be + implemented as fork. */ + size_t *pindex = support_shared_allocate (sizeof *pindex); + + sigset_t oldset; + { + sigset_t sigallset; + sigfillset (&sigallset); + sigprocmask (SIG_BLOCK, &sigallset, &oldset); + } + + if (expect_read) + { + memprobe_expect_access (context, address, size, pindex, false); + if (expect_write) + memprobe_expect_access (context, address, size, pindex, true); + else + memprobe_expect_noaccess (context, address, size, true); + } + else + { + memprobe_expect_noaccess (context, address, size, false); + TEST_VERIFY (!expect_write); /* Write-only probing not supported. */ + } + + /* Restore the signal mask saved above. */ + sigprocmask (SIG_SETMASK, &oldset, NULL); + support_shared_free (pindex); +} + +void support_memprobe_noaccess (const char *context, const void *address, + size_t size) +{ + memprobe_range (context, (volatile char *) address, size, false, false); +} + +void support_memprobe_readonly (const char *context, const void *address, + size_t size) +{ + memprobe_range (context, (volatile char *) address, size, true, false); +} + +void support_memprobe_readwrite (const char *context, const void *address, + size_t size) +{ + memprobe_range (context, (volatile char *) address, size, true, true); +} diff --git a/support/tst-support_memprobe.c b/support/tst-support_memprobe.c new file mode 100644 index 0000000000..586e09d563 --- /dev/null +++ b/support/tst-support_memprobe.c @@ -0,0 +1,111 @@ +/* Tests for <support/memprobe.h>.
+ Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + <https://www.gnu.org/licenses/>. */ + +#include + +#include +#include +#include +#include + +/* Expect a failed state in the test harness. */ +static void +expect_failure (const char *context) +{ + if (!support_record_failure_is_failed ()) + { + printf ("error: expected failure missing: %s\n", context); + exit (1); + } + support_record_failure_reset (); +} + +static int +do_test (void) +{ + static char rw_byte = 1; + support_memprobe_readwrite ("rw_byte", &rw_byte, 1); + support_record_failure_barrier (); + + puts ("info: expected error for read-only to rw_byte"); + support_memprobe_readonly ("rw_byte", &rw_byte, 1); + expect_failure ("read-only rw_byte"); + + puts ("info: expected error for no-access to rw_byte"); + support_memprobe_noaccess ("rw_byte", &rw_byte, 1); + expect_failure ("no-access rw_byte"); + + static const char const_byte = 1; + support_memprobe_readonly ("const_byte", &const_byte, 1); + support_record_failure_barrier (); + + puts ("info: expected error for no-access to const_byte"); + support_memprobe_noaccess ("const_byte", &const_byte, 1); + expect_failure ("no-access const_byte"); + + puts ("info: expected error for read-write access to const_byte"); + support_memprobe_readwrite ("const_byte", &const_byte, 1); + expect_failure ("read-write
const_byte"); + + struct support_next_to_fault ntf = support_next_to_fault_allocate (3); + void *ntf_trailing = ntf.buffer + ntf.length; + + /* The initial 3 bytes are accessible. */ + support_memprobe_readwrite ("ntf init", ntf.buffer, ntf.length); + support_record_failure_barrier (); + + puts ("info: expected error for read-only to ntf init"); + support_memprobe_readonly ("ntf init", ntf.buffer, ntf.length); + expect_failure ("read-only ntf init"); + + puts ("info: expected error for no-access to ntf init"); + support_memprobe_noaccess ("ntf init", ntf.buffer, ntf.length); + expect_failure ("no-access ntf init"); + + /* The trailing part after the allocated area is inaccessible. */ + support_memprobe_noaccess ("ntf trailing", ntf_trailing, 1); + support_record_failure_barrier (); + + puts ("info: expected error for read-only to ntf trailing"); + support_memprobe_readonly ("ntf trailing", ntf_trailing, 1); + expect_failure ("read-only ntf trailing"); + + puts ("info: expected error for read-write to ntf trailing"); + support_memprobe_readwrite ("ntf trailing", ntf_trailing, 1); + expect_failure ("read-write ntf trailing"); + + /* Both areas combined fail all checks due to inconsistent results.
*/ + puts ("info: expected error for no-access to ntf overlap"); + support_memprobe_noaccess ("ntf overlap", ntf.buffer, ntf.length + 1); + expect_failure ("no-access ntf overlap"); + + puts ("info: expected error for read-only to ntf overlap"); + support_memprobe_readonly ("ntf overlap", ntf.buffer, ntf.length + 1); + expect_failure ("read-only ntf overlap"); + + puts ("info: expected error for read-write to ntf overlap"); + support_memprobe_readwrite ("ntf overlap", ntf.buffer, ntf.length + 1); + expect_failure ("read-write ntf overlap"); + + support_next_to_fault_free (&ntf); + + return 0; +} + +#include <support/test-driver.c>

From patchwork Sun Feb 2 21:13:12 2025
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 05/14] elf: Eliminate second loop in find_version in dl-version.c Message-ID: <7a0383ffaa70e504c402b94a08d66424b45be1f3.1738530302.git.fweimer@redhat.com> Date: Sun, 02 Feb 2025 22:13:12 +0100
The first loop iterates through all objects in the namespace because _dl_check_map_versions is called after the loaded objects have been added to the list. (This list is not limited by symbol search scope.) Turn the assert in _dl_check_map_versions into a proper error because it can be triggered by inconsistent variants of shared objects.
This assert could fail if the soname in the vn_file field of the verneed structure for a version is not among the DT_NEEDED dependencies of an object. With such a discrepancy, no matching object might be loaded, hence the assertion failure. Current binutils ld does not seem to produce such objects, preferring to create unversioned symbols instead of lifting symbol versions from indirect dependencies of the objects listed on the command line. This is why there is no test case for this error. Reviewed-by: Adhemerval Zanella --- elf/dl-version.c | 18 +++++------------- 1 file changed, 5 insertions(+), 13 deletions(-) diff --git a/elf/dl-version.c b/elf/dl-version.c index d414bd1e18..0fae561e55 100644 --- a/elf/dl-version.c +++ b/elf/dl-version.c @@ -31,21 +31,17 @@ __attribute ((always_inline)) find_needed (const char *name, struct link_map *map) { struct link_map *tmap; - unsigned int n; for (tmap = GL(dl_ns)[map->l_ns]._ns_loaded; tmap != NULL; tmap = tmap->l_next) if (_dl_name_match_p (name, tmap)) return tmap; - /* The required object is not in the global scope, look to see if it is - a dependency of the current object. */ - for (n = 0; n < map->l_searchlist.r_nlist; n++) - if (_dl_name_match_p (name, map->l_searchlist.r_list[n])) - return map->l_searchlist.r_list[n]; - - /* Should never happen. */ - return NULL; + struct dl_exception exception; + _dl_exception_create_format + (&exception, DSO_FILENAME (map->l_name), + "missing soname %s in version dependency", name); + _dl_signal_exception (0, &exception, NULL); } @@ -199,10 +195,6 @@ _dl_check_map_versions (struct link_map *map, int verbose, int trace_mode) ElfW(Vernaux) *aux; struct link_map *needed = find_needed (strtab + ent->vn_file, map); - /* If NEEDED is NULL this means a dependency was not found - and no stub entry was created. This should never happen. */ - assert (needed != NULL); - /* Make sure this is no stub we created because of a missing dependency. */ if (__builtin_expect (!
trace_mode, 1)

From patchwork Sun Feb 2 21:13:18 2025
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 06/14] elf: Disambiguate some failures in _dl_load_cache_lookup
Date: Sun, 02 Feb 2025 22:13:18 +0100
Failure to allocate a copy of the string is now distinct from a cache lookup failure. Some infrastructure failures in _dl_sysdep_read_whole_file are still treated as cache lookup failures, though. Reviewed-by: Adhemerval Zanella --- elf/dl-cache.c | 42 ++++++++++++++++++++++++++------------ elf/dl-load.c | 5 ++++- sysdeps/generic/ldsodefs.h | 6 +++--- 3 files changed, 36 insertions(+), 17 deletions(-) diff --git a/elf/dl-cache.c b/elf/dl-cache.c index 300aa1b6dd..c9c5bf549a 100644 --- a/elf/dl-cache.c +++ b/elf/dl-cache.c @@ -375,15 +375,21 @@ _dl_cache_libcmp (const char *p1, const char *p2) } -/* Look up NAME in ld.so.cache and return the file name stored there, or null - if none is found. The cache is loaded if it was not already. If loading - the cache previously failed there will be no more attempts to load it. - The caller is responsible for freeing the returned string.
The ld.so.cache - may be unmapped at any time by a completing recursive dlopen and - this function must take care that it does not return references to - any data in the mapping. */ -char * -_dl_load_cache_lookup (const char *name) +/* Look up NAME in ld.so.cache and write the file name stored there to + *REALNAME, or null if none is found, and return true. In this case, + the caller is responsible for freeing the string in *REALNAME. If + there is an error condition that causes the lookup to fail (such as + a failure to allocate memory), the function returns false, and + *REALNAME is unchanged. + + The cache is loaded if it was not already. If loading the cache + previously failed there will be no more attempts to load it. + + The ld.so.cache may be unmapped at any time by a completing + recursive dlopen and this function must take care that it does not + return references to any data in the mapping. */ +bool +_dl_load_cache_lookup (const char *name, char **realname) { /* Print a message if the loading of libs is traced. */ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS)) @@ -459,8 +465,11 @@ _dl_load_cache_lookup (const char *name) } if (cache == (void *) -1) - /* Previously looked for the cache file and didn't find it. */ - return NULL; + { + /* Previously looked for the cache file and didn't find it. 
*/ + *realname = NULL; + return true; + } const char *best; if (cache_new != (void *) -1) @@ -486,7 +495,10 @@ _dl_load_cache_lookup (const char *name) _dl_debug_printf (" trying file=%s\n", best); if (best == NULL) - return NULL; + { + *realname = NULL; + return true; + } /* The double copy is *required* since malloc may be interposed and call dlopen itself whose completion would unmap the data @@ -496,7 +508,11 @@ _dl_load_cache_lookup (const char *name) size_t best_len = strlen (best) + 1; temp = alloca (best_len); memcpy (temp, best, best_len); - return __strdup (temp); + char *copy = __strdup (temp); + if (copy == NULL) + return false; + *realname = copy; + return true; } #ifndef MAP_COPY diff --git a/elf/dl-load.c b/elf/dl-load.c index e328d678b4..b583714c96 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -2046,7 +2046,10 @@ _dl_map_new_object (struct link_map *loader, const char *name, { /* Check the list of libraries in the file /etc/ld.so.cache, for compatibility with Linux's ldconfig program. */ - char *cached = _dl_load_cache_lookup (name); + char *cached; + if (!_dl_load_cache_lookup (name, &cached)) + _dl_signal_error (ENOMEM, NULL, NULL, + N_("cannot allocate library name")); if (cached != NULL) { diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index 8465cbaa9b..c0785cba04 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -1117,9 +1117,9 @@ const struct r_strlenpair *_dl_important_hwcaps (const char *prepend, size_t *max_capstrlen) attribute_hidden; -/* Look up NAME in ld.so.cache and return the file name stored there, - or null if none is found. Caller must free returned string. */ -extern char *_dl_load_cache_lookup (const char *name) attribute_hidden; +/* Look up NAME in ld.so.cache. 
*/ +bool _dl_load_cache_lookup (const char *name, char **realname) + attribute_hidden __nonnull ((1, 2)) __attribute__ ((warn_unused_result)); /* If the system does not support MAP_COPY we cannot leave the file open all the time since this would create problems when the file is replaced.

From patchwork Sun Feb 2 21:13:24 2025
From: Florian Weimer To:
libc-alpha@sourceware.org Subject: [PATCH v4 07/14] elf: Merge the three implementations of _dl_dst_substitute In-Reply-To: Message-ID: References: X-From-Line: accd109f7264467148600d3e793fa40989d8ca90 Mon Sep 17 00:00:00 2001 Date: Sun, 02 Feb 2025 22:13:24 +0100 User-Agent: Gnus/5.13 (Gnus v5.13) MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.4.1 on 10.30.177.93 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: yDwxMpzKej9h04dsarUhjgBm1ueLEO77k76vqT1YHZw_1738530808 X-Mimecast-Originator: redhat.com X-Spam-Status: No, score=-11.3 required=5.0 tests=BAYES_00, DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, GIT_PATCH_0, KAM_SHORT, RCVD_IN_DNSWL_NONE, RCVD_IN_MSPIKE_H5, RCVD_IN_MSPIKE_WL, SPF_HELO_NONE, SPF_NONE, TXREP autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.30 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: libc-alpha-bounces~patchwork=sourceware.org@sourceware.org Use one implementation to perform the copying and the counting. Use l_origin of the main program as a fallback for objects that have not been dlopen'ed (such as ld.so itself, or the vDSO). These should not end up as dlopen callers normally, but they might one day if a dlopen variant with an explicit caller argument is supported for application use. 
Reviewed-by: Adhemerval Zanella --- elf/dl-deps.c | 70 ++++++-------- elf/dl-dst.h | 56 ----------- elf/dl-load.c | 145 ++++++++++++---------------- elf/dl-open.c | 1 - elf/dl-origin.c | 3 - sysdeps/generic/ldsodefs.h | 9 +- sysdeps/unix/sysv/linux/dl-origin.c | 1 - 7 files changed, 93 insertions(+), 192 deletions(-) delete mode 100644 elf/dl-dst.h diff --git a/elf/dl-deps.c b/elf/dl-deps.c index 1504f8d606..3c8a5ebced 100644 --- a/elf/dl-deps.c +++ b/elf/dl-deps.c @@ -28,8 +28,7 @@ #include #include #include - -#include +#include /* Whether an shared object references one or more auxiliary objects is signaled by the AUXTAG entry in l_info. */ @@ -80,47 +79,34 @@ struct list }; -/* Macro to expand DST. It is an macro since we use `alloca'. */ +/* Macro to expand DST. It is an macro since we use `alloca'. + See expand_dynamic_string_token in dl-load.c. */ #define expand_dst(l, str, fatal) \ - ({ \ - const char *__str = (str); \ - const char *__result = __str; \ - size_t __dst_cnt = _dl_dst_count (__str); \ - \ - if (__dst_cnt != 0) \ - { \ - char *__newp; \ - \ - /* DST must not appear in SUID/SGID programs. */ \ - if (__libc_enable_secure) \ - _dl_signal_error (0, __str, NULL, N_("\ -DST not allowed in SUID/SGID programs")); \ - \ - __newp = (char *) alloca (DL_DST_REQUIRED (l, __str, strlen (__str), \ - __dst_cnt)); \ - \ - __result = _dl_dst_substitute (l, __str, __newp); \ - \ - if (*__result == '\0') \ - { \ - /* The replacement for the DST is not known. We can't \ - processed. */ \ - if (fatal) \ - _dl_signal_error (0, __str, NULL, N_("\ -empty dynamic string token substitution")); \ - else \ - { \ - /* This is for DT_AUXILIARY. 
*/ \ - if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS)) \ - _dl_debug_printf (N_("\ -cannot load auxiliary `%s' because of empty dynamic string token " \ - "substitution\n"), __str); \ - continue; \ - } \ - } \ - } \ - \ - __result; }) + ({ \ + struct alloc_buffer __buf = {}; \ + size_t __size = _dl_dst_substitute ((l), (str), &__buf); \ + char *__result = alloca (__size); \ + __buf = alloc_buffer_create (__result, __size); \ + if (_dl_dst_substitute ((l), (str), &__buf) == 0) \ + { \ + /* The replacement for the DST is not known. We can't \ + processed. */ \ + if (fatal) \ + _dl_signal_error (0, str, NULL, N_("\ +empty dynamic string token substitution")); \ + else \ + { \ + /* This is for DT_AUXILIARY. */ \ + if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS)) \ + _dl_debug_printf (N_("\ +cannot load auxiliary `%s' because of empty dynamic string token " \ + "substitution\n"), str); \ + continue; \ + } \ + } \ + assert (!alloc_buffer_has_failed (&__buf)); \ + __result; \ + }) \ static void preload (struct list *known, unsigned int *nlist, struct link_map *map) diff --git a/elf/dl-dst.h b/elf/dl-dst.h deleted file mode 100644 index 50afd3ad93..0000000000 --- a/elf/dl-dst.h +++ /dev/null @@ -1,56 +0,0 @@ -/* Handling of dynamic string tokens. - Copyright (C) 1999-2025 Free Software Foundation, Inc. - This file is part of the GNU C Library. - - The GNU C Library is free software; you can redistribute it and/or - modify it under the terms of the GNU Lesser General Public - License as published by the Free Software Foundation; either - version 2.1 of the License, or (at your option) any later version. - - The GNU C Library is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - Lesser General Public License for more details. 
- - You should have received a copy of the GNU Lesser General Public - License along with the GNU C Library; if not, see - . */ - -#include "trusted-dirs.h" - -/* Guess from the number of DSTs the length of the result string. */ -#define DL_DST_REQUIRED(l, name, len, cnt) \ - ({ \ - size_t __len = (len); \ - size_t __cnt = (cnt); \ - \ - if (__cnt > 0) \ - { \ - size_t dst_len; \ - /* Now we make a guess how many extra characters on top of the \ - length of S we need to represent the result. We know that \ - we have CNT replacements. Each at most can use \ - MAX (MAX (strlen (ORIGIN), strlen (_dl_platform)), \ - strlen (DL_DST_LIB)) \ - minus 4 (which is the length of "$LIB"). \ - \ - First get the origin string if it is not available yet. \ - This can only happen for the map of the executable or, when \ - auditing, in ld.so. */ \ - if ((l)->l_origin == NULL) \ - { \ - assert ((l)->l_name[0] == '\0' || is_rtld_link_map (l)); \ - (l)->l_origin = _dl_get_origin (); \ - dst_len = ((l)->l_origin && (l)->l_origin != (char *) -1 \ - ? strlen ((l)->l_origin) : 0); \ - } \ - else \ - dst_len = (l)->l_origin == (char *) -1 \ - ? 0 : strlen ((l)->l_origin); \ - dst_len = MAX (MAX (dst_len, GLRO(dl_platformlen)), \ - strlen (DL_DST_LIB)); \ - if (dst_len > 4) \ - __len += __cnt * (dst_len - 4); \ - } \ - \ - __len; }) diff --git a/elf/dl-load.c b/elf/dl-load.c index b583714c96..2e6c58dfcc 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -33,6 +33,7 @@ #include #include #include +#include /* Type for the buffer we put the ELF header and hopefully the program header. This buffer does not really have to be too large. In most @@ -68,7 +69,6 @@ struct filebuf #include #include -#include #include #include #include @@ -218,61 +218,32 @@ is_dst (const char *input, const char *ref) return rlen; } -/* INPUT should be the start of a path e.g DT_RPATH or name e.g. - DT_NEEDED. The return value is the number of known DSTs found. 
We - count all known DSTs regardless of __libc_enable_secure; the caller - is responsible for enforcing the security of the substitution rules - (usually _dl_dst_substitute). */ -size_t -_dl_dst_count (const char *input) -{ - size_t cnt = 0; - - input = strchr (input, '$'); - - /* Most likely there is no DST. */ - if (__glibc_likely (input == NULL)) - return 0; - - do - { - size_t len; - - ++input; - /* All DSTs must follow ELF gABI rules, see is_dst (). */ - if ((len = is_dst (input, "ORIGIN")) != 0 - || (len = is_dst (input, "PLATFORM")) != 0 - || (len = is_dst (input, "LIB")) != 0) - ++cnt; - - /* There may be more than one DST in the input. */ - input = strchr (input + len, '$'); - } - while (input != NULL); - - return cnt; -} - -/* Process INPUT for DSTs and store in RESULT using the information +/* Process INPUT for DSTs and store in *RESULT using the information from link map L to resolve the DSTs. This function only handles one path at a time and does not handle colon-separated path lists (see - fillin_rpath ()). Lastly the size of result in bytes should be at - least equal to the value returned by DL_DST_REQUIRED. Note that it - is possible for a DT_NEEDED, DT_AUXILIARY, and DT_FILTER entries to - have colons, but we treat those as literal colons here, not as path - list delimiters. */ -char * -_dl_dst_substitute (struct link_map *l, const char *input, char *result) + fillin_rpath ()). + + A caller is expected to call this function twice, first with an + empty *RESULT buffer to obtain the total length (including the + terminating null byte) that is returned by this function. The + second call should be made with a properly sized buffer, and this + function will write the expansion to *RESULT. If that second call + returns 0, it means that the expansion is not valid and should be + ignored. 
+ + Note that it is possible for a DT_NEEDED, + DT_AUXILIARY, and DT_FILTER entries to have colons, but we treat + those as literal colons here, not as path list delimiters. */ +size_t +_dl_dst_substitute (struct link_map *l, const char *input, + struct alloc_buffer *result) { /* Copy character-by-character from input into the working pointer - looking for any DSTs. We track the start of input and if we are - going to check for trusted paths, all of which are part of $ORIGIN - handling in SUID/SGID cases (see below). In some cases, like when - a DST cannot be replaced, we may set result to an empty string and - return. */ - char *wp = result; + looking for any DSTs. */ const char *start = input; + char *result_start = alloc_buffer_next (result, char); bool check_for_trusted = false; + size_t length = 0; do { @@ -309,7 +280,15 @@ _dl_dst_substitute (struct link_map *l, const char *input, char *result) && (input[len] == '\0' || input[len] == '/'))) repl = (const char *) -1; else - repl = l->l_origin; + { + if (l->l_origin == NULL) + /* For loaded DSOs, the l_origin field is set in + _dl_new_object. If pre-loaded DSOs end up as + the dlopen caller, use the main program for + obtaining the origin. */ + l->l_origin = _dl_get_origin (); + repl = l->l_origin; + } check_for_trusted = (__libc_enable_secure && l->l_type == lt_executable); @@ -321,7 +300,9 @@ _dl_dst_substitute (struct link_map *l, const char *input, char *result) if (repl != NULL && repl != (const char *) -1) { - wp = __stpcpy (wp, repl); + size_t repl_len = strlen (repl); + length += repl_len; + alloc_buffer_copy_bytes (result, repl, repl_len); input += len; } else if (len != 0) @@ -329,16 +310,20 @@ _dl_dst_substitute (struct link_map *l, const char *input, char *result) /* We found a valid DST that we know about, but we could not find a replacement value for it, therefore we cannot use this path and discard it. 
*/ - *result = '\0'; - return result; + alloc_buffer_mark_failed (result); + return 0; } else - /* No DST we recognize. */ - *wp++ = '$'; + { + /* No DST we recognize. */ + ++length; + alloc_buffer_add_byte (result, '$'); + } } else { - *wp++ = *input++; + ++length; + alloc_buffer_add_byte (result, *input++); } } while (*input != '\0'); @@ -353,15 +338,19 @@ _dl_dst_substitute (struct link_map *l, const char *input, char *result) this way because it may be manipulated in some ways with hard links. */ if (__glibc_unlikely (check_for_trusted) - && !is_trusted_path_normalize (result, wp - result)) + && !alloc_buffer_has_failed (result) + && !is_trusted_path_normalize (result_start, + alloc_buffer_next (result, char) + - result_start)) { - *result = '\0'; - return result; + alloc_buffer_mark_failed (result); + return 0; } - *wp = '\0'; + ++length; + alloc_buffer_add_byte (result, 0); - return result; + return length; } @@ -373,30 +362,18 @@ _dl_dst_substitute (struct link_map *l, const char *input, char *result) static char * expand_dynamic_string_token (struct link_map *l, const char *input) { - /* We make two runs over the string. First we determine how large the - resulting string is and then we copy it over. Since this is no - frequently executed operation we are looking here not for performance - but rather for code size. */ - size_t cnt; - size_t total; - char *result; - - /* Determine the number of DSTs. */ - cnt = _dl_dst_count (input); - - /* If we do not have to replace anything simply copy the string. */ - if (__glibc_likely (cnt == 0)) - return __strdup (input); - - /* Determine the length of the substituted string. */ - total = DL_DST_REQUIRED (l, input, strlen (input), cnt); - - /* Allocate the necessary memory. 
*/ - result = (char *) malloc (total + 1); + struct alloc_buffer buf = {}; + size_t size = _dl_dst_substitute (l, input, &buf); + char *result = malloc (size); if (result == NULL) return NULL; - - return _dl_dst_substitute (l, input, result); + buf = alloc_buffer_create (result, size); + if (_dl_dst_substitute (l, input, &buf) == 0) + /* Mark the expanded string as to be ignored. */ + *result = '\0'; + else + assert (!alloc_buffer_has_failed (&buf)); + return result; } diff --git a/elf/dl-open.c b/elf/dl-open.c index 60a1dce9de..4fb77e3ff7 100644 --- a/elf/dl-open.c +++ b/elf/dl-open.c @@ -38,7 +38,6 @@ #include #include -#include #include diff --git a/elf/dl-origin.c b/elf/dl-origin.c index 9f6b921b01..5d06f5bbe3 100644 --- a/elf/dl-origin.c +++ b/elf/dl-origin.c @@ -21,9 +21,6 @@ #include #include -#include - - const char * _dl_get_origin (void) { diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index c0785cba04..e8418973ed 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -1223,12 +1223,11 @@ extern struct link_map * _dl_get_dl_main_map (void) attribute_hidden; /* Find origin of the executable. */ extern const char *_dl_get_origin (void) attribute_hidden; -/* Count DSTs. */ -extern size_t _dl_dst_count (const char *name) attribute_hidden; - /* Substitute DST values. */ -extern char *_dl_dst_substitute (struct link_map *l, const char *name, - char *result) attribute_hidden; +struct alloc_buffer; +size_t _dl_dst_substitute (struct link_map *l, const char *name, + struct alloc_buffer *result) + attribute_hidden __nonnull ((1, 2, 3)); /* Open the shared object NAME, relocate it, and run its initializer if it hasn't already been run. MODE is as for `dlopen' (see ). If diff --git a/sysdeps/unix/sysv/linux/dl-origin.c b/sysdeps/unix/sysv/linux/dl-origin.c index decdd8ae9e..9c87ca3208 100644 --- a/sysdeps/unix/sysv/linux/dl-origin.c +++ b/sysdeps/unix/sysv/linux/dl-origin.c @@ -17,7 +17,6 @@ . 
. */ #include -#include #include #include #include

From patchwork Sun Feb 2 21:13:30 2025
From: Florian Weimer
To: libc-alpha@sourceware.org
Subject: [PATCH v4 08/14] elf: Remove run-time-writable fields from struct link_map
Date: Sun, 02 Feb 2025 22:13:30 +0100

And introduce struct link_map_rw. These fields are written during run-time relocation (for lazy binding) or during dlopen, so they are difficult to handle efficiently with otherwise read-only link maps. Moving them into a separate allocation makes it possible to keep the read-write part while the rest of the link map is read-only.

Global-dynamic TLS is lazily initialized and may therefore write to the l_tls_offset field. The code does not acquire the main loader lock, so this field has to be moved to the read-write space. It can be moved back to the main link map once global-dynamic TLS no longer uses lazy initialization.

Auditors can write to the cookie member, so it has to remain read-write even if other parts of the link map are write-protected. Allocation of the l_rw part of the rtld link map is changed so that the auditor states come immediately after it, just as for other link maps.
The dynamic linker re-runs dependency sorting during process shutdown in _dl_fini, instead of simply using the reverse initialization order. (This is required for compatibility with existing applications.) This means that the l_idx and l_visited fields are written to. There is no way to report errors during shutdown. If these fields are always writable, this avoids the need to make link maps writable during _dl_fini, avoiding the error reporting issue. --- elf/circleload1.c | 3 +- elf/dl-call_fini.c | 2 +- elf/dl-close.c | 106 +++++++++++---------- elf/dl-deps.c | 14 +-- elf/dl-find_object.c | 2 +- elf/dl-fini.c | 8 +- elf/dl-init.c | 4 +- elf/dl-lookup.c | 42 ++++----- elf/dl-object.c | 17 ++-- elf/dl-open.c | 29 +++--- elf/dl-reloc.c | 15 +-- elf/dl-sort-maps.c | 26 +++--- elf/dl-static-tls.h | 8 +- elf/dl-support.c | 2 +- elf/dl-tls.c | 37 ++++---- elf/get-dynamic-info.h | 2 +- elf/loadtest.c | 4 +- elf/neededtest.c | 3 +- elf/neededtest2.c | 3 +- elf/neededtest3.c | 3 +- elf/neededtest4.c | 3 +- elf/rtld.c | 19 ++-- elf/tst-tls_tp_offset.c | 3 +- elf/unload.c | 2 +- elf/unload2.c | 2 +- htl/pt-alloc.c | 5 +- include/link.h | 123 +++++++++++++++---------- nptl/Versions | 3 +- nptl_db/db_info.c | 1 + nptl_db/structs.def | 3 +- nptl_db/td_thr_tlsbase.c | 12 ++- stdlib/cxa_thread_atexit_impl.c | 4 +- sysdeps/aarch64/dl-machine.h | 5 +- sysdeps/alpha/dl-machine.h | 4 +- sysdeps/arc/dl-machine.h | 3 +- sysdeps/arm/dl-machine.h | 4 +- sysdeps/csky/dl-machine.h | 2 +- sysdeps/generic/ldsodefs.h | 12 +-- sysdeps/hppa/dl-machine.h | 3 +- sysdeps/i386/dl-machine.h | 11 ++- sysdeps/loongarch/dl-tls.h | 2 +- sysdeps/m68k/dl-tls.h | 2 +- sysdeps/microblaze/dl-machine.h | 3 +- sysdeps/mips/dl-tls.h | 2 +- sysdeps/or1k/dl-machine.h | 4 +- sysdeps/powerpc/dl-tls.h | 2 +- sysdeps/powerpc/powerpc32/dl-machine.h | 4 +- sysdeps/powerpc/powerpc64/dl-machine.h | 4 +- sysdeps/riscv/dl-tls.h | 2 +- sysdeps/s390/s390-32/dl-machine.h | 5 +- sysdeps/s390/s390-64/dl-machine.h | 5 +- 
sysdeps/sh/dl-machine.h | 7 +- sysdeps/sparc/sparc32/dl-machine.h | 4 +- sysdeps/sparc/sparc64/dl-machine.h | 4 +- sysdeps/x86/dl-prop.h | 2 +- sysdeps/x86_64/dl-machine.h | 5 +- 56 files changed, 337 insertions(+), 274 deletions(-) diff --git a/elf/circleload1.c b/elf/circleload1.c index 990ff84a84..eeaeb3b8d7 100644 --- a/elf/circleload1.c +++ b/elf/circleload1.c @@ -29,7 +29,8 @@ check_loaded_objects (const char **loaded) for (lm = MAPS; lm; lm = lm->l_next) { if (lm->l_name && lm->l_name[0]) - printf(" %s, count = %d\n", lm->l_name, (int) lm->l_direct_opencount); + printf(" %s, count = %d\n", lm->l_name, + (int) lm->l_rw->l_direct_opencount); if (lm->l_type == lt_loaded && lm->l_name) { int match = 0; diff --git a/elf/dl-call_fini.c b/elf/dl-call_fini.c index 950744cb3d..8ee2724453 100644 --- a/elf/dl-call_fini.c +++ b/elf/dl-call_fini.c @@ -29,7 +29,7 @@ _dl_call_fini (void *closure_map) _dl_debug_printf ("\ncalling fini: %s [%lu]\n\n", map->l_name, map->l_ns); /* Make sure nothing happens if we are called twice. */ - map->l_init_called = 0; + map->l_rw->l_init_called = 0; ElfW(Dyn) *fini_array = map->l_info[DT_FINI_ARRAY]; if (fini_array != NULL) diff --git a/elf/dl-close.c b/elf/dl-close.c index 47bd3dab81..3169ad03bd 100644 --- a/elf/dl-close.c +++ b/elf/dl-close.c @@ -109,23 +109,23 @@ void _dl_close_worker (struct link_map *map, bool force) { /* One less direct use. */ - --map->l_direct_opencount; + --map->l_rw->l_direct_opencount; /* If _dl_close is called recursively (some destructor call dlclose), just record that the parent _dl_close will need to do garbage collection again and return. 
*/ static enum { not_pending, pending, rerun } dl_close_state; - if (map->l_direct_opencount > 0 || map->l_type != lt_loaded + if (map->l_rw->l_direct_opencount > 0 || map->l_type != lt_loaded || dl_close_state != not_pending) { - if (map->l_direct_opencount == 0 && map->l_type == lt_loaded) + if (map->l_rw->l_direct_opencount == 0 && map->l_type == lt_loaded) dl_close_state = rerun; /* There are still references to this object. Do nothing more. */ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES)) _dl_debug_printf ("\nclosing file=%s; direct_opencount=%u\n", - map->l_name, map->l_direct_opencount); + map->l_name, map->l_rw->l_direct_opencount); return; } @@ -147,7 +147,7 @@ _dl_close_worker (struct link_map *map, bool force) { l->l_map_used = 0; l->l_map_done = 0; - l->l_idx = idx; + l->l_rw->l_idx = idx; maps[idx] = l; ++idx; } @@ -157,10 +157,10 @@ _dl_close_worker (struct link_map *map, bool force) The map variable is NULL after a retry. */ if (map != NULL) { - maps[map->l_idx] = maps[0]; - maps[map->l_idx]->l_idx = map->l_idx; + maps[map->l_rw->l_idx] = maps[0]; + maps[map->l_rw->l_idx]->l_rw->l_idx = map->l_rw->l_idx; maps[0] = map; - maps[0]->l_idx = 0; + maps[0]->l_rw->l_idx = 0; } /* Keep track of the lowest index link map we have covered already. */ @@ -175,11 +175,11 @@ _dl_close_worker (struct link_map *map, bool force) /* Check whether this object is still used. */ if (l->l_type == lt_loaded - && l->l_direct_opencount == 0 - && !l->l_nodelete_active + && l->l_rw->l_direct_opencount == 0 + && !l->l_rw->l_nodelete_active /* See CONCURRENCY NOTES in cxa_thread_atexit_impl.c to know why acquire is sufficient and correct. */ - && atomic_load_acquire (&l->l_tls_dtor_count) == 0 + && atomic_load_acquire (&l->l_rw->l_tls_dtor_count) == 0 && !l->l_map_used) continue; @@ -187,7 +187,7 @@ _dl_close_worker (struct link_map *map, bool force) l->l_map_used = 1; l->l_map_done = 1; /* Signal the object is still needed. 
*/ - l->l_idx = IDX_STILL_USED; + l->l_rw->l_idx = IDX_STILL_USED; /* Mark all dependencies as used. */ if (l->l_initfini != NULL) @@ -197,9 +197,10 @@ _dl_close_worker (struct link_map *map, bool force) struct link_map **lp = &l->l_initfini[1]; while (*lp != NULL) { - if ((*lp)->l_idx != IDX_STILL_USED) + if ((*lp)->l_rw->l_idx != IDX_STILL_USED) { - assert ((*lp)->l_idx >= 0 && (*lp)->l_idx < nloaded); + assert ((*lp)->l_rw->l_idx >= 0 + && (*lp)->l_rw->l_idx < nloaded); if (!(*lp)->l_map_used) { @@ -208,8 +209,8 @@ _dl_close_worker (struct link_map *map, bool force) already processed it, then we need to go back and process again from that point forward to ensure we keep all of its dependencies also. */ - if ((*lp)->l_idx - 1 < done_index) - done_index = (*lp)->l_idx - 1; + if ((*lp)->l_rw->l_idx - 1 < done_index) + done_index = (*lp)->l_rw->l_idx - 1; } } @@ -217,20 +218,20 @@ _dl_close_worker (struct link_map *map, bool force) } } /* And the same for relocation dependencies. */ - if (l->l_reldeps != NULL) - for (unsigned int j = 0; j < l->l_reldeps->act; ++j) + if (l->l_rw->l_reldeps != NULL) + for (unsigned int j = 0; j < l->l_rw->l_reldeps->act; ++j) { - struct link_map *jmap = l->l_reldeps->list[j]; + struct link_map *jmap = l->l_rw->l_reldeps->list[j]; - if (jmap->l_idx != IDX_STILL_USED) + if (jmap->l_rw->l_idx != IDX_STILL_USED) { - assert (jmap->l_idx >= 0 && jmap->l_idx < nloaded); + assert (jmap->l_rw->l_idx >= 0 && jmap->l_rw->l_idx < nloaded); if (!jmap->l_map_used) { jmap->l_map_used = 1; - if (jmap->l_idx - 1 < done_index) - done_index = jmap->l_idx - 1; + if (jmap->l_rw->l_idx - 1 < done_index) + done_index = jmap->l_rw->l_idx - 1; } } } @@ -255,12 +256,12 @@ _dl_close_worker (struct link_map *map, bool force) if (!imap->l_map_used) { - assert (imap->l_type == lt_loaded && !imap->l_nodelete_active); + assert (imap->l_type == lt_loaded && !imap->l_rw->l_nodelete_active); /* Call its termination function. Do not do it for half-cooked objects. 
Temporarily disable exception handling, so that errors are fatal. */ - if (imap->l_init_called) + if (imap->l_rw->l_init_called) _dl_catch_exception (NULL, _dl_call_fini, imap); #ifdef SHARED @@ -327,7 +328,7 @@ _dl_close_worker (struct link_map *map, bool force) ((char *) imap->l_scope[cnt] - offsetof (struct link_map, l_searchlist)); assert (tmap->l_ns == nsid); - if (tmap->l_idx == IDX_STILL_USED) + if (tmap->l_rw->l_idx == IDX_STILL_USED) ++remain; else removed_any = true; @@ -372,7 +373,7 @@ _dl_close_worker (struct link_map *map, bool force) struct link_map *tmap = (struct link_map *) ((char *) imap->l_scope[cnt] - offsetof (struct link_map, l_searchlist)); - if (tmap->l_idx != IDX_STILL_USED) + if (tmap->l_rw->l_idx != IDX_STILL_USED) { /* Remove the scope. Or replace with own map's scope. */ @@ -417,7 +418,7 @@ _dl_close_worker (struct link_map *map, bool force) /* The loader is gone, so mark the object as not having one. Note: l_idx != IDX_STILL_USED -> object will be removed. */ if (imap->l_loader != NULL - && imap->l_loader->l_idx != IDX_STILL_USED) + && imap->l_loader->l_rw->l_idx != IDX_STILL_USED) imap->l_loader = NULL; /* Remember where the first dynamically loaded object is. */ @@ -507,14 +508,14 @@ _dl_close_worker (struct link_map *map, bool force) if (GL(dl_tls_dtv_slotinfo_list) != NULL && ! remove_slotinfo (imap->l_tls_modid, GL(dl_tls_dtv_slotinfo_list), 0, - imap->l_init_called)) + imap->l_rw->l_init_called)) /* All dynamically loaded modules with TLS are unloaded. */ /* Can be read concurrently. */ atomic_store_relaxed (&GL(dl_tls_max_dtv_idx), GL(dl_tls_static_nelem)); - if (imap->l_tls_offset != NO_TLS_OFFSET - && imap->l_tls_offset != FORCED_DYNAMIC_TLS_OFFSET) + if (imap->l_rw->l_tls_offset != NO_TLS_OFFSET + && imap->l_rw->l_tls_offset != FORCED_DYNAMIC_TLS_OFFSET) { /* Collect a contiguous chunk built from the objects in this search list, going in either direction. 
When the @@ -522,19 +523,19 @@ _dl_close_worker (struct link_map *map, bool force) reclaim it. */ #if TLS_TCB_AT_TP if (tls_free_start == NO_TLS_OFFSET - || (size_t) imap->l_tls_offset == tls_free_start) + || (size_t) imap->l_rw->l_tls_offset == tls_free_start) { /* Extend the contiguous chunk being reclaimed. */ tls_free_start - = imap->l_tls_offset - imap->l_tls_blocksize; + = imap->l_rw->l_tls_offset - imap->l_tls_blocksize; if (tls_free_end == NO_TLS_OFFSET) - tls_free_end = imap->l_tls_offset; + tls_free_end = imap->l_rw->l_tls_offset; } - else if (imap->l_tls_offset - imap->l_tls_blocksize + else if (imap->l_rw->l_tls_offset - imap->l_tls_blocksize == tls_free_end) /* Extend the chunk backwards. */ - tls_free_end = imap->l_tls_offset; + tls_free_end = imap->l_rw->l_tls_offset; else { /* This isn't contiguous with the last chunk freed. @@ -543,19 +544,20 @@ _dl_close_worker (struct link_map *map, bool force) if (tls_free_end == GL(dl_tls_static_used)) { GL(dl_tls_static_used) = tls_free_start; - tls_free_end = imap->l_tls_offset; + tls_free_end = imap->l_rw->l_tls_offset; tls_free_start = tls_free_end - imap->l_tls_blocksize; } - else if ((size_t) imap->l_tls_offset + else if ((size_t) imap->l_rw->l_tls_offset == GL(dl_tls_static_used)) GL(dl_tls_static_used) - = imap->l_tls_offset - imap->l_tls_blocksize; - else if (tls_free_end < (size_t) imap->l_tls_offset) + = imap->l_rw->l_tls_offset - imap->l_tls_blocksize; + else if (tls_free_end + < (size_t) imap->l_rw->l_tls_offset) { /* We pick the later block. It has a chance to be freed. 
*/ - tls_free_end = imap->l_tls_offset; + tls_free_end = imap->l_rw->l_tls_offset; tls_free_start = tls_free_end - imap->l_tls_blocksize; } @@ -564,34 +566,37 @@ _dl_close_worker (struct link_map *map, bool force) if (tls_free_start == NO_TLS_OFFSET) { tls_free_start = imap->l_tls_firstbyte_offset; - tls_free_end = (imap->l_tls_offset + tls_free_end = (imap->l_rw->l_tls_offset + imap->l_tls_blocksize); } else if (imap->l_tls_firstbyte_offset == tls_free_end) /* Extend the contiguous chunk being reclaimed. */ - tls_free_end = imap->l_tls_offset + imap->l_tls_blocksize; - else if (imap->l_tls_offset + imap->l_tls_blocksize + tls_free_end = (imap->l_rw->l_tls_offset + + imap->l_tls_blocksize); + else if (imap->l_rw->l_tls_offset + imap->l_tls_blocksize == tls_free_start) /* Extend the chunk backwards. */ tls_free_start = imap->l_tls_firstbyte_offset; /* This isn't contiguous with the last chunk freed. One of them will be leaked unless we can free one block right away. */ - else if (imap->l_tls_offset + imap->l_tls_blocksize + else if (imap->l_rw->l_tls_offset + imap->l_tls_blocksize == GL(dl_tls_static_used)) GL(dl_tls_static_used) = imap->l_tls_firstbyte_offset; else if (tls_free_end == GL(dl_tls_static_used)) { GL(dl_tls_static_used) = tls_free_start; tls_free_start = imap->l_tls_firstbyte_offset; - tls_free_end = imap->l_tls_offset + imap->l_tls_blocksize; + tls_free_end = (imap->l_rw->l_tls_offset + + imap->l_tls_blocksize); } else if (tls_free_end < imap->l_tls_firstbyte_offset) { /* We pick the later block. It has a chance to be freed. 
*/ tls_free_start = imap->l_tls_firstbyte_offset; - tls_free_end = imap->l_tls_offset + imap->l_tls_blocksize; + tls_free_end = (imap->l_rw->l_tls_offset + + imap->l_tls_blocksize); } #else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" @@ -663,7 +668,8 @@ _dl_close_worker (struct link_map *map, bool force) if (imap->l_origin != (char *) -1) free ((char *) imap->l_origin); - free (imap->l_reldeps); + free (imap->l_rw->l_reldeps); + free (imap->l_rw); /* Print debugging message. */ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES)) @@ -769,7 +775,7 @@ _dl_close (void *_map) before we took the lock. There is no way to detect this (see below) so we proceed assuming this isn't the case. First see whether we can remove the object at all. */ - if (__glibc_unlikely (map->l_nodelete_active)) + if (__glibc_unlikely (map->l_rw->l_nodelete_active)) { /* Nope. Do nothing. */ __rtld_lock_unlock_recursive (GL(dl_load_lock)); @@ -786,7 +792,7 @@ _dl_close (void *_map) should be a detectable case and given that dlclose should be threadsafe we need this to be a reliable detection. This is bug 20990. */ - if (__builtin_expect (map->l_direct_opencount, 1) == 0) + if (__builtin_expect (map->l_rw->l_direct_opencount, 1) == 0) { __rtld_lock_unlock_recursive (GL(dl_load_lock)); _dl_signal_error (0, map->l_name, NULL, N_("shared object not open")); diff --git a/elf/dl-deps.c b/elf/dl-deps.c index 3c8a5ebced..3235b1d462 100644 --- a/elf/dl-deps.c +++ b/elf/dl-deps.c @@ -478,20 +478,20 @@ _dl_map_object_deps (struct link_map *map, /* Maybe we can remove some relocation dependencies now. */ struct link_map_reldeps *l_reldeps = NULL; - if (map->l_reldeps != NULL) + if (map->l_rw->l_reldeps != NULL) { for (i = 0; i < nlist; ++i) map->l_searchlist.r_list[i]->l_reserved = 1; /* Avoid removing relocation dependencies of the main binary. 
*/ map->l_reserved = 0; - struct link_map **list = &map->l_reldeps->list[0]; - for (i = 0; i < map->l_reldeps->act; ++i) + struct link_map **list = &map->l_rw->l_reldeps->list[0]; + for (i = 0; i < map->l_rw->l_reldeps->act; ++i) if (list[i]->l_reserved) { /* Need to allocate new array of relocation dependencies. */ l_reldeps = malloc (sizeof (*l_reldeps) - + map->l_reldepsmax + + map->l_rw->l_reldepsmax * sizeof (struct link_map *)); if (l_reldeps == NULL) /* Bad luck, keep the reldeps duplicated between @@ -502,7 +502,7 @@ _dl_map_object_deps (struct link_map *map, unsigned int j = i; memcpy (&l_reldeps->list[0], &list[0], i * sizeof (struct link_map *)); - for (i = i + 1; i < map->l_reldeps->act; ++i) + for (i = i + 1; i < map->l_rw->l_reldeps->act; ++i) if (!list[i]->l_reserved) l_reldeps->list[j++] = list[i]; l_reldeps->act = j; @@ -547,8 +547,8 @@ _dl_map_object_deps (struct link_map *map, if (l_reldeps != NULL) { atomic_write_barrier (); - void *old_l_reldeps = map->l_reldeps; - map->l_reldeps = l_reldeps; + void *old_l_reldeps = map->l_rw->l_reldeps; + map->l_rw->l_reldeps = l_reldeps; _dl_scope_free (old_l_reldeps); } if (old_l_initfini != NULL) diff --git a/elf/dl-find_object.c b/elf/dl-find_object.c index 1e76373292..d8d09ffe0b 100644 --- a/elf/dl-find_object.c +++ b/elf/dl-find_object.c @@ -508,7 +508,7 @@ _dlfo_process_initial (void) if (l != main_map && l == l->l_real) { /* lt_library link maps are implicitly NODELETE. */ - if (l->l_type == lt_library || l->l_nodelete_active) + if (l->l_type == lt_library || l->l_rw->l_nodelete_active) { if (_dlfo_nodelete_mappings != NULL) /* Second pass only. */ diff --git a/elf/dl-fini.c b/elf/dl-fini.c index 3add4f77c1..3f3848ee89 100644 --- a/elf/dl-fini.c +++ b/elf/dl-fini.c @@ -78,12 +78,12 @@ _dl_fini (void) assert (i < nloaded); maps[i] = l; - l->l_idx = i; + l->l_rw->l_idx = i; ++i; /* Bump l_direct_opencount of all objects so that they are not dlclose()ed from underneath us. 
*/ - ++l->l_direct_opencount; + ++l->l_rw->l_direct_opencount; } else /* Used below to call la_objclose for the ld.so proxy @@ -115,7 +115,7 @@ _dl_fini (void) { struct link_map *l = maps[i]; - if (l->l_init_called) + if (l->l_rw->l_init_called) { _dl_call_fini (l); #ifdef SHARED @@ -125,7 +125,7 @@ _dl_fini (void) } /* Correct the previous increment. */ - --l->l_direct_opencount; + --l->l_rw->l_direct_opencount; } if (proxy_link_map != NULL) diff --git a/elf/dl-init.c b/elf/dl-init.c index 2271208e68..ad82a47d75 100644 --- a/elf/dl-init.c +++ b/elf/dl-init.c @@ -34,13 +34,13 @@ call_init (struct link_map *l, int argc, char **argv, char **env) need relocation.) */ assert (l->l_relocated || l->l_type == lt_executable); - if (l->l_init_called) + if (l->l_rw->l_init_called) /* This object is all done. */ return; /* Avoid handling this constructor again in case we have a circular dependency. */ - l->l_init_called = 1; + l->l_rw->l_init_called = 1; /* Check for object which constructors we do not run here. */ if (__builtin_expect (l->l_name[0], 'a') == '\0' diff --git a/elf/dl-lookup.c b/elf/dl-lookup.c index ece647f009..415c4f3c78 100644 --- a/elf/dl-lookup.c +++ b/elf/dl-lookup.c @@ -175,9 +175,9 @@ static void mark_nodelete (struct link_map *map, int flags) { if (flags & DL_LOOKUP_FOR_RELOCATE) - map->l_nodelete_pending = true; + map->l_rw->l_nodelete_pending = true; else - map->l_nodelete_active = true; + map->l_rw->l_nodelete_active = true; } /* Return true if MAP is marked as NODELETE according to the lookup @@ -187,8 +187,8 @@ is_nodelete (struct link_map *map, int flags) { /* Non-pending NODELETE always counts. Pending NODELETE only counts during initial relocation processing. */ - return map->l_nodelete_active - || ((flags & DL_LOOKUP_FOR_RELOCATE) && map->l_nodelete_pending); + return map->l_rw->l_nodelete_active + || ((flags & DL_LOOKUP_FOR_RELOCATE) && map->l_rw->l_nodelete_pending); } /* Utility function for do_lookup_x. 
Lookup an STB_GNU_UNIQUE symbol @@ -532,7 +532,7 @@ add_dependency (struct link_map *undef_map, struct link_map *map, int flags) return 0; struct link_map_reldeps *l_reldeps - = atomic_forced_read (undef_map->l_reldeps); + = atomic_forced_read (undef_map->l_rw->l_reldeps); /* Make sure l_reldeps is read before l_initfini. */ atomic_read_barrier (); @@ -591,22 +591,22 @@ add_dependency (struct link_map *undef_map, struct link_map *map, int flags) /* Redo the l_reldeps check if undef_map's l_reldeps changed in the mean time. */ - if (undef_map->l_reldeps != NULL) + if (undef_map->l_rw->l_reldeps != NULL) { - if (undef_map->l_reldeps != l_reldeps) + if (undef_map->l_rw->l_reldeps != l_reldeps) { - struct link_map **list = &undef_map->l_reldeps->list[0]; - l_reldepsact = undef_map->l_reldeps->act; + struct link_map **list = &undef_map->l_rw->l_reldeps->list[0]; + l_reldepsact = undef_map->l_rw->l_reldeps->act; for (i = 0; i < l_reldepsact; ++i) if (list[i] == map) goto out_check; } - else if (undef_map->l_reldeps->act > l_reldepsact) + else if (undef_map->l_rw->l_reldeps->act > l_reldepsact) { struct link_map **list - = &undef_map->l_reldeps->list[0]; + = &undef_map->l_rw->l_reldeps->list[0]; i = l_reldepsact; - l_reldepsact = undef_map->l_reldeps->act; + l_reldepsact = undef_map->l_rw->l_reldeps->act; for (; i < l_reldepsact; ++i) if (list[i] == map) goto out_check; @@ -662,14 +662,14 @@ marking %s [%lu] as NODELETE due to reference from %s [%lu]\n", } /* Add the reference now. */ - if (__glibc_unlikely (l_reldepsact >= undef_map->l_reldepsmax)) + if (__glibc_unlikely (l_reldepsact >= undef_map->l_rw->l_reldepsmax)) { /* Allocate more memory for the dependency list. Since this can never happen during the startup phase we can use `realloc'. */ struct link_map_reldeps *newp; - unsigned int max - = undef_map->l_reldepsmax ? undef_map->l_reldepsmax * 2 : 10; + unsigned int max = (undef_map->l_rw->l_reldepsmax + ? 
undef_map->l_rw->l_reldepsmax * 2 : 10); #ifdef RTLD_PREPARE_FOREIGN_CALL RTLD_PREPARE_FOREIGN_CALL; @@ -696,23 +696,23 @@ marking %s [%lu] as NODELETE due to memory allocation failure\n", else { if (l_reldepsact) - memcpy (&newp->list[0], &undef_map->l_reldeps->list[0], + memcpy (&newp->list[0], &undef_map->l_rw->l_reldeps->list[0], l_reldepsact * sizeof (struct link_map *)); newp->list[l_reldepsact] = map; newp->act = l_reldepsact + 1; atomic_write_barrier (); - void *old = undef_map->l_reldeps; - undef_map->l_reldeps = newp; - undef_map->l_reldepsmax = max; + void *old = undef_map->l_rw->l_reldeps; + undef_map->l_rw->l_reldeps = newp; + undef_map->l_rw->l_reldepsmax = max; if (old) _dl_scope_free (old); } } else { - undef_map->l_reldeps->list[l_reldepsact] = map; + undef_map->l_rw->l_reldeps->list[l_reldepsact] = map; atomic_write_barrier (); - undef_map->l_reldeps->act = l_reldepsact + 1; + undef_map->l_rw->l_reldeps->act = l_reldepsact + 1; } /* Display information if we are debugging. 
*/ diff --git a/elf/dl-object.c b/elf/dl-object.c index 51d3704edc..db9c635c7e 100644 --- a/elf/dl-object.c +++ b/elf/dl-object.c @@ -89,15 +89,20 @@ _dl_new_object (char *realname, const char *libname, int type, # define audit_space 0 #endif - new = (struct link_map *) calloc (sizeof (*new) + audit_space - + sizeof (struct link_map *) - + sizeof (*newname) + libname_len, 1); + new = calloc (sizeof (*new) + + sizeof (struct link_map_private *) + + sizeof (*newname) + libname_len, 1); if (new == NULL) return NULL; + new->l_rw = calloc (1, sizeof (*new->l_rw) + audit_space); + if (new->l_rw == NULL) + { + free (new); + return NULL; + } new->l_real = new; - new->l_symbolic_searchlist.r_list = (struct link_map **) ((char *) (new + 1) - + audit_space); + new->l_symbolic_searchlist.r_list = (struct link_map **) ((char *) (new + 1)); new->l_libname = newname = (struct libname_list *) (new->l_symbolic_searchlist.r_list + 1); @@ -131,7 +136,7 @@ _dl_new_object (char *realname, const char *libname, int type, new->l_used = 1; new->l_loader = loader; #if NO_TLS_OFFSET != 0 - new->l_tls_offset = NO_TLS_OFFSET; + new->l_rw->l_tls_offset = NO_TLS_OFFSET; #endif new->l_ns = nsid; diff --git a/elf/dl-open.c b/elf/dl-open.c index 4fb77e3ff7..85d6bbc7c2 100644 --- a/elf/dl-open.c +++ b/elf/dl-open.c @@ -261,7 +261,7 @@ resize_scopes (struct link_map *new) /* If the initializer has been called already, the object has not been loaded here and now. */ - if (imap->l_init_called && imap->l_type == lt_loaded) + if (imap->l_rw->l_init_called && imap->l_type == lt_loaded) { if (scope_has_map (imap, new)) /* Avoid duplicates. */ @@ -325,7 +325,7 @@ update_scopes (struct link_map *new) struct link_map *imap = new->l_searchlist.r_list[i]; int from_scope = 0; - if (imap->l_init_called && imap->l_type == lt_loaded) + if (imap->l_rw->l_init_called && imap->l_type == lt_loaded) { if (scope_has_map (imap, new)) /* Avoid duplicates. 
*/ @@ -424,7 +424,7 @@ activate_nodelete (struct link_map *new) NODELETE status for objects outside the local scope. */ for (struct link_map *l = GL (dl_ns)[new->l_ns]._ns_loaded; l != NULL; l = l->l_next) - if (l->l_nodelete_pending) + if (l->l_rw->l_nodelete_pending) { if (__glibc_unlikely (GLRO (dl_debug_mask) & DL_DEBUG_FILES)) _dl_debug_printf ("activating NODELETE for %s [%lu]\n", @@ -433,11 +433,11 @@ activate_nodelete (struct link_map *new) /* The flag can already be true at this point, e.g. a signal handler may have triggered lazy binding and set NODELETE status immediately. */ - l->l_nodelete_active = true; + l->l_rw->l_nodelete_active = true; /* This is just a debugging aid, to indicate that activate_nodelete has run for this map. */ - l->l_nodelete_pending = false; + l->l_rw->l_nodelete_pending = false; } } @@ -476,7 +476,7 @@ _dl_open_relocate_one_object (struct dl_open_args *args, struct r_debug *r, _dl_start_profile (); /* Prevent unloading the object. */ - GL(dl_profile_map)->l_nodelete_active = true; + GL(dl_profile_map)->l_rw->l_nodelete_active = true; } } else @@ -505,7 +505,7 @@ is_already_fully_open (struct link_map *map, int mode) /* The object is already in the global scope if requested. */ && (!(mode & RTLD_GLOBAL) || map->l_global) /* The object is already NODELETE if requested. */ - && (!(mode & RTLD_NODELETE) || map->l_nodelete_active)); + && (!(mode & RTLD_NODELETE) || map->l_rw->l_nodelete_active)); } static void @@ -547,7 +547,7 @@ dl_open_worker_begin (void *a) return; /* This object is directly loaded. */ - ++new->l_direct_opencount; + ++new->l_rw->l_direct_opencount; /* It was already open. See is_already_fully_open above. */ if (__glibc_unlikely (new->l_searchlist.r_list != NULL)) @@ -555,7 +555,8 @@ dl_open_worker_begin (void *a) /* Let the user know about the opencount. 
*/ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES)) _dl_debug_printf ("opening file=%s [%lu]; direct_opencount=%u\n\n", - new->l_name, new->l_ns, new->l_direct_opencount); + new->l_name, new->l_ns, + new->l_rw->l_direct_opencount); #ifdef SHARED /* No relocation processing on this execution path. But @@ -576,10 +577,10 @@ dl_open_worker_begin (void *a) if (__glibc_unlikely (mode & RTLD_NODELETE)) { if (__glibc_unlikely (GLRO (dl_debug_mask) & DL_DEBUG_FILES) - && !new->l_nodelete_active) + && !new->l_rw->l_nodelete_active) _dl_debug_printf ("marking %s [%lu] as NODELETE\n", new->l_name, new->l_ns); - new->l_nodelete_active = true; + new->l_rw->l_nodelete_active = true; } /* Finalize the addition to the global scope. */ @@ -592,7 +593,7 @@ dl_open_worker_begin (void *a) /* Schedule NODELETE marking for the directly loaded object if requested. */ if (__glibc_unlikely (mode & RTLD_NODELETE)) - new->l_nodelete_pending = true; + new->l_rw->l_nodelete_pending = true; /* Load that object's dependencies. */ _dl_map_object_deps (new, NULL, 0, 0, @@ -795,7 +796,7 @@ dl_open_worker (void *a) /* Let the user know about the opencount. */ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES)) _dl_debug_printf ("opening file=%s [%lu]; direct_opencount=%u\n\n", - new->l_name, new->l_ns, new->l_direct_opencount); + new->l_name, new->l_ns, new->l_rw->l_direct_opencount); } void * @@ -881,7 +882,7 @@ no more namespaces available for dlmopen()")); if (is_already_fully_open (args.map, mode)) { /* We can use the fast path. */ - ++args.map->l_direct_opencount; + ++args.map->l_rw->l_direct_opencount; __rtld_lock_unlock_recursive (GL(dl_load_lock)); return args.map; } diff --git a/elf/dl-reloc.c b/elf/dl-reloc.c index 05bf54bebd..603390498b 100644 --- a/elf/dl-reloc.c +++ b/elf/dl-reloc.c @@ -41,7 +41,7 @@ dynamically loaded. This can only work if there is enough surplus in the static TLS area already allocated for each running thread. 
If this object's TLS segment is too big to fit, we fail with -1. If it fits, - we set MAP->l_tls_offset and return 0. + we set MAP->l_rw->l_tls_offset and return 0. A portion of the surplus static TLS can be optionally used to optimize dynamic TLS access (with TLSDESC or powerpc TLS optimizations). If OPTIONAL is true then TLS is allocated for such optimization and @@ -53,7 +53,7 @@ _dl_try_allocate_static_tls (struct link_map *map, bool optional) { /* If we've already used the variable with dynamic access, or if the alignment requirements are too high, fail. */ - if (map->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET + if (map->l_rw->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET || map->l_tls_align > GLRO (dl_tls_static_align)) { fail: @@ -81,7 +81,7 @@ _dl_try_allocate_static_tls (struct link_map *map, bool optional) size_t offset = GL(dl_tls_static_used) + use; - map->l_tls_offset = GL(dl_tls_static_used) = offset; + map->l_rw->l_tls_offset = GL(dl_tls_static_used) = offset; #elif TLS_DTV_AT_TP /* dl_tls_static_used includes the TCB at the beginning. 
*/ size_t offset = (ALIGN_UP(GL(dl_tls_static_used) @@ -100,7 +100,7 @@ _dl_try_allocate_static_tls (struct link_map *map, bool optional) else if (optional) GL(dl_tls_static_optional) -= use; - map->l_tls_offset = offset; + map->l_rw->l_tls_offset = offset; map->l_tls_firstbyte_offset = GL(dl_tls_static_used); GL(dl_tls_static_used) = used; #else @@ -134,7 +134,7 @@ void __attribute_noinline__ _dl_allocate_static_tls (struct link_map *map) { - if (map->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET + if (map->l_rw->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET || _dl_try_allocate_static_tls (map, false)) { _dl_signal_error (0, map->l_name, NULL, N_("\ @@ -150,9 +150,10 @@ void _dl_nothread_init_static_tls (struct link_map *map) { #if TLS_TCB_AT_TP - void *dest = (char *) THREAD_SELF - map->l_tls_offset; + void *dest = (char *) THREAD_SELF - map->l_rw->l_tls_offset; #elif TLS_DTV_AT_TP - void *dest = (char *) THREAD_SELF + map->l_tls_offset + TLS_PRE_TCB_SIZE; + void *dest = ((char *) THREAD_SELF + map->l_rw->l_tls_offset + + TLS_PRE_TCB_SIZE); #else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" #endif diff --git a/elf/dl-sort-maps.c b/elf/dl-sort-maps.c index e5740dfc1d..14c323c83e 100644 --- a/elf/dl-sort-maps.c +++ b/elf/dl-sort-maps.c @@ -51,7 +51,7 @@ _dl_sort_maps_original (struct link_map **maps, unsigned int nmaps, { /* Do not handle ld.so in secondary namespaces and objects which are not removed. 
*/ - if (thisp != thisp->l_real || thisp->l_idx == -1) + if (thisp != thisp->l_real || thisp->l_rw->l_idx == -1) goto skip; } @@ -87,10 +87,10 @@ _dl_sort_maps_original (struct link_map **maps, unsigned int nmaps, goto next; } - if (__glibc_unlikely (for_fini && maps[k]->l_reldeps != NULL)) + if (__glibc_unlikely (for_fini && maps[k]->l_rw->l_reldeps != NULL)) { - unsigned int m = maps[k]->l_reldeps->act; - struct link_map **relmaps = &maps[k]->l_reldeps->list[0]; + unsigned int m = maps[k]->l_rw->l_reldeps->act; + struct link_map **relmaps = &maps[k]->l_rw->l_reldeps->list[0]; /* Look through the relocation dependencies of the object. */ while (m-- > 0) @@ -137,32 +137,32 @@ dfs_traversal (struct link_map ***rpo, struct link_map *map, { /* _dl_map_object_deps ignores l_faked objects when calculating the number of maps before calling _dl_sort_maps, ignore them as well. */ - if (map->l_visited || map->l_faked) + if (map->l_rw->l_visited || map->l_faked) return; - map->l_visited = 1; + map->l_rw->l_visited = 1; if (map->l_initfini) { for (int i = 0; map->l_initfini[i] != NULL; i++) { struct link_map *dep = map->l_initfini[i]; - if (dep->l_visited == 0 + if (dep->l_rw->l_visited == 0 && dep->l_main_map == 0) dfs_traversal (rpo, dep, do_reldeps); } } - if (__glibc_unlikely (do_reldeps != NULL && map->l_reldeps != NULL)) + if (__glibc_unlikely (do_reldeps != NULL && map->l_rw->l_reldeps != NULL)) { /* Indicate that we encountered relocation dependencies during traversal. 
*/ *do_reldeps = true; - for (int m = map->l_reldeps->act - 1; m >= 0; m--) + for (int m = map->l_rw->l_reldeps->act - 1; m >= 0; m--) { - struct link_map *dep = map->l_reldeps->list[m]; - if (dep->l_visited == 0 + struct link_map *dep = map->l_rw->l_reldeps->list[m]; + if (dep->l_rw->l_visited == 0 && dep->l_main_map == 0) dfs_traversal (rpo, dep, do_reldeps); } @@ -181,7 +181,7 @@ _dl_sort_maps_dfs (struct link_map **maps, unsigned int nmaps, { struct link_map *first_map = maps[0]; for (int i = nmaps - 1; i >= 0; i--) - maps[i]->l_visited = 0; + maps[i]->l_rw->l_visited = 0; /* We apply DFS traversal for each of maps[i] until the whole total order is found and we're at the start of the Reverse-Postorder (RPO) sequence, @@ -244,7 +244,7 @@ _dl_sort_maps_dfs (struct link_map **maps, unsigned int nmaps, if (do_reldeps) { for (int i = nmaps - 1; i >= 0; i--) - rpo[i]->l_visited = 0; + rpo[i]->l_rw->l_visited = 0; struct link_map **maps_head = &maps[nmaps]; for (int i = nmaps - 1; i >= 0; i--) diff --git a/elf/dl-static-tls.h b/elf/dl-static-tls.h index 3bc29007a3..473d194ed6 100644 --- a/elf/dl-static-tls.h +++ b/elf/dl-static-tls.h @@ -29,8 +29,8 @@ can't be done, we fall back to the error that DF_STATIC_TLS is intended to produce. 
*/ #define HAVE_STATIC_TLS(map, sym_map) \ - (__builtin_expect ((sym_map)->l_tls_offset != NO_TLS_OFFSET \ - && ((sym_map)->l_tls_offset \ + (__builtin_expect ((sym_map)->l_rw->l_tls_offset != NO_TLS_OFFSET \ + && ((sym_map)->l_rw->l_tls_offset \ != FORCED_DYNAMIC_TLS_OFFSET), 1)) #define CHECK_STATIC_TLS(map, sym_map) \ @@ -40,9 +40,9 @@ } while (0) #define TRY_STATIC_TLS(map, sym_map) \ - (__builtin_expect ((sym_map)->l_tls_offset \ + (__builtin_expect ((sym_map)->l_rw->l_tls_offset \ != FORCED_DYNAMIC_TLS_OFFSET, 1) \ - && (__builtin_expect ((sym_map)->l_tls_offset != NO_TLS_OFFSET, 1) \ + && (__builtin_expect ((sym_map)->l_rw->l_tls_offset != NO_TLS_OFFSET, 1)\ || _dl_try_allocate_static_tls (sym_map, true) == 0)) int _dl_try_allocate_static_tls (struct link_map *map, bool optional) diff --git a/elf/dl-support.c b/elf/dl-support.c index a7d5a5e8ab..aa2be3e934 100644 --- a/elf/dl-support.c +++ b/elf/dl-support.c @@ -82,6 +82,7 @@ int _dl_bind_not; static struct link_map _dl_main_map = { .l_name = (char *) "", + .l_rw = &(struct link_map_rw) { .l_tls_offset = NO_TLS_OFFSET, }, .l_real = &_dl_main_map, .l_ns = LM_ID_BASE, .l_libname = &(struct libname_list) { .name = "", .dont_free = 1 }, @@ -98,7 +99,6 @@ static struct link_map _dl_main_map = .l_scope = _dl_main_map.l_scope_mem, .l_local_scope = { &_dl_main_map.l_searchlist }, .l_used = 1, - .l_tls_offset = NO_TLS_OFFSET, .l_serial = 1, }; diff --git a/elf/dl-tls.c b/elf/dl-tls.c index 8306a39e8d..a4a826e6a4 100644 --- a/elf/dl-tls.c +++ b/elf/dl-tls.c @@ -299,7 +299,7 @@ _dl_determine_tlsoffset (void) /* XXX For some architectures we perhaps should store the negative offset. */ - l->l_tls_offset = off; + l->l_rw->l_tls_offset = off; continue; } } @@ -316,7 +316,7 @@ _dl_determine_tlsoffset (void) /* XXX For some architectures we perhaps should store the negative offset. */ - l->l_tls_offset = off; + l->l_rw->l_tls_offset = off; } /* Insert the extra TLS block after the last TLS block. 
*/ @@ -378,9 +378,9 @@ _dl_determine_tlsoffset (void) off += l->l_tls_align; if (off + l->l_tls_blocksize - firstbyte <= freetop) { - l->l_tls_offset = off - firstbyte; + l->l_rw->l_tls_offset = off - firstbyte; freebottom = (off + l->l_tls_blocksize - firstbyte); continue; } } @@ -389,7 +389,7 @@ if (off - offset < firstbyte) off += l->l_tls_align; - l->l_tls_offset = off - firstbyte; + l->l_rw->l_tls_offset = off - firstbyte; if (off - firstbyte - offset > freetop - freebottom) { freebottom = offset; @@ -645,17 +645,17 @@ _dl_allocate_tls_init (void *result, bool main_thread) dtv[map->l_tls_modid].pointer.val = TLS_DTV_UNALLOCATED; dtv[map->l_tls_modid].pointer.to_free = NULL; - if (map->l_tls_offset == NO_TLS_OFFSET - || map->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET) + if (map->l_rw->l_tls_offset == NO_TLS_OFFSET + || map->l_rw->l_tls_offset == FORCED_DYNAMIC_TLS_OFFSET) continue; assert (map->l_tls_modid == total + cnt); assert (map->l_tls_blocksize >= map->l_tls_initimage_size); #if TLS_TCB_AT_TP - assert ((size_t) map->l_tls_offset >= map->l_tls_blocksize); - dest = (char *) result - map->l_tls_offset; + assert ((size_t) map->l_rw->l_tls_offset >= map->l_tls_blocksize); + dest = (char *) result - map->l_rw->l_tls_offset; #elif TLS_DTV_AT_TP - dest = (char *) result + map->l_tls_offset; + dest = (char *) result + map->l_rw->l_tls_offset; #else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" #endif @@ -959,22 +959,23 @@ tls_get_addr_tail (tls_index *ti, dtv_t *dtv, struct link_map *the_map) variable into static storage, we'll wait until the address in the static TLS block is set up, and use that. If we're undecided yet, make sure we make the decision holding the lock as well.
*/ - if (__glibc_unlikely (the_map->l_tls_offset + if (__glibc_unlikely (the_map->l_rw->l_tls_offset != FORCED_DYNAMIC_TLS_OFFSET)) { __rtld_lock_lock_recursive (GL(dl_load_tls_lock)); - if (__glibc_likely (the_map->l_tls_offset == NO_TLS_OFFSET)) + if (__glibc_likely (the_map->l_rw->l_tls_offset == NO_TLS_OFFSET)) { - the_map->l_tls_offset = FORCED_DYNAMIC_TLS_OFFSET; + the_map->l_rw->l_tls_offset = FORCED_DYNAMIC_TLS_OFFSET; __rtld_lock_unlock_recursive (GL(dl_load_tls_lock)); } - else if (__glibc_likely (the_map->l_tls_offset + else if (__glibc_likely (the_map->l_rw->l_tls_offset != FORCED_DYNAMIC_TLS_OFFSET)) { #if TLS_TCB_AT_TP - void *p = (char *) THREAD_SELF - the_map->l_tls_offset; + void *p = (char *) THREAD_SELF - the_map->l_rw->l_tls_offset; #elif TLS_DTV_AT_TP - void *p = (char *) THREAD_SELF + the_map->l_tls_offset + TLS_PRE_TCB_SIZE; + void *p = ((char *) THREAD_SELF + the_map->l_rw->l_tls_offset + + TLS_PRE_TCB_SIZE); #else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" #endif @@ -1223,9 +1224,9 @@ static inline void __attribute__((always_inline)) init_one_static_tls (struct pthread *curp, struct link_map *map) { # if TLS_TCB_AT_TP - void *dest = (char *) curp - map->l_tls_offset; + void *dest = (char *) curp - map->l_rw->l_tls_offset; # elif TLS_DTV_AT_TP - void *dest = (char *) curp + map->l_tls_offset + TLS_PRE_TCB_SIZE; + void *dest = (char *) curp + map->l_rw->l_tls_offset + TLS_PRE_TCB_SIZE; # else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" # endif diff --git a/elf/get-dynamic-info.h b/elf/get-dynamic-info.h index d3d830e86c..3154bd69b0 100644 --- a/elf/get-dynamic-info.h +++ b/elf/get-dynamic-info.h @@ -163,7 +163,7 @@ elf_get_dynamic_info (struct link_map *l, bool bootstrap, { l->l_flags_1 = info[VERSYMIDX (DT_FLAGS_1)]->d_un.d_val; if (l->l_flags_1 & DF_1_NODELETE) - l->l_nodelete_pending = true; + l->l_rw->l_nodelete_pending = true; /* Only DT_1_SUPPORTED_MASK bits are supported, and we would like to 
assert this, but we can't. Users have been setting diff --git a/elf/loadtest.c b/elf/loadtest.c index b5eab5e93c..2da6279f2f 100644 --- a/elf/loadtest.c +++ b/elf/loadtest.c @@ -78,7 +78,7 @@ static const struct for (map = MAPS; map != NULL; map = map->l_next) \ if (map->l_type == lt_loaded) \ printf ("name = \"%s\", direct_opencount = %d\n", \ - map->l_name, (int) map->l_direct_opencount); \ + map->l_name, (int) map->l_rw->l_direct_opencount); \ fflush (stdout); \ } \ while (0) @@ -190,7 +190,7 @@ main (int argc, char *argv[]) if (map->l_type == lt_loaded) { printf ("name = \"%s\", direct_opencount = %d\n", - map->l_name, (int) map->l_direct_opencount); + map->l_name, (int) map->l_rw->l_direct_opencount); result = 1; } diff --git a/elf/neededtest.c b/elf/neededtest.c index 3cea499314..eccf4cbb10 100644 --- a/elf/neededtest.c +++ b/elf/neededtest.c @@ -29,7 +29,8 @@ check_loaded_objects (const char **loaded) for (lm = MAPS; lm; lm = lm->l_next) { if (lm->l_name && lm->l_name[0]) - printf(" %s, count = %d\n", lm->l_name, (int) lm->l_direct_opencount); + printf(" %s, count = %d\n", lm->l_name, + (int) lm->l_rw->l_direct_opencount); if (lm->l_type == lt_loaded && lm->l_name) { int match = 0; diff --git a/elf/neededtest2.c b/elf/neededtest2.c index 17c75f2ba3..aa695cd4bb 100644 --- a/elf/neededtest2.c +++ b/elf/neededtest2.c @@ -29,7 +29,8 @@ check_loaded_objects (const char **loaded) for (lm = MAPS; lm; lm = lm->l_next) { if (lm->l_name && lm->l_name[0]) - printf(" %s, count = %d\n", lm->l_name, (int) lm->l_direct_opencount); + printf(" %s, count = %d\n", lm->l_name, + (int) lm->l_rw->l_direct_opencount); if (lm->l_type == lt_loaded && lm->l_name) { int match = 0; diff --git a/elf/neededtest3.c b/elf/neededtest3.c index 41970cf2c7..0b9ee75be3 100644 --- a/elf/neededtest3.c +++ b/elf/neededtest3.c @@ -29,7 +29,8 @@ check_loaded_objects (const char **loaded) for (lm = MAPS; lm; lm = lm->l_next) { if (lm->l_name && lm->l_name[0]) - printf(" %s, count = %d\n", lm->l_name, 
(int) lm->l_direct_opencount); + printf(" %s, count = %d\n", lm->l_name, + (int) lm->l_rw->l_direct_opencount); if (lm->l_type == lt_loaded && lm->l_name) { int match = 0; diff --git a/elf/neededtest4.c b/elf/neededtest4.c index 0ae0b7ff47..cb4f574265 100644 --- a/elf/neededtest4.c +++ b/elf/neededtest4.c @@ -29,7 +29,8 @@ check_loaded_objects (const char **loaded) for (lm = MAPS; lm; lm = lm->l_next) { if (lm->l_name && lm->l_name[0]) - printf(" %s, count = %d\n", lm->l_name, (int) lm->l_direct_opencount); + printf(" %s, count = %d\n", lm->l_name, + (int) lm->l_rw->l_direct_opencount); if (lm->l_type == lt_loaded && lm->l_name) { int match = 0; diff --git a/elf/rtld.c b/elf/rtld.c index 115f1da37f..1bb369ef2b 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -460,6 +460,17 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) interfere with __rtld_static_init. */ GLRO (dl_find_object) = &_dl_find_object; + /* Pre-allocated read-write status of the ld.so link map. */ + static struct + { + struct link_map_rw l; + struct auditstate _dl_rtld_auditstate[DL_NNS]; + } rtld_map_rw; + _dl_rtld_map.l_rw = &rtld_map_rw.l; +#if NO_TLS_OFFSET != 0 + _dl_rtld_map.l_rw->l_tls_offset = NO_TLS_OFFSET; +#endif + /* If it hasn't happen yet record the startup time. */ rtld_timer_start (&start_time); #if !defined DONT_USE_BOOTSTRAP_MAP @@ -482,7 +493,7 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) /* Copy the TLS related data if necessary. 
*/ #ifndef DONT_USE_BOOTSTRAP_MAP # if NO_TLS_OFFSET != 0 - _dl_rtld_map.l_tls_offset = NO_TLS_OFFSET; + _dl_rtld_map.l_rw->l_tls_offset = NO_TLS_OFFSET; # endif #endif @@ -549,10 +560,6 @@ _dl_start (void *arg) bootstrap_map.l_ld_readonly = DL_RO_DYN_SECTION; elf_get_dynamic_info (&bootstrap_map, true, false); -#if NO_TLS_OFFSET != 0 - bootstrap_map.l_tls_offset = NO_TLS_OFFSET; -#endif - #ifdef ELF_MACHINE_BEFORE_RTLD_RELOC ELF_MACHINE_BEFORE_RTLD_RELOC (&bootstrap_map, bootstrap_map.l_info); #endif @@ -1100,7 +1107,7 @@ rtld_setup_main_map (struct link_map *main_map) /* Perhaps the executable has no PT_LOAD header entries at all. */ main_map->l_map_start = ~0; /* And it was opened directly. */ - ++main_map->l_direct_opencount; + ++main_map->l_rw->l_direct_opencount; main_map->l_contiguous = 1; /* A PT_LOAD segment at an unexpected address will clear the diff --git a/elf/tst-tls_tp_offset.c b/elf/tst-tls_tp_offset.c index a8faebc0eb..ff9a89a125 100644 --- a/elf/tst-tls_tp_offset.c +++ b/elf/tst-tls_tp_offset.c @@ -34,7 +34,8 @@ do_test (void) printf ("thread variable address: %p\n", &thread_var); printf ("thread pointer address: %p\n", __thread_pointer ()); printf ("pthread_self address: %p\n", (void *) pthread_self ()); - ptrdiff_t block_offset = ((struct link_map *) _r_debug.r_map)->l_tls_offset; + ptrdiff_t block_offset + = ((struct link_map *) _r_debug.r_map)->l_rw->l_tls_offset; printf ("main program TLS block offset: %td\n", block_offset); if ((uintptr_t) &thread_var < (uintptr_t) THREAD_SELF) diff --git a/elf/unload.c b/elf/unload.c index 4566f226f8..39d7b1adac 100644 --- a/elf/unload.c +++ b/elf/unload.c @@ -15,7 +15,7 @@ for (map = MAPS; map != NULL; map = map->l_next) \ if (map->l_type == lt_loaded) \ printf ("name = \"%s\", direct_opencount = %d\n", \ - map->l_name, (int) map->l_direct_opencount); \ + map->l_name, (int) map->l_rw->l_direct_opencount); \ fflush (stdout) typedef struct diff --git a/elf/unload2.c b/elf/unload2.c index 
eef2bfd426..88fdd0a57c 100644 --- a/elf/unload2.c +++ b/elf/unload2.c @@ -12,7 +12,7 @@ for (map = MAPS; map != NULL; map = map->l_next) \ if (map->l_type == lt_loaded) \ printf ("name = \"%s\", direct_opencount = %d\n", \ - map->l_name, (int) map->l_direct_opencount); \ + map->l_name, (int) map->l_rw->l_direct_opencount); \ fflush (stdout) int diff --git a/htl/pt-alloc.c b/htl/pt-alloc.c index c0074b4447..06e36c766e 100644 --- a/htl/pt-alloc.c +++ b/htl/pt-alloc.c @@ -217,9 +217,10 @@ __pthread_init_static_tls (struct link_map *map) continue; # if TLS_TCB_AT_TP - void *dest = (char *) t->tcb - map->l_tls_offset; + void *dest = (char *) t->tcb - map->l_rw->l_tls_offset; # elif TLS_DTV_AT_TP - void *dest = (char *) t->tcb + map->l_tls_offset + TLS_PRE_TCB_SIZE; + void *dest = ((char *) t->tcb + map->l_rw->l_tls_offset + + TLS_PRE_TCB_SIZE); # else # error "Either TLS_TCB_AT_TP or TLS_DTV_AT_TP must be defined" # endif diff --git a/include/link.h b/include/link.h index 518bfd1670..2fddf315d4 100644 --- a/include/link.h +++ b/include/link.h @@ -83,6 +83,71 @@ struct r_search_path_struct extern struct r_search_path_struct __rtld_search_dirs attribute_hidden; extern struct r_search_path_struct __rtld_env_path_list attribute_hidden; + +/* Link map attributes that are always readable and writable. */ +struct link_map_rw +{ + /* List of the dependencies introduced through symbol binding. */ + struct link_map_reldeps + { + unsigned int act; + struct link_map *list[]; + } *l_reldeps; + unsigned int l_reldepsmax; + + /* Reference count for dlopen/dlclose. */ + unsigned int l_direct_opencount; + + /* For objects present at startup time: offset in the static TLS + block. For loaded objects, it can be NO_TLS_OFFSET (not yet + initialized), FORCED_DYNAMIC_TLS_OFFSET (if fully dynamic TLS is + used), or an actual TLS offset (if the static TLS allocation has + been re-used to satisfy dynamic TLS needs). 
+ + This field is written outside the general loader lock, so it has + to reside in the read-write portion of the link map. */ +#ifndef NO_TLS_OFFSET +# define NO_TLS_OFFSET 0 +#endif +#ifndef FORCED_DYNAMIC_TLS_OFFSET +# if NO_TLS_OFFSET == 0 +# define FORCED_DYNAMIC_TLS_OFFSET -1 +# elif NO_TLS_OFFSET == -1 +# define FORCED_DYNAMIC_TLS_OFFSET -2 +# else +# error "FORCED_DYNAMIC_TLS_OFFSET is not defined" +# endif +#endif + ptrdiff_t l_tls_offset; + + /* Number of thread_local objects constructed by this DSO. This is + atomically accessed and modified and is not always protected by the load + lock. See also: CONCURRENCY NOTES in cxa_thread_atexit_impl.c. */ + size_t l_tls_dtor_count; + + /* True if ELF constructors have been called. */ + bool l_init_called; + + /* NODELETE status of the map. Only valid for maps of type + lt_loaded. Lazy binding sets l_nodelete_active directly, + potentially from signal handlers. Initial loading of a + DF_1_NODELETE object sets l_nodelete_pending. Relocation may + set l_nodelete_pending as well. l_nodelete_pending maps are + promoted to l_nodelete_active status in the final stages of + dlopen, prior to calling ELF constructors. dlclose only + refuses to unload l_nodelete_active maps; the pending status is + ignored. */ + bool l_nodelete_active; + bool l_nodelete_pending; + + /* Used for dependency sorting in dlclose/_dl_fini. These need to + be writable all the time because there is no way to report an + error in _dl_fini. These flags can be moved into struct + link_map_private once _dl_fini no longer re-sorts link maps. */ + bool l_visited; + int l_idx; +}; + /* Structure describing a loaded shared object. The `l_next' and `l_prev' members form a chain of all the shared objects loaded at startup. @@ -111,6 +176,9 @@ struct link_map than one namespace. */ struct link_map *l_real; + /* Run-time writable fields. */ + struct link_map_rw *l_rw; + /* Number of the namespace this link map belongs to.
*/ Lmid_t l_ns; @@ -170,7 +238,6 @@ struct link_map const Elf_Symndx *l_buckets; }; - unsigned int l_direct_opencount; /* Reference count for dlopen/dlclose. */ enum /* Where this object came from. */ { lt_executable, /* The main executable program. */ @@ -180,12 +247,9 @@ struct link_map unsigned int l_dt_relr_ref:1; /* Nonzero if GLIBC_ABI_DT_RELR is referenced. */ unsigned int l_relocated:1; /* Nonzero if object's relocations done. */ - unsigned int l_init_called:1; /* Nonzero if DT_INIT function called. */ unsigned int l_global:1; /* Nonzero if object in _dl_global_scope. */ unsigned int l_reserved:2; /* Reserved for internal use. */ unsigned int l_main_map:1; /* Nonzero for the map of the main program. */ - unsigned int l_visited:1; /* Used internally for map dependency - graph traversal. */ unsigned int l_map_used:1; /* These two bits are used during traversal */ unsigned int l_map_done:1; /* of maps in _dl_close_worker. */ unsigned int l_phdr_allocated:1; /* Nonzero if the data structure pointed @@ -214,18 +278,6 @@ struct link_map lt_library map. */ unsigned int l_tls_in_slotinfo:1; /* TLS slotinfo updated in dlopen. */ - /* NODELETE status of the map. Only valid for maps of type - lt_loaded. Lazy binding sets l_nodelete_active directly, - potentially from signal handlers. Initial loading of an - DF_1_NODELETE object set l_nodelete_pending. Relocation may - set l_nodelete_pending as well. l_nodelete_pending maps are - promoted to l_nodelete_active status in the final stages of - dlopen, prior to calling ELF constructors. dlclose only - refuses to unload l_nodelete_active maps, the pending status is - ignored. */ - bool l_nodelete_active; - bool l_nodelete_pending; - #include /* Collected information about own RPATH directories. */ @@ -277,14 +329,6 @@ struct link_map /* List of object in order of the init and fini calls. */ struct link_map **l_initfini; - /* List of the dependencies introduced through symbol binding. 
*/ - struct link_map_reldeps - { - unsigned int act; - struct link_map *list[]; - } *l_reldeps; - unsigned int l_reldepsmax; - /* Nonzero if the DSO is used. */ unsigned int l_used; @@ -293,9 +337,6 @@ struct link_map ElfW(Word) l_flags_1; ElfW(Word) l_flags; - /* Temporarily used in `dl_close'. */ - int l_idx; - struct link_map_machine l_mach; struct @@ -318,28 +359,9 @@ struct link_map size_t l_tls_align; /* Offset of first byte module alignment. */ size_t l_tls_firstbyte_offset; -#ifndef NO_TLS_OFFSET -# define NO_TLS_OFFSET 0 -#endif -#ifndef FORCED_DYNAMIC_TLS_OFFSET -# if NO_TLS_OFFSET == 0 -# define FORCED_DYNAMIC_TLS_OFFSET -1 -# elif NO_TLS_OFFSET == -1 -# define FORCED_DYNAMIC_TLS_OFFSET -2 -# else -# error "FORCED_DYNAMIC_TLS_OFFSET is not defined" -# endif -#endif - /* For objects present at startup time: offset in the static TLS block. */ - ptrdiff_t l_tls_offset; /* Index of the module in the dtv array. */ size_t l_tls_modid; - /* Number of thread_local objects constructed by this DSO. This is - atomically accessed and modified and is not always protected by the load - lock. See also: CONCURRENCY NOTES in cxa_thread_atexit_impl.c. */ - size_t l_tls_dtor_count; - /* Information used to change permission after the relocations are done. */ ElfW(Addr) l_relro_addr; @@ -350,15 +372,16 @@ struct link_map #include -/* Information used by audit modules. For most link maps, this data - immediate follows the link map in memory. For the dynamic linker, - it is allocated separately. See link_map_audit_state in - . */ +/* Information used by audit modules. An array of size GLRO (naudit) + elements follows the l_rw link map data in memory (in some cases + conservatively extended to DL_NNS).
*/ struct auditstate { uintptr_t cookie; unsigned int bindflags; }; +_Static_assert (__alignof (struct auditstate) <= __alignof (struct link_map_rw), + "auditstate alignment compatible with link_map_rw alignment"); /* This is the hidden instance of struct r_debug_extended used by the diff --git a/nptl/Versions b/nptl/Versions index 3221de89d1..ea1ab9e5a8 100644 --- a/nptl/Versions +++ b/nptl/Versions @@ -404,8 +404,9 @@ libc { _thread_db_dtv_slotinfo_map; _thread_db_dtv_t_counter; _thread_db_dtv_t_pointer_val; + _thread_db_link_map_l_rw; _thread_db_link_map_l_tls_modid; - _thread_db_link_map_l_tls_offset; + _thread_db_link_map_rw_l_tls_offset; _thread_db_list_t_next; _thread_db_list_t_prev; _thread_db_pthread_cancelhandling; diff --git a/nptl_db/db_info.c b/nptl_db/db_info.c index fe7a750485..6748c500a6 100644 --- a/nptl_db/db_info.c +++ b/nptl_db/db_info.c @@ -38,6 +38,7 @@ typedef struct } dtv; typedef struct link_map link_map; +typedef struct link_map_rw link_map_rw; typedef struct rtld_global rtld_global; typedef struct dtv_slotinfo_list dtv_slotinfo_list; typedef struct dtv_slotinfo dtv_slotinfo; diff --git a/nptl_db/structs.def b/nptl_db/structs.def index 93c76c8c3c..90a0752000 100644 --- a/nptl_db/structs.def +++ b/nptl_db/structs.def @@ -93,7 +93,8 @@ DB_STRUCT (pthread_key_data_level2) DB_STRUCT_ARRAY_FIELD (pthread_key_data_level2, data) DB_STRUCT_FIELD (link_map, l_tls_modid) -DB_STRUCT_FIELD (link_map, l_tls_offset) +DB_STRUCT_FIELD (link_map, l_rw) +DB_STRUCT_FIELD (link_map_rw, l_tls_offset) DB_STRUCT_ARRAY_FIELD (dtv, dtv) #define pointer_val pointer.val /* Field of anonymous struct in dtv_t. */ diff --git a/nptl_db/td_thr_tlsbase.c b/nptl_db/td_thr_tlsbase.c index 3e4cdb5ee8..4a7a441e8d 100644 --- a/nptl_db/td_thr_tlsbase.c +++ b/nptl_db/td_thr_tlsbase.c @@ -191,9 +191,15 @@ td_thr_tlsbase (const td_thrhandle_t *th, /* Is the DTV current enough? */ if (dtvgen < modgen) { - try_static_tls: - /* If the module uses Static TLS, we're still good. 
*/ - err = DB_GET_FIELD (temp, th->th_ta_p, map, link_map, l_tls_offset, 0); + try_static_tls:; + /* If the module uses Static TLS, we're still good. Follow the + l_rw pointer to l_tls_offset. */ + psaddr_t l_rw; + err = DB_GET_FIELD (l_rw, th->th_ta_p, map, link_map, l_rw, 0); + if (err != TD_OK) + return err; + err = DB_GET_FIELD (temp, th->th_ta_p, l_rw, link_map_rw, + l_tls_offset, 0); if (err != TD_OK) return err; ptrdiff_t tlsoff = (uintptr_t)temp; diff --git a/stdlib/cxa_thread_atexit_impl.c b/stdlib/cxa_thread_atexit_impl.c index 7e7ac774a4..3e23fbc626 100644 --- a/stdlib/cxa_thread_atexit_impl.c +++ b/stdlib/cxa_thread_atexit_impl.c @@ -133,7 +133,7 @@ __cxa_thread_atexit_impl (dtor_func func, void *obj, void *dso_symbol) _dl_close_worker is protected by the dl_load_lock. The execution in __call_tls_dtors does not really depend on this value beyond the fact that it should be atomic, so Relaxed MO should be sufficient. */ - atomic_fetch_add_relaxed (&lm_cache->l_tls_dtor_count, 1); + atomic_fetch_add_relaxed (&lm_cache->l_rw->l_tls_dtor_count, 1); __rtld_lock_unlock_recursive (GL(dl_load_lock)); new->map = lm_cache; @@ -159,7 +159,7 @@ __call_tls_dtors (void) l_tls_dtor_count decrement. That way, we protect this access from a potential DSO unload in _dl_close_worker, which happens when l_tls_dtor_count is 0. See CONCURRENCY NOTES for more detail. 
*/ - atomic_fetch_add_release (&cur->map->l_tls_dtor_count, -1); + atomic_fetch_add_release (&cur->map->l_rw->l_tls_dtor_count, -1); free (cur); } } diff --git a/sysdeps/aarch64/dl-machine.h b/sysdeps/aarch64/dl-machine.h index bb8f8a9bb1..266ccc2fa0 100644 --- a/sysdeps/aarch64/dl-machine.h +++ b/sysdeps/aarch64/dl-machine.h @@ -249,7 +249,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], else # endif { - td->arg = (void*)(sym->st_value + sym_map->l_tls_offset + td->arg = (void*)(sym->st_value + + sym_map->l_rw->l_tls_offset + reloc->r_addend); td->entry = _dl_tlsdesc_return; } @@ -274,7 +275,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { CHECK_STATIC_TLS (map, sym_map); *reloc_addr = - sym->st_value + reloc->r_addend + sym_map->l_tls_offset; + sym->st_value + reloc->r_addend + sym_map->l_rw->l_tls_offset; } break; diff --git a/sysdeps/alpha/dl-machine.h b/sysdeps/alpha/dl-machine.h index b9de9164c7..eb2dc57518 100644 --- a/sysdeps/alpha/dl-machine.h +++ b/sysdeps/alpha/dl-machine.h @@ -401,12 +401,12 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], else if (r_type == R_ALPHA_TPREL64) { # ifdef RTLD_BOOTSTRAP - *reloc_addr = sym_raw_value + map->l_tls_offset; + *reloc_addr = sym_raw_value + map->l_rw->l_tls_offset; # else if (sym_map) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym_raw_value + sym_map->l_tls_offset; + *reloc_addr = sym_raw_value + sym_map->l_rw->l_tls_offset; } # endif } diff --git a/sysdeps/arc/dl-machine.h b/sysdeps/arc/dl-machine.h index 044cdf1063..8f825d2a5d 100644 --- a/sysdeps/arc/dl-machine.h +++ b/sysdeps/arc/dl-machine.h @@ -284,7 +284,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym_map->l_tls_offset + sym->st_value + reloc->r_addend; + *reloc_addr = (sym_map->l_rw->l_tls_offset + sym->st_value + + reloc->r_addend); } break; diff --git 
a/sysdeps/arm/dl-machine.h b/sysdeps/arm/dl-machine.h index e597c41348..1fbff8a052 100644 --- a/sysdeps/arm/dl-machine.h +++ b/sysdeps/arm/dl-machine.h @@ -394,7 +394,7 @@ elf_machine_rel (struct link_map *map, struct r_scope_elem *scope[], # endif # endif { - td->argument.value = value + sym_map->l_tls_offset; + td->argument.value = value + sym_map->l_rw->l_tls_offset; td->entry = _dl_tlsdesc_return; } } @@ -424,7 +424,7 @@ elf_machine_rel (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr += sym->st_value + sym_map->l_tls_offset; + *reloc_addr += sym->st_value + sym_map->l_rw->l_tls_offset; } break; case R_ARM_IRELATIVE: diff --git a/sysdeps/csky/dl-machine.h b/sysdeps/csky/dl-machine.h index dd8ff4a647..47a3e90163 100644 --- a/sysdeps/csky/dl-machine.h +++ b/sysdeps/csky/dl-machine.h @@ -302,7 +302,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = (sym->st_value + sym_map->l_tls_offset + *reloc_addr = (sym->st_value + sym_map->l_rw->l_tls_offset + reloc->r_addend); } break; diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index e8418973ed..6973fe6dbe 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -1342,15 +1342,9 @@ is_rtld_link_map (const struct link_map *l) static inline struct auditstate * link_map_audit_state (struct link_map *l, size_t index) { - if (is_rtld_link_map (l)) - /* The auditstate array is stored separately. */ - return _dl_rtld_auditstate + index; - else - { - /* The auditstate array follows the link map in memory. */ - struct auditstate *base = (struct auditstate *) (l + 1); - return &base[index]; - } + /* The auditstate array follows the read-write link map part in memory. */ + struct auditstate *base = (struct auditstate *) (l->l_rw + 1); + return &base[index]; } /* Call the la_objsearch from the audit modules from the link map L. 
If diff --git a/sysdeps/hppa/dl-machine.h b/sysdeps/hppa/dl-machine.h index dd2cf0a050..b285fffb00 100644 --- a/sysdeps/hppa/dl-machine.h +++ b/sysdeps/hppa/dl-machine.h @@ -715,7 +715,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - value = sym_map->l_tls_offset + sym->st_value + reloc->r_addend; + value = (sym_map->l_rw->l_tls_offset + sym->st_value + + reloc->r_addend); } break; #endif /* use TLS */ diff --git a/sysdeps/i386/dl-machine.h b/sysdeps/i386/dl-machine.h index 87b77429dd..f8acc5bdd7 100644 --- a/sysdeps/i386/dl-machine.h +++ b/sysdeps/i386/dl-machine.h @@ -353,7 +353,8 @@ and creates an unsatisfiable circular dependency.\n", # endif # endif { - td->arg = (void*)(sym->st_value - sym_map->l_tls_offset + td->arg = (void*)(sym->st_value + - sym_map->l_rw->l_tls_offset + (ElfW(Word))td->arg); td->entry = _dl_tlsdesc_return; } @@ -363,7 +364,7 @@ and creates an unsatisfiable circular dependency.\n", case R_386_TLS_TPOFF32: /* The offset is positive, backward from the thread pointer. */ # ifdef RTLD_BOOTSTRAP - *reloc_addr += map->l_tls_offset - sym->st_value; + *reloc_addr += map->l_rw->l_tls_offset - sym->st_value; # else /* We know the offset of object the symbol is contained in. It is a positive value which will be subtracted from the @@ -372,14 +373,14 @@ and creates an unsatisfiable circular dependency.\n", if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr += sym_map->l_tls_offset - sym->st_value; + *reloc_addr += sym_map->l_rw->l_tls_offset - sym->st_value; } # endif break; case R_386_TLS_TPOFF: /* The offset is negative, forward from the thread pointer. */ # ifdef RTLD_BOOTSTRAP - *reloc_addr += sym->st_value - map->l_tls_offset; + *reloc_addr += sym->st_value - map->l_rw->l_tls_offset; # else /* We know the offset of object the symbol is contained in. 
It is a negative value which will be added to the @@ -387,7 +388,7 @@ and creates an unsatisfiable circular dependency.\n", if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr += sym->st_value - sym_map->l_tls_offset; + *reloc_addr += sym->st_value - sym_map->l_rw->l_tls_offset; } # endif break; diff --git a/sysdeps/loongarch/dl-tls.h b/sysdeps/loongarch/dl-tls.h index b25d599882..e7ee817408 100644 --- a/sysdeps/loongarch/dl-tls.h +++ b/sysdeps/loongarch/dl-tls.h @@ -37,7 +37,7 @@ extern void *__tls_get_addr (tls_index *ti); /* Compute the value for a GOTTPREL reloc. */ #define TLS_TPREL_VALUE(sym_map, sym) \ - ((sym_map)->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) + ((sym_map)->l_rw->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) /* Compute the value for a DTPREL reloc. */ #define TLS_DTPREL_VALUE(sym) ((sym)->st_value - TLS_DTV_OFFSET) diff --git a/sysdeps/m68k/dl-tls.h b/sysdeps/m68k/dl-tls.h index 85817fcce9..96a9f5edac 100644 --- a/sysdeps/m68k/dl-tls.h +++ b/sysdeps/m68k/dl-tls.h @@ -35,7 +35,7 @@ typedef struct /* Compute the value for a TPREL reloc. */ #define TLS_TPREL_VALUE(sym_map, sym, reloc) \ - ((sym_map)->l_tls_offset + (sym)->st_value + (reloc)->r_addend \ + ((sym_map)->l_rw->l_tls_offset + (sym)->st_value + (reloc)->r_addend \ - TLS_TP_OFFSET) /* Compute the value for a DTPREL reloc. 
*/ diff --git a/sysdeps/microblaze/dl-machine.h b/sysdeps/microblaze/dl-machine.h index f1c4f7c519..a1cf1b66ce 100644 --- a/sysdeps/microblaze/dl-machine.h +++ b/sysdeps/microblaze/dl-machine.h @@ -262,7 +262,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym->st_value + sym_map->l_tls_offset + reloc->r_addend; + *reloc_addr = (sym->st_value + sym_map->l_rw->l_tls_offset + + reloc->r_addend); } } #endif diff --git a/sysdeps/mips/dl-tls.h b/sysdeps/mips/dl-tls.h index c1859719f5..1f3adf8401 100644 --- a/sysdeps/mips/dl-tls.h +++ b/sysdeps/mips/dl-tls.h @@ -35,7 +35,7 @@ typedef struct /* Compute the value for a GOTTPREL reloc. */ #define TLS_TPREL_VALUE(sym_map, sym) \ - ((sym_map)->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) + ((sym_map)->l_rw->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) /* Compute the value for a DTPREL reloc. */ #define TLS_DTPREL_VALUE(sym) \ diff --git a/sysdeps/or1k/dl-machine.h b/sysdeps/or1k/dl-machine.h index c91f55554c..4680232a18 100644 --- a/sysdeps/or1k/dl-machine.h +++ b/sysdeps/or1k/dl-machine.h @@ -250,13 +250,13 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], case R_OR1K_TLS_TPOFF: # ifdef RTLD_BOOTSTRAP *reloc_addr = sym->st_value + reloc->r_addend + - map->l_tls_offset - TLS_TCB_SIZE; + map->l_rw->l_tls_offset - TLS_TCB_SIZE; # else if (sym_map != NULL) { CHECK_STATIC_TLS (map, sym_map); *reloc_addr = sym->st_value + reloc->r_addend + - sym_map->l_tls_offset - TLS_TCB_SIZE; + sym_map->l_rw->l_tls_offset - TLS_TCB_SIZE; } # endif break; diff --git a/sysdeps/powerpc/dl-tls.h b/sysdeps/powerpc/dl-tls.h index 52d67a1fa1..8c0b9cbaff 100644 --- a/sysdeps/powerpc/dl-tls.h +++ b/sysdeps/powerpc/dl-tls.h @@ -35,7 +35,7 @@ typedef struct /* Compute the value for a @tprel reloc. 
*/ #define TLS_TPREL_VALUE(sym_map, sym, reloc) \ - ((sym_map)->l_tls_offset + (sym)->st_value + (reloc)->r_addend \ + ((sym_map)->l_rw->l_tls_offset + (sym)->st_value + (reloc)->r_addend \ - TLS_TP_OFFSET) /* Compute the value for a @dtprel reloc. */ diff --git a/sysdeps/powerpc/powerpc32/dl-machine.h b/sysdeps/powerpc/powerpc32/dl-machine.h index 9f95b23233..5d69719148 100644 --- a/sysdeps/powerpc/powerpc32/dl-machine.h +++ b/sysdeps/powerpc/powerpc32/dl-machine.h @@ -354,7 +354,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (!NOT_BOOTSTRAP) { reloc_addr[0] = 0; - reloc_addr[1] = (sym_map->l_tls_offset - TLS_TP_OFFSET + reloc_addr[1] = (sym_map->l_rw->l_tls_offset - TLS_TP_OFFSET + TLS_DTV_OFFSET); break; } @@ -368,7 +368,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { reloc_addr[0] = 0; /* Set up for local dynamic. */ - reloc_addr[1] = (sym_map->l_tls_offset - TLS_TP_OFFSET + reloc_addr[1] = (sym_map->l_rw->l_tls_offset - TLS_TP_OFFSET + TLS_DTV_OFFSET); break; } diff --git a/sysdeps/powerpc/powerpc64/dl-machine.h b/sysdeps/powerpc/powerpc64/dl-machine.h index d8d7c8b763..116adc079d 100644 --- a/sysdeps/powerpc/powerpc64/dl-machine.h +++ b/sysdeps/powerpc/powerpc64/dl-machine.h @@ -748,7 +748,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { #ifdef RTLD_BOOTSTRAP reloc_addr[0] = 0; - reloc_addr[1] = (sym_map->l_tls_offset - TLS_TP_OFFSET + reloc_addr[1] = (sym_map->l_rw->l_tls_offset - TLS_TP_OFFSET + TLS_DTV_OFFSET); return; #else @@ -762,7 +762,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { reloc_addr[0] = 0; /* Set up for local dynamic. 
*/ - reloc_addr[1] = (sym_map->l_tls_offset - TLS_TP_OFFSET + reloc_addr[1] = (sym_map->l_rw->l_tls_offset - TLS_TP_OFFSET + TLS_DTV_OFFSET); return; } diff --git a/sysdeps/riscv/dl-tls.h b/sysdeps/riscv/dl-tls.h index b8931a0fa5..6d6ccf88a6 100644 --- a/sysdeps/riscv/dl-tls.h +++ b/sysdeps/riscv/dl-tls.h @@ -35,7 +35,7 @@ typedef struct /* Compute the value for a GOTTPREL reloc. */ #define TLS_TPREL_VALUE(sym_map, sym) \ - ((sym_map)->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) + ((sym_map)->l_rw->l_tls_offset + (sym)->st_value - TLS_TP_OFFSET) /* Compute the value for a DTPREL reloc. */ #define TLS_DTPREL_VALUE(sym) \ diff --git a/sysdeps/s390/s390-32/dl-machine.h b/sysdeps/s390/s390-32/dl-machine.h index d317f679d1..a0e008f459 100644 --- a/sysdeps/s390/s390-32/dl-machine.h +++ b/sysdeps/s390/s390-32/dl-machine.h @@ -339,7 +339,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], case R_390_TLS_TPOFF: /* The offset is negative, forward from the thread pointer. */ #ifdef RTLD_BOOTSTRAP - *reloc_addr = sym->st_value + reloc->r_addend - map->l_tls_offset; + *reloc_addr = (sym->st_value + reloc->r_addend + - map->l_rw->l_tls_offset); #else /* We know the offset of the object the symbol is contained in. It is a negative value which will be added to the @@ -348,7 +349,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { CHECK_STATIC_TLS (map, sym_map); *reloc_addr = (sym->st_value + reloc->r_addend - - sym_map->l_tls_offset); + - sym_map->l_rw->l_tls_offset); } #endif break; diff --git a/sysdeps/s390/s390-64/dl-machine.h b/sysdeps/s390/s390-64/dl-machine.h index d6028630b7..5900d12332 100644 --- a/sysdeps/s390/s390-64/dl-machine.h +++ b/sysdeps/s390/s390-64/dl-machine.h @@ -321,7 +321,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], case R_390_TLS_TPOFF: /* The offset is negative, forward from the thread pointer. 
*/ #ifdef RTLD_BOOTSTRAP - *reloc_addr = sym->st_value + reloc->r_addend - map->l_tls_offset; + *reloc_addr = (sym->st_value + reloc->r_addend + - map->l_rw->l_tls_offset); #else /* We know the offset of the object the symbol is contained in. It is a negative value which will be added to the @@ -330,7 +331,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], { CHECK_STATIC_TLS (map, sym_map); *reloc_addr = (sym->st_value + reloc->r_addend - - sym_map->l_tls_offset); + - sym_map->l_rw->l_tls_offset); } #endif break; diff --git a/sysdeps/sh/dl-machine.h b/sysdeps/sh/dl-machine.h index 2c07474bb4..e93431d107 100644 --- a/sysdeps/sh/dl-machine.h +++ b/sysdeps/sh/dl-machine.h @@ -363,7 +363,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], case R_SH_TLS_TPOFF32: /* The offset is positive, afterward from the thread pointer. */ #ifdef RTLD_BOOTSTRAP - *reloc_addr = map->l_tls_offset + sym->st_value + reloc->r_addend; + *reloc_addr = (map->l_rw->l_tls_offset + sym->st_value + + reloc->r_addend); #else /* We know the offset of object the symbol is contained in. 
It is a positive value which will be added to the thread @@ -372,8 +373,8 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym_map->l_tls_offset + sym->st_value - + reloc->r_addend; + *reloc_addr = (sym_map->l_rw->l_tls_offset + sym->st_value + + reloc->r_addend); } #endif break; diff --git a/sysdeps/sparc/sparc32/dl-machine.h b/sysdeps/sparc/sparc32/dl-machine.h index 0b49766801..130db5aef0 100644 --- a/sysdeps/sparc/sparc32/dl-machine.h +++ b/sysdeps/sparc/sparc32/dl-machine.h @@ -371,7 +371,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym->st_value - sym_map->l_tls_offset + *reloc_addr = sym->st_value - sym_map->l_rw->l_tls_offset + reloc->r_addend; } break; @@ -381,7 +381,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - value = sym->st_value - sym_map->l_tls_offset + value = sym->st_value - sym_map->l_rw->l_tls_offset + reloc->r_addend; if (r_type == R_SPARC_TLS_LE_HIX22) *reloc_addr = (*reloc_addr & 0xffc00000) | ((~value) >> 10); diff --git a/sysdeps/sparc/sparc64/dl-machine.h b/sysdeps/sparc/sparc64/dl-machine.h index b1ccf2320c..2309eea151 100644 --- a/sysdeps/sparc/sparc64/dl-machine.h +++ b/sysdeps/sparc/sparc64/dl-machine.h @@ -387,7 +387,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - *reloc_addr = sym->st_value - sym_map->l_tls_offset + *reloc_addr = sym->st_value - sym_map->l_rw->l_tls_offset + reloc->r_addend; } break; @@ -397,7 +397,7 @@ elf_machine_rela (struct link_map *map, struct r_scope_elem *scope[], if (sym != NULL) { CHECK_STATIC_TLS (map, sym_map); - value = sym->st_value - sym_map->l_tls_offset + value = sym->st_value - sym_map->l_rw->l_tls_offset + reloc->r_addend; if (r_type == 
R_SPARC_TLS_LE_HIX22) *(unsigned int *)reloc_addr = diff --git a/sysdeps/x86/dl-prop.h b/sysdeps/x86/dl-prop.h index 8625751427..87033831f2 100644 --- a/sysdeps/x86/dl-prop.h +++ b/sysdeps/x86/dl-prop.h @@ -40,7 +40,7 @@ dl_isa_level_check (struct link_map *m, const char *program) l = m->l_initfini[i]; /* Skip ISA level check if functions have been executed. */ - if (l->l_init_called) + if (l->l_rw->l_init_called) continue; #ifdef SHARED diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h index 572a1a7395..681e2bc482 100644 --- a/sysdeps/x86_64/dl-machine.h +++ b/sysdeps/x86_64/dl-machine.h @@ -383,7 +383,8 @@ and creates an unsatisfiable circular dependency.\n", else # endif { - td->arg = (void*)(sym->st_value - sym_map->l_tls_offset + td->arg = (void*)(sym->st_value + - sym_map->l_rw->l_tls_offset + reloc->r_addend); td->entry = _dl_tlsdesc_return; } @@ -399,7 +400,7 @@ and creates an unsatisfiable circular dependency.\n", It is a negative value which will be added to the thread pointer. 
value = (sym->st_value + reloc->r_addend - - sym_map->l_tls_offset); + - sym_map->l_rw->l_tls_offset); # ifdef __ILP32__ /* The symbol and addend values are 32 bits but the GOT entry is 64 bits wide and the whole 64-bit entry is used From patchwork Sun Feb 2 21:13:38 2025 X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105880
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 09/14] elf:
Introduce GLPM accessor for the protected memory area Date: Sun, 02 Feb 2025 22:13:38 +0100 The memory area is still allocated in the data segment, so this change is preparatory only. --- elf/dl-load.c | 10 +-- elf/rtld.c | 129 +++++++++++++++++++------------------ elf/setup-vdso.h | 4 +- sysdeps/generic/ldsodefs.h | 22 +++++-- 4 files changed, 91 insertions(+), 74 deletions(-) diff --git a/elf/dl-load.c b/elf/dl-load.c index 2e6c58dfcc..06cbedd8a7 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -733,7 +733,7 @@ _dl_init_paths (const char *llp, const char *source, l = GL(dl_ns)[LM_ID_BASE]._ns_loaded; #ifdef SHARED if (l == NULL) - l = &_dl_rtld_map; + l = &GLPM(dl_rtld_map); #endif assert (l->l_type != lt_loaded); @@ -988,8 +988,8 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, /* When loading into a namespace other than the base one we must avoid loading ld.so since there can only be one copy. Ever.
*/ if (__glibc_unlikely (nsid != LM_ID_BASE) - && (_dl_file_id_match_p (&id, &_dl_rtld_map.l_file_id) - || _dl_name_match_p (name, &_dl_rtld_map))) + && (_dl_file_id_match_p (&id, &GLPM(dl_rtld_map).l_file_id) + || _dl_name_match_p (name, &GLPM(dl_rtld_map)))) { /* This is indeed ld.so. Create a new link_map which refers to the real one for almost everything. */ @@ -998,7 +998,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, goto fail_new; /* Refer to the real descriptor. */ - l->l_real = &_dl_rtld_map; + l->l_real = &GLPM(dl_rtld_map); /* Copy l_addr and l_ld to avoid a GDB warning with dlmopen(). */ l->l_addr = l->l_real->l_addr; @@ -2034,7 +2034,7 @@ _dl_map_new_object (struct link_map *loader, const char *name, l = (loader ?: GL(dl_ns)[LM_ID_BASE]._ns_loaded # ifdef SHARED - ?: &_dl_rtld_map + ?: &GLPM(dl_rtld_map) # endif ); diff --git a/elf/rtld.c b/elf/rtld.c index 1bb369ef2b..0bc7d9dbcd 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -345,6 +345,7 @@ struct rtld_global _rtld_global = extern struct rtld_global _rtld_local __attribute__ ((alias ("_rtld_global"), visibility ("hidden"))); +struct rtld_protmem _rtld_protmem; /* This variable is similar to _rtld_local, but all values are read-only after relocation. */ @@ -466,9 +467,9 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) struct link_map_rw l; struct auditstate _dl_rtld_auditstate[DL_NNS]; } rtld_map_rw; - _dl_rtld_map.l_rw = &rtld_map_rw.l; + GLPM(dl_rtld_map).l_rw = &rtld_map_rw.l; #if NO_TLS_OFFSET != 0 - _dl_rtld_map.l_rw->l_tls_offset = NO_TLS_OFFSET; + GLPM(dl_rtld_map).l_rw->l_tls_offset = NO_TLS_OFFSET; #endif /* If it hasn't happen yet record the startup time. */ @@ -479,21 +480,22 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) /* Transfer data about ourselves to the permanent link_map structure. 
*/ #ifndef DONT_USE_BOOTSTRAP_MAP - _dl_rtld_map.l_addr = info->l.l_addr; - _dl_rtld_map.l_ld = info->l.l_ld; - _dl_rtld_map.l_ld_readonly = info->l.l_ld_readonly; - memcpy (_dl_rtld_map.l_info, info->l.l_info, sizeof _dl_rtld_map.l_info); - _dl_rtld_map.l_mach = info->l.l_mach; - _dl_rtld_map.l_relocated = 1; + GLPM(dl_rtld_map).l_addr = info->l.l_addr; + GLPM(dl_rtld_map).l_ld = info->l.l_ld; + GLPM(dl_rtld_map).l_ld_readonly = info->l.l_ld_readonly; + memcpy (GLPM(dl_rtld_map).l_info, info->l.l_info, + sizeof GLPM(dl_rtld_map).l_info); + GLPM(dl_rtld_map).l_mach = info->l.l_mach; + GLPM(dl_rtld_map).l_relocated = 1; #endif - _dl_setup_hash (&_dl_rtld_map); - _dl_rtld_map.l_real = &_dl_rtld_map; - _dl_rtld_map.l_map_start = (ElfW(Addr)) &__ehdr_start; - _dl_rtld_map.l_map_end = (ElfW(Addr)) _end; + _dl_setup_hash (&GLPM(dl_rtld_map)); + GLPM(dl_rtld_map).l_real = &GLPM(dl_rtld_map); + GLPM(dl_rtld_map).l_map_start = (ElfW(Addr)) &__ehdr_start; + GLPM(dl_rtld_map).l_map_end = (ElfW(Addr)) _end; /* Copy the TLS related data if necessary. */ #ifndef DONT_USE_BOOTSTRAP_MAP # if NO_TLS_OFFSET != 0 - _dl_rtld_map.l_rw->l_tls_offset = NO_TLS_OFFSET; + GLPM(dl_rtld_map).l_rw->l_tls_offset = NO_TLS_OFFSET; # endif #endif @@ -520,7 +522,7 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) } #ifdef DONT_USE_BOOTSTRAP_MAP -# define bootstrap_map _dl_rtld_map +# define bootstrap_map GLPM(dl_rtld_map) #else # define bootstrap_map info.l #endif @@ -1024,8 +1026,8 @@ ERROR: audit interface '%s' requires version %d (maximum supported version %d); /* The dynamic linker link map is statically allocated, so the cookie in _dl_new_object has not happened. 
*/ - link_map_audit_state (&_dl_rtld_map, GLRO (dl_naudit))->cookie - = (intptr_t) &_dl_rtld_map; + link_map_audit_state (&GLPM(dl_rtld_map), GLRO (dl_naudit))->cookie + = (intptr_t) &GLPM(dl_rtld_map); ++GLRO(dl_naudit); @@ -1052,7 +1054,7 @@ load_audit_modules (struct link_map *main_map, struct audit_list *audit_list) if (GLRO(dl_naudit) > 0) { _dl_audit_objopen (main_map, LM_ID_BASE); - _dl_audit_objopen (&_dl_rtld_map, LM_ID_BASE); + _dl_audit_objopen (&GLPM(dl_rtld_map), LM_ID_BASE); } } @@ -1062,7 +1064,7 @@ static void rtld_chain_load (struct link_map *main_map, char *argv0) { /* The dynamic loader run against itself. */ - const char *rtld_soname = l_soname (&_dl_rtld_map); + const char *rtld_soname = l_soname (&GLPM(dl_rtld_map)); if (l_soname (main_map) != NULL && strcmp (rtld_soname, l_soname (main_map)) == 0) _dl_fatal_printf ("%s: loader cannot load itself\n", rtld_soname); @@ -1149,7 +1151,7 @@ rtld_setup_main_map (struct link_map *main_map) _dl_rtld_libname.name = ((const char *) main_map->l_addr + ph->p_vaddr); /* _dl_rtld_libname.next = NULL; Already zero. */ - _dl_rtld_map.l_libname = &_dl_rtld_libname; + GLPM(dl_rtld_map).l_libname = &_dl_rtld_libname; has_interp = true; break; @@ -1231,16 +1233,16 @@ rtld_setup_main_map (struct link_map *main_map) = (char *) main_map->l_tls_initimage + main_map->l_addr; if (! main_map->l_map_end) main_map->l_map_end = ~0; - if (! _dl_rtld_map.l_libname && _dl_rtld_map.l_name) + if (! GLPM(dl_rtld_map).l_libname && GLPM(dl_rtld_map).l_name) { /* We were invoked directly, so the program might not have a PT_INTERP. */ - _dl_rtld_libname.name = _dl_rtld_map.l_name; + _dl_rtld_libname.name = GLPM(dl_rtld_map).l_name; /* _dl_rtld_libname.next = NULL; Already zero. */ - _dl_rtld_map.l_libname = &_dl_rtld_libname; + GLPM(dl_rtld_map).l_libname = &_dl_rtld_libname; } else - assert (_dl_rtld_map.l_libname); /* How else did we get here? */ + assert (GLPM(dl_rtld_map).l_libname); /* How else did we get here? 
*/ return has_interp; } @@ -1352,7 +1354,7 @@ dl_main (const ElfW(Phdr) *phdr, char **orig_argv = _dl_argv; /* Note the place where the dynamic linker actually came from. */ - _dl_rtld_map.l_name = rtld_progname; + GLPM(dl_rtld_map).l_name = rtld_progname; while (_dl_argc > 1) if (! strcmp (_dl_argv[1], "--list")) @@ -1636,22 +1638,22 @@ dl_main (const ElfW(Phdr) *phdr, /* If the current libname is different from the SONAME, add the latter as well. */ { - const char *soname = l_soname (&_dl_rtld_map); + const char *soname = l_soname (&GLPM(dl_rtld_map)); if (soname != NULL - && strcmp (_dl_rtld_map.l_libname->name, soname) != 0) + && strcmp (GLPM(dl_rtld_map).l_libname->name, soname) != 0) { static struct libname_list newname; newname.name = soname; newname.next = NULL; newname.dont_free = 1; - assert (_dl_rtld_map.l_libname->next == NULL); - _dl_rtld_map.l_libname->next = &newname; + assert (GLPM(dl_rtld_map).l_libname->next == NULL); + GLPM(dl_rtld_map).l_libname->next = &newname; } } /* The ld.so must be relocated since otherwise loading audit modules will fail since they reuse the very same ld.so. */ - assert (_dl_rtld_map.l_relocated); + assert (GLPM(dl_rtld_map).l_relocated); if (! rtld_is_main) { @@ -1681,7 +1683,7 @@ dl_main (const ElfW(Phdr) *phdr, _exit (has_interp ? 0 : 2); } - struct link_map **first_preload = &_dl_rtld_map.l_next; + struct link_map **first_preload = &GLPM(dl_rtld_map).l_next; /* Set up the data structures for the system-supplied DSO early, so they can influence _dl_init_paths. */ setup_vdso (main_map, &first_preload); @@ -1694,20 +1696,20 @@ dl_main (const ElfW(Phdr) *phdr, call_init_paths (&state); /* Initialize _r_debug_extended. */ - struct r_debug *r = _dl_debug_initialize (_dl_rtld_map.l_addr, + struct r_debug *r = _dl_debug_initialize (GLPM(dl_rtld_map).l_addr, LM_ID_BASE); r->r_state = RT_CONSISTENT; /* Put the link_map for ourselves on the chain so it can be found by name. 
Note that at this point the global chain of link maps contains exactly one element, which is pointed to by dl_loaded. */ - if (! _dl_rtld_map.l_name) + if (! GLPM(dl_rtld_map).l_name) /* If not invoked directly, the dynamic linker shared object file was found by the PT_INTERP name. */ - _dl_rtld_map.l_name = (char *) _dl_rtld_map.l_libname->name; - _dl_rtld_map.l_type = lt_library; - main_map->l_next = &_dl_rtld_map; - _dl_rtld_map.l_prev = main_map; + GLPM(dl_rtld_map).l_name = (char *) GLPM(dl_rtld_map).l_libname->name; + GLPM(dl_rtld_map).l_type = lt_library; + main_map->l_next = &GLPM(dl_rtld_map); + GLPM(dl_rtld_map).l_prev = main_map; ++GL(dl_ns)[LM_ID_BASE]._ns_nloaded; ++GL(dl_load_adds); @@ -1725,8 +1727,8 @@ dl_main (const ElfW(Phdr) *phdr, const ElfW(Phdr) *rtld_phdr = (const void *) rtld_ehdr + rtld_ehdr->e_phoff; - _dl_rtld_map.l_phdr = rtld_phdr; - _dl_rtld_map.l_phnum = rtld_ehdr->e_phnum; + GLPM(dl_rtld_map).l_phdr = rtld_phdr; + GLPM(dl_rtld_map).l_phnum = rtld_ehdr->e_phnum; /* PT_GNU_RELRO is usually the last phdr. */ @@ -1734,15 +1736,15 @@ dl_main (const ElfW(Phdr) *phdr, while (cnt-- > 0) if (rtld_phdr[cnt].p_type == PT_GNU_RELRO) { - _dl_rtld_map.l_relro_addr = rtld_phdr[cnt].p_vaddr; - _dl_rtld_map.l_relro_size = rtld_phdr[cnt].p_memsz; + GLPM(dl_rtld_map).l_relro_addr = rtld_phdr[cnt].p_vaddr; + GLPM(dl_rtld_map).l_relro_size = rtld_phdr[cnt].p_memsz; break; } /* Add the dynamic linker to the TLS list if it also uses TLS. */ - if (_dl_rtld_map.l_tls_blocksize != 0) + if (GLPM(dl_rtld_map).l_tls_blocksize != 0) /* Assign a module ID. Do this before loading any audit modules. 
*/ - _dl_assign_tls_modid (&_dl_rtld_map); + _dl_assign_tls_modid (&GLPM(dl_rtld_map)); audit_list_add_dynamic_tag (&state.audit_list, main_map, DT_AUDIT); audit_list_add_dynamic_tag (&state.audit_list, main_map, DT_DEPAUDIT); @@ -1935,30 +1937,30 @@ dl_main (const ElfW(Phdr) *phdr, for (i = main_map->l_searchlist.r_nlist; i > 0; ) main_map->l_searchlist.r_list[--i]->l_global = 1; - /* Remove _dl_rtld_map from the chain. */ - _dl_rtld_map.l_prev->l_next = _dl_rtld_map.l_next; - if (_dl_rtld_map.l_next != NULL) - _dl_rtld_map.l_next->l_prev = _dl_rtld_map.l_prev; + /* Remove GLPM(dl_rtld_map) from the chain. */ + GLPM(dl_rtld_map).l_prev->l_next = GLPM(dl_rtld_map).l_next; + if (GLPM(dl_rtld_map).l_next != NULL) + GLPM(dl_rtld_map).l_next->l_prev = GLPM(dl_rtld_map).l_prev; for (i = 1; i < main_map->l_searchlist.r_nlist; ++i) - if (is_rtld_link_map (main_map->l_searchlist.r_list[i])) + if (main_map->l_searchlist.r_list[i] == &GLPM(dl_rtld_map)) break; /* Insert the link map for the dynamic loader into the chain in symbol search order because gdb uses the chain's order as its symbol search order. */ - _dl_rtld_map.l_prev = main_map->l_searchlist.r_list[i - 1]; + GLPM(dl_rtld_map).l_prev = main_map->l_searchlist.r_list[i - 1]; if (__glibc_likely (state.mode == rtld_mode_normal)) { - _dl_rtld_map.l_next = (i + 1 < main_map->l_searchlist.r_nlist - ? main_map->l_searchlist.r_list[i + 1] - : NULL); + GLPM(dl_rtld_map).l_next = (i + 1 < main_map->l_searchlist.r_nlist + ? 
main_map->l_searchlist.r_list[i + 1] + : NULL); #ifdef NEED_DL_SYSINFO_DSO if (GLRO(dl_sysinfo_map) != NULL - && _dl_rtld_map.l_prev->l_next == GLRO(dl_sysinfo_map) - && _dl_rtld_map.l_next != GLRO(dl_sysinfo_map)) - _dl_rtld_map.l_prev = GLRO(dl_sysinfo_map); + && (GLPM(dl_rtld_map).l_prev->l_next == GLRO(dl_sysinfo_map)) + && (GLPM(dl_rtld_map).l_next != GLRO(dl_sysinfo_map))) + GLPM(dl_rtld_map).l_prev = GLRO(dl_sysinfo_map); #endif } else @@ -1967,14 +1969,14 @@ dl_main (const ElfW(Phdr) *phdr, In this case it doesn't matter much where we put the interpreter object, so we just initialize the list pointer so that the assertion below holds. */ - _dl_rtld_map.l_next = _dl_rtld_map.l_prev->l_next; + GLPM(dl_rtld_map).l_next = GLPM(dl_rtld_map).l_prev->l_next; - assert (_dl_rtld_map.l_prev->l_next == _dl_rtld_map.l_next); - _dl_rtld_map.l_prev->l_next = &_dl_rtld_map; - if (_dl_rtld_map.l_next != NULL) + assert (GLPM(dl_rtld_map).l_prev->l_next == GLPM(dl_rtld_map).l_next); + GLPM(dl_rtld_map).l_prev->l_next = &GLPM(dl_rtld_map); + if (GLPM(dl_rtld_map).l_next != NULL) { - assert (_dl_rtld_map.l_next->l_prev == _dl_rtld_map.l_prev); - _dl_rtld_map.l_next->l_prev = &_dl_rtld_map; + assert (GLPM(dl_rtld_map).l_next->l_prev == GLPM(dl_rtld_map).l_prev); + GLPM(dl_rtld_map).l_next->l_prev = &GLPM(dl_rtld_map); } /* Now let us see whether all libraries are available in the @@ -2116,7 +2118,7 @@ dl_main (const ElfW(Phdr) *phdr, while (i-- > 0) { struct link_map *l = main_map->l_initfini[i]; - if (l != &_dl_rtld_map && ! l->l_faked) + if (l != &GLPM(dl_rtld_map) && ! 
l->l_faked) { args.l = l; _dl_receive_error (print_unresolved, relocate_doit, @@ -2315,7 +2317,8 @@ dl_main (const ElfW(Phdr) *phdr, { RTLD_TIMING_VAR (start); rtld_timer_start (&start); - _dl_relocate_object_no_relro (&_dl_rtld_map, main_map->l_scope, 0, 0); + _dl_relocate_object_no_relro (&GLPM(dl_rtld_map), main_map->l_scope, + 0, 0); rtld_timer_accum (&relocate_time, start); __rtld_mutex_init (); @@ -2323,7 +2326,7 @@ dl_main (const ElfW(Phdr) *phdr, } /* All ld.so initialization is complete. Apply RELRO. */ - _dl_protect_relro (&_dl_rtld_map); + _dl_protect_relro (&GLPM(dl_rtld_map)); /* Relocation is complete. Perform early libc initialization. This is the initial libc, even if audit modules have been loaded with diff --git a/elf/setup-vdso.h b/elf/setup-vdso.h index 935d9e3baf..fd5a1314bd 100644 --- a/elf/setup-vdso.h +++ b/elf/setup-vdso.h @@ -92,8 +92,8 @@ setup_vdso (struct link_map *main_map __attribute__ ((unused)), /* Rearrange the list so this DSO appears after rtld_map. */ assert (l->l_next == NULL); assert (l->l_prev == main_map); - _dl_rtld_map.l_next = l; - l->l_prev = &_dl_rtld_map; + GLPM(dl_rtld_map).l_next = l; + l->l_prev = &GLPM(dl_rtld_map); *first_preload = &l->l_next; # else GL(dl_nns) = 1; diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index 6973fe6dbe..ac71668f29 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -508,6 +508,23 @@ extern struct rtld_global _rtld_global __rtld_global_attribute__; # undef __rtld_global_attribute__ #endif +#ifdef SHARED +/* Implementation structure for the protected memory area. */ +struct rtld_protmem +{ + /* Structure describing the dynamic linker itself. */ + struct link_map _dl_rtld_map; +}; +extern struct rtld_protmem _rtld_protmem attribute_hidden; +#endif /* SHARED */ + +/* GLPM(FIELD) denotes the FIELD in the protected memory area. 
*/ +#ifdef SHARED +# define GLPM(name) _rtld_protmem._##name +#else +# define GLPM(name) _##name +#endif + #ifndef SHARED # define GLRO(name) _##name #else @@ -1325,9 +1342,6 @@ rtld_active (void) return GLRO(dl_init_all_dirs) != NULL; } -/* Pre-allocated link map for the dynamic linker itself. */ -extern struct link_map _dl_rtld_map attribute_hidden; - /* Used to store the audit information for the link map of the dynamic loader. */ extern struct auditstate _dl_rtld_auditstate[DL_NNS] attribute_hidden; @@ -1336,7 +1350,7 @@ extern struct auditstate _dl_rtld_auditstate[DL_NNS] attribute_hidden; static inline bool is_rtld_link_map (const struct link_map *l) { - return l == &_dl_rtld_map; + return l == &GLPM(dl_rtld_map); } static inline struct auditstate *

From patchwork Sun Feb 2 21:13:44 2025 X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105882
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 10/14] elf: Bootstrap allocation for future protected memory allocator Message-ID: <79fc25bb098ab1671534a83c201df942d7fba5e1.1738530302.git.fweimer@redhat.com> Date: Sun, 02 Feb 2025 22:13:44 +0100

A subsequent change will place link maps into memory which is read-only most of the time. This means that the link map for ld.so itself (GLPM (dl_rtld_map)) needs to be put there as well, which requires allocating it dynamically.
--- elf/Makefile | 1 + elf/dl-protmem_bootstrap.h | 29 ++++ elf/rtld.c | 87 ++++++---- elf/tst-rtld-nomem.c | 177 ++++++++++++++++++++ sysdeps/generic/dl-early_mmap.h | 35 ++++ sysdeps/generic/ldsodefs.h | 6 +- sysdeps/mips/Makefile | 6 + sysdeps/unix/sysv/linux/dl-early_allocate.c | 17 +- sysdeps/unix/sysv/linux/dl-early_mmap.h | 41 +++++ 9 files changed, 345 insertions(+), 54 deletions(-) create mode 100644 elf/dl-protmem_bootstrap.h create mode 100644 elf/tst-rtld-nomem.c create mode 100644 sysdeps/generic/dl-early_mmap.h create mode 100644 sysdeps/unix/sysv/linux/dl-early_mmap.h diff --git a/elf/Makefile b/elf/Makefile index 5c833871d0..1d93993241 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -463,6 +463,7 @@ tests += \ tst-rtld-no-malloc \ tst-rtld-no-malloc-audit \ tst-rtld-no-malloc-preload \ + tst-rtld-nomem \ tst-rtld-run-static \ tst-single_threaded \ tst-single_threaded-pthread \ diff --git a/elf/dl-protmem_bootstrap.h b/elf/dl-protmem_bootstrap.h new file mode 100644 index 0000000000..a2fc267a2d --- /dev/null +++ b/elf/dl-protmem_bootstrap.h @@ -0,0 +1,29 @@ +/* Bootstrap allocation for the protected memory area. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include + +/* Return a pointer to the protected memory area, or NULL if + allocation fails. 
This function is called before self-relocation, + and the system call needs to be inlined for (most) + HIDDEN_VAR_NEEDS_DYNAMIC_RELOC targets. */ +static inline __attribute__ ((always_inline)) struct rtld_protmem * +_dl_protmem_bootstrap (void) +{ + return _dl_early_mmap (sizeof (struct rtld_protmem)); +} diff --git a/elf/rtld.c b/elf/rtld.c index 0bc7d9dbcd..de9e87cd0b 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -53,6 +53,7 @@ #include #include #include +#include #include @@ -345,8 +346,6 @@ struct rtld_global _rtld_global = extern struct rtld_global _rtld_local __attribute__ ((alias ("_rtld_global"), visibility ("hidden"))); -struct rtld_protmem _rtld_protmem; - /* This variable is similar to _rtld_local, but all values are read-only after relocation. */ struct rtld_global_ro _rtld_global_ro attribute_relro = @@ -421,6 +420,7 @@ static ElfW(Addr) _dl_start_final (void *arg); struct dl_start_final_info { struct link_map l; + struct rtld_protmem *protmem; RTLD_TIMING_VAR (start_time); }; static ElfW(Addr) _dl_start_final (void *arg, @@ -455,6 +455,14 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) { ElfW(Addr) start_addr; +#ifndef DONT_USE_BOOTSTRAP_MAP + GLRO (dl_protmem) = info->protmem; +#endif + + /* Delayed error reporting after relocation processing. */ + if (GLRO (dl_protmem) == NULL) + _dl_fatal_printf ("Fatal glibc error: Cannot allocate link map\n"); + __rtld_malloc_init_stubs (); /* Do not use an initializer for these members because it would @@ -478,21 +486,10 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) RTLD_TIMING_SET (start_time, info->start_time); #endif - /* Transfer data about ourselves to the permanent link_map structure. 
*/ -#ifndef DONT_USE_BOOTSTRAP_MAP - GLPM(dl_rtld_map).l_addr = info->l.l_addr; - GLPM(dl_rtld_map).l_ld = info->l.l_ld; - GLPM(dl_rtld_map).l_ld_readonly = info->l.l_ld_readonly; - memcpy (GLPM(dl_rtld_map).l_info, info->l.l_info, - sizeof GLPM(dl_rtld_map).l_info); - GLPM(dl_rtld_map).l_mach = info->l.l_mach; - GLPM(dl_rtld_map).l_relocated = 1; -#endif _dl_setup_hash (&GLPM(dl_rtld_map)); GLPM(dl_rtld_map).l_real = &GLPM(dl_rtld_map); GLPM(dl_rtld_map).l_map_start = (ElfW(Addr)) &__ehdr_start; GLPM(dl_rtld_map).l_map_end = (ElfW(Addr)) _end; - /* Copy the TLS related data if necessary. */ #ifndef DONT_USE_BOOTSTRAP_MAP # if NO_TLS_OFFSET != 0 GLPM(dl_rtld_map).l_rw->l_tls_offset = NO_TLS_OFFSET; @@ -537,43 +534,59 @@ _dl_start (void *arg) rtld_timer_start (&info.start_time); #endif - /* Partly clean the `bootstrap_map' structure up. Don't use - `memset' since it might not be built in or inlined and we cannot - make function calls at this point. Use '__builtin_memset' if we - know it is available. We do not have to clear the memory if we - do not have to use the temporary bootstrap_map. Global variables - are initialized to zero by default. */ -#ifndef DONT_USE_BOOTSTRAP_MAP -# ifdef HAVE_BUILTIN_MEMSET - __builtin_memset (bootstrap_map.l_info, '\0', sizeof (bootstrap_map.l_info)); -# else - for (size_t cnt = 0; - cnt < sizeof (bootstrap_map.l_info) / sizeof (bootstrap_map.l_info[0]); - ++cnt) - bootstrap_map.l_info[cnt] = 0; -# endif + struct rtld_protmem *protmem = _dl_protmem_bootstrap (); + bool protmem_failed = protmem == NULL; + if (protmem_failed) + { + /* Allocate some space for a stub protected memory area on the + stack, to get to the point when we can report the error. */ + protmem = alloca (sizeof (*protmem)); + + /* Partly clean the `bootstrap_map' structure up. Don't use + `memset' since it might not be built in or inlined and we + cannot make function calls at this point. Use + '__builtin_memset' if we know it is available. 
*/ +#ifdef HAVE_BUILTIN_MEMSET + __builtin_memset (protmem->_dl_rtld_map.l_info, + '\0', sizeof (protmem->_dl_rtld_map.l_info)); +#else + for (size_t i = 0; i < array_length (protmem->_dl_rtld_map.l_info); ++i) + protmem->_dl_rtld_map.l_info[i] = NULL; #endif + } /* Figure out the run-time load address of the dynamic linker itself. */ - bootstrap_map.l_addr = elf_machine_load_address (); + protmem->_dl_rtld_map.l_addr = elf_machine_load_address (); /* Read our own dynamic section and fill in the info array. */ - bootstrap_map.l_ld = (void *) bootstrap_map.l_addr + elf_machine_dynamic (); - bootstrap_map.l_ld_readonly = DL_RO_DYN_SECTION; - elf_get_dynamic_info (&bootstrap_map, true, false); + protmem->_dl_rtld_map.l_ld = ((void *) protmem->_dl_rtld_map.l_addr + + elf_machine_dynamic ()); + protmem->_dl_rtld_map.l_ld_readonly = DL_RO_DYN_SECTION; + elf_get_dynamic_info (&protmem->_dl_rtld_map, true, false); #ifdef ELF_MACHINE_BEFORE_RTLD_RELOC - ELF_MACHINE_BEFORE_RTLD_RELOC (&bootstrap_map, bootstrap_map.l_info); + ELF_MACHINE_BEFORE_RTLD_RELOC (&protmem->_dl_rtld_map, + protmem->_dl_rtld_map.l_info); #endif - if (bootstrap_map.l_addr) + if (protmem->_dl_rtld_map.l_addr) { /* Relocate ourselves so we can do normal function calls and data access using the global offset table. */ - ELF_DYNAMIC_RELOCATE (&bootstrap_map, NULL, 0, 0, 0); + ELF_DYNAMIC_RELOCATE (&protmem->_dl_rtld_map, NULL, 0, 0, 0); } - bootstrap_map.l_relocated = 1; + protmem->_dl_rtld_map.l_relocated = 1; + + /* Communicate the original mmap failure to _dl_start_final. 
*/ + if (protmem_failed) + protmem = NULL; + +#ifdef DONT_USE_BOOTSTRAP_MAP + GLRO (dl_protmem) = protmem; +#else + info.protmem = protmem; +#endif /* Please note that we don't allow profiling of this object and therefore need not test whether we have to allocate the array @@ -1024,7 +1037,7 @@ ERROR: audit interface '%s' requires version %d (maximum supported version %d); else *last_audit = (*last_audit)->next = &newp->ifaces; - /* The dynamic linker link map is statically allocated, so the + /* The dynamic linker link map is allocated separately, so the cookie in _dl_new_object has not happened. */ link_map_audit_state (&GLPM(dl_rtld_map), GLRO (dl_naudit))->cookie = (intptr_t) &GLPM(dl_rtld_map); diff --git a/elf/tst-rtld-nomem.c b/elf/tst-rtld-nomem.c new file mode 100644 index 0000000000..b8caf5d8fe --- /dev/null +++ b/elf/tst-rtld-nomem.c @@ -0,0 +1,177 @@ +/* Test that out-of-memory during early ld.so startup reports an error. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* This test invokes execve with increasing RLIMIT_AS limits, to + trigger the early _dl_protmem_bootstrap memory allocation failure + and check that a proper error is reported for it. 
*/ + +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static int +do_test (void) +{ + long int page_size = sysconf (_SC_PAGE_SIZE); + TEST_VERIFY (page_size > 0); + + struct rlimit rlim; + TEST_COMPARE (getrlimit (RLIMIT_AS, &rlim), 0); + + /* Reduced once we encounter success. */ + int kb_limit = 2048; + + /* Exit status in case of test error. */ + enum { unexpected_error = 17 }; + + /* Used to verify that at least one execve crash is encountered. + This is how execve reports late memory allocation failures due + to rlimit. */ + bool crash_seen = false; + + /* Set to true if the early out-of-memory error message is + encountered. */ + bool oom_error_seen = false; + + /* Set to true once success (the usage message) is encountered. + This is expected to happen only after oom_error_seen turns true, + otherwise the rlimit does not work. */ + bool success_seen = false; + + /* Try increasing rlimits. The kernel rounds down to page sizes, so + try only page size increments. */ + for (int kb = 128; kb <= kb_limit; kb += page_size / 1024) + { + printf ("info: trying %d KiB\n", kb); + + int pipe_stdout[2]; + xpipe (pipe_stdout); + int pipe_stderr[2]; + xpipe (pipe_stderr); + + pid_t pid = xfork (); + if (pid == 0) + { + /* Restrict address space for the ld.so invocation. */ + rlim.rlim_cur = kb * 1024; + int ret = setrlimit (RLIMIT_AS, &rlim); + TEST_COMPARE (ret, 0); + if (ret != 0) + _exit (unexpected_error); + + /* Redirect output for capture. */ + TEST_COMPARE (dup2 (pipe_stdout[1], STDOUT_FILENO), + STDOUT_FILENO); + TEST_COMPARE (dup2 (pipe_stderr[1], STDERR_FILENO), + STDERR_FILENO); + + /* Try to invoke ld.so with the resource limit in place. 
*/ + char ldso[] = "ld.so"; + char *const argv[] = { ldso, NULL }; + execve (support_objdir_elf_ldso, argv, &argv[1]); + TEST_COMPARE (errno, ENOMEM); + _exit (unexpected_error); + } + + int status; + xwaitpid (pid, &status, 0); + + xclose (pipe_stdout[1]); + xclose (pipe_stderr[1]); + + /* No output on stdout. */ + char actual[1024]; + ssize_t count = read (pipe_stdout[0], actual, sizeof (actual)); + if (count < 0) + FAIL_EXIT1 ("read stdout: %m"); + TEST_COMPARE_BLOB ("", 0, actual, count); + + /* Read the standard error output. */ + count = read (pipe_stderr[0], actual, sizeof (actual)); + if (count < 0) + FAIL_EXIT1 ("read stderr: %m"); + + if (WIFEXITED (status) && WEXITSTATUS (status) == 1) + { + TEST_VERIFY (oom_error_seen); + static const char expected[] = "\ +ld.so: missing program name\n\ +Try 'ld.so --help' for more information.\n\ +"; + TEST_COMPARE_BLOB (expected, strlen (expected), actual, count); + if (!success_seen) + { + puts ("info: first success"); + /* Four more tries with increasing rlimit, to catch + potential secondary crashes. */ + kb_limit = kb + page_size / 1024 * 4; + } + success_seen = true; + continue; + } + if (WIFEXITED (status) && WEXITSTATUS (status) == 127) + { + TEST_VERIFY (crash_seen); + TEST_VERIFY (!success_seen); + static const char expected[] = + "Fatal glibc error: Cannot allocate link map\n"; + TEST_COMPARE_BLOB (expected, strlen (expected), actual, count); + if (!oom_error_seen) + puts ("info: first memory allocation error"); + oom_error_seen = true; + continue; + } + + TEST_VERIFY (!success_seen); + TEST_VERIFY (!oom_error_seen); + + if (WIFEXITED (status)) + { + /* Unexpected regular exit status. */ + TEST_COMPARE (WIFEXITED (status), 1); + TEST_COMPARE_BLOB ("", 0, actual, count); + } + else if (WIFSIGNALED (status) && WTERMSIG (status) == SIGSEGV) + { + /* Very early out of memory. No output expected. 
*/ + TEST_COMPARE_BLOB ("", 0, actual, count); + if (!crash_seen) + puts ("info: first expected crash observed"); + crash_seen = true; + } + else + { + /* Unexpected status. */ + printf ("error: unexpected exit status %d\n", status); + support_record_failure (); + TEST_COMPARE_BLOB ("", 0, actual, count); + } + } + + return 0; +} + +#include diff --git a/sysdeps/generic/dl-early_mmap.h b/sysdeps/generic/dl-early_mmap.h new file mode 100644 index 0000000000..75eb8eb30c --- /dev/null +++ b/sysdeps/generic/dl-early_mmap.h @@ -0,0 +1,35 @@ +/* Early anonymous mmap for ld.so, before self-relocation. Generic version. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef DL_EARLY_MMAP_H +#define DL_EARLY_MMAP_H + +/* The generic version assumes that regular mmap works. It returns + NULL on failure. 
*/ +static inline void * +_dl_early_mmap (size_t size) +{ + void *ret = __mmap (NULL, size, PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, -1, 0); + if (ret == MAP_FAILED) + return NULL; + else + return ret; +} + +#endif /* DL_EARLY_MMAP_H */ diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index ac71668f29..0ff0650cb1 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -515,12 +515,11 @@ struct rtld_protmem /* Structure describing the dynamic linker itself. */ struct link_map _dl_rtld_map; }; -extern struct rtld_protmem _rtld_protmem attribute_hidden; #endif /* SHARED */ /* GLPM(FIELD) denotes the FIELD in the protected memory area. */ #ifdef SHARED -# define GLPM(name) _rtld_protmem._##name +# define GLPM(name) GLRO (dl_protmem)->_##name #else # define GLPM(name) _##name #endif @@ -660,6 +659,9 @@ struct rtld_global_ro EXTERN enum dso_sort_algorithm _dl_dso_sort_algo; #ifdef SHARED + /* Pointer to the protected memory area. */ + EXTERN struct rtld_protmem *_dl_protmem; + /* We add a function table to _rtld_global which is then used to call the function instead of going through the PLT. The result is that we can avoid exporting the functions and we do not jump diff --git a/sysdeps/mips/Makefile b/sysdeps/mips/Makefile index d189973aa0..2ec9bf2a6c 100644 --- a/sysdeps/mips/Makefile +++ b/sysdeps/mips/Makefile @@ -32,6 +32,12 @@ test-xfail-tst-audit24d = yes test-xfail-tst-audit25a = yes test-xfail-tst-audit25b = yes +# _dl_start performs a system call before self-relocation, to allocate +# the link map for ld.so itself. This involves a direct function +# call. Build rtld.c in MIPS32 mode, so that this function call does +# not require a run-time relocation. 
+CFLAGS-rtld.c += -mno-mips16 + ifneq ($(o32-fpabi),) tests += tst-abi-interlink diff --git a/sysdeps/unix/sysv/linux/dl-early_allocate.c b/sysdeps/unix/sysv/linux/dl-early_allocate.c index 257519b789..ca7121d52e 100644 --- a/sysdeps/unix/sysv/linux/dl-early_allocate.c +++ b/sysdeps/unix/sysv/linux/dl-early_allocate.c @@ -29,7 +29,7 @@ #include #include -#include +#include /* Defined in brk.c. */ extern void *__curbrk; @@ -63,20 +63,7 @@ _dl_early_allocate (size_t size) unfortunate ASLR layout decisions and kernel bugs, particularly for static PIE. */ if (result == NULL) - { - long int ret; - int prot = PROT_READ | PROT_WRITE; - int flags = MAP_PRIVATE | MAP_ANONYMOUS; -#ifdef __NR_mmap2 - ret = MMAP_CALL_INTERNAL (mmap2, 0, size, prot, flags, -1, 0); -#else - ret = MMAP_CALL_INTERNAL (mmap, 0, size, prot, flags, -1, 0); -#endif - if (INTERNAL_SYSCALL_ERROR_P (ret)) - result = NULL; - else - result = (void *) ret; - } + result = _dl_early_mmap (size); return result; } diff --git a/sysdeps/unix/sysv/linux/dl-early_mmap.h b/sysdeps/unix/sysv/linux/dl-early_mmap.h new file mode 100644 index 0000000000..1d83daa6a6 --- /dev/null +++ b/sysdeps/unix/sysv/linux/dl-early_mmap.h @@ -0,0 +1,41 @@ +/* Early anonymous mmap for ld.so, before self-relocation. Linux version. + Copyright (C) 2022-2023 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. 
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef DL_EARLY_MMAP_H +#define DL_EARLY_MMAP_H + +#include + +static inline __attribute__ ((always_inline)) void * +_dl_early_mmap (size_t size) +{ + long int ret; + int prot = PROT_READ | PROT_WRITE; + int flags = MAP_PRIVATE | MAP_ANONYMOUS; +#ifdef __NR_mmap2 + ret = MMAP_CALL_INTERNAL (mmap2, 0, size, prot, flags, -1, 0); +#else + ret = MMAP_CALL_INTERNAL (mmap, 0, size, prot, flags, -1, 0); +#endif + if (INTERNAL_SYSCALL_ERROR_P (ret)) + return NULL; + else + return (void *) ret; +} + +#endif /* DL_EARLY_MMAP_H */ From patchwork Sun Feb 2 21:13:50 2025 X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105885
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 11/14] elf: Implement a region-based protected memory allocator Date: Sun, 02 Feb 2025 22:13:50 +0100 Use it to keep the link maps read-only most of the time. The path to the link maps is not yet protected (they still come from GL (dl_nns)). However, direct overwrites over l_info (l_info[DT_FINI] in particular) are blocked. In _dl_new_object, do not assume that the allocator provides zeroed memory.
--- elf/Makefile | 12 + elf/dl-close.c | 20 +- elf/dl-libc_freeres.c | 5 + elf/dl-load.c | 33 ++- elf/dl-object.c | 24 +- elf/dl-open.c | 18 ++ elf/dl-protmem-internal.h | 66 ++++++ elf/dl-protmem.c | 425 +++++++++++++++++++++++++++++++++++ elf/dl-protmem.h | 93 ++++++++ elf/dl-protmem_bootstrap.h | 9 +- elf/rtld.c | 10 + elf/tst-dl-protmem.c | 350 +++++++++++++++++++++++++++++ elf/tst-relro-linkmap-mod1.c | 42 ++++ elf/tst-relro-linkmap-mod2.c | 2 + elf/tst-relro-linkmap-mod3.c | 2 + elf/tst-relro-linkmap.c | 112 +++++++++ include/link.h | 3 + sysdeps/generic/ldsodefs.h | 8 +- 18 files changed, 1216 insertions(+), 18 deletions(-) create mode 100644 elf/dl-protmem-internal.h create mode 100644 elf/dl-protmem.c create mode 100644 elf/dl-protmem.h create mode 100644 elf/tst-dl-protmem.c create mode 100644 elf/tst-relro-linkmap-mod1.c create mode 100644 elf/tst-relro-linkmap-mod2.c create mode 100644 elf/tst-relro-linkmap-mod3.c create mode 100644 elf/tst-relro-linkmap.c diff --git a/elf/Makefile b/elf/Makefile index 1d93993241..06b1f1fae5 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -72,6 +72,7 @@ dl-routines = \ dl-open \ dl-origin \ dl-printf \ + dl-protmem \ dl-reloc \ dl-runtime \ dl-scope \ @@ -117,6 +118,7 @@ elide-routines.os = \ # These object files are only included in the dynamically-linked libc. 
shared-only-routines = \ + dl-protmem \ libc-dl-profile \ libc-dl-profstub \ libc-dl_find_object \ @@ -529,11 +531,13 @@ tests-internal += \ tst-audit19a \ tst-create_format1 \ tst-dl-hwcaps_split \ + tst-dl-protmem \ tst-dl_find_object \ tst-dl_find_object-threads \ tst-dlmopen2 \ tst-hash-collision3 \ tst-ptrguard1 \ + tst-relro-linkmap \ tst-stackguard1 \ tst-tls-surplus \ tst-tls3 \ @@ -976,6 +980,9 @@ modules-names += \ tst-recursive-tlsmod13 \ tst-recursive-tlsmod14 \ tst-recursive-tlsmod15 \ + tst-relro-linkmap-mod1 \ + tst-relro-linkmap-mod2 \ + tst-relro-linkmap-mod3 \ tst-relsort1mod1 \ tst-relsort1mod2 \ tst-ro-dynamic-mod \ @@ -3393,3 +3400,8 @@ $(objpfx)tst-nolink-libc-2: $(objpfx)tst-nolink-libc.o -Wl,--dynamic-linker=$(objpfx)ld.so $(objpfx)tst-nolink-libc-2.out: $(objpfx)tst-nolink-libc-2 $(objpfx)ld.so $< > $@ 2>&1; $(evaluate-test) + +LDFLAGS-tst-relro-linkmap = -Wl,-E +$(objpfx)tst-relro-linkmap: $(objpfx)tst-relro-linkmap-mod1.so +$(objpfx)tst-relro-linkmap.out: $(objpfx)tst-dlopenfailmod1.so \ + $(objpfx)tst-relro-linkmap-mod2.so $(objpfx)tst-relro-linkmap-mod3.so diff --git a/elf/dl-close.c b/elf/dl-close.c index 3169ad03bd..4865c3560c 100644 --- a/elf/dl-close.c +++ b/elf/dl-close.c @@ -33,6 +33,7 @@ #include #include #include +#include #include @@ -130,6 +131,9 @@ _dl_close_worker (struct link_map *map, bool force) return; } + /* Actual changes are about to happen. */ + _dl_protmem_begin (); + Lmid_t nsid = map->l_ns; struct link_namespaces *ns = &GL(dl_ns)[nsid]; @@ -260,7 +264,10 @@ _dl_close_worker (struct link_map *map, bool force) /* Call its termination function. Do not do it for half-cooked objects. Temporarily disable exception - handling, so that errors are fatal. */ + handling, so that errors are fatal. + + Link maps are writable during this call, but avoiding + that is probably too costly. 
*/ if (imap->l_rw->l_init_called) _dl_catch_exception (NULL, _dl_call_fini, imap); @@ -360,8 +367,11 @@ _dl_close_worker (struct link_map *map, bool force) newp = (struct r_scope_elem **) malloc (new_size * sizeof (struct r_scope_elem *)); if (newp == NULL) - _dl_signal_error (ENOMEM, "dlclose", NULL, - N_("cannot create scope list")); + { + _dl_protmem_end (); + _dl_signal_error (ENOMEM, "dlclose", NULL, + N_("cannot create scope list")); + } } /* Copy over the remaining scope elements. */ @@ -709,7 +719,7 @@ _dl_close_worker (struct link_map *map, bool force) if (imap == GL(dl_initfirst)) GL(dl_initfirst) = NULL; - free (imap); + _dl_free_object (imap); } } @@ -758,6 +768,8 @@ _dl_close_worker (struct link_map *map, bool force) } dl_close_state = not_pending; + + _dl_protmem_end (); } diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c index e8bd7a4b98..093724b765 100644 --- a/elf/dl-libc_freeres.c +++ b/elf/dl-libc_freeres.c @@ -18,6 +18,7 @@ #include #include +#include static bool free_slotinfo (struct dtv_slotinfo_list **elemp) @@ -52,6 +53,10 @@ __rtld_libc_freeres (void) struct link_map *l; struct r_search_path_elem *d; + /* We are about to write to link maps. This is not paired with + _dl_protmem_end because the process is going away anyway. */ + _dl_protmem_begin (); + /* Remove all search directories. */ d = GL(dl_all_dirs); while (d != GLRO(dl_init_all_dirs)) diff --git a/elf/dl-load.c b/elf/dl-load.c index 06cbedd8a7..d9ddd6e0a3 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -34,6 +34,7 @@ #include #include #include +#include /* Type for the buffer we put the ELF header and hopefully the program header. This buffer does not really have to be too large. 
In most @@ -962,7 +963,8 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, free (l->l_libname); if (l != NULL && l->l_phdr_allocated) free ((void *) l->l_phdr); - free (l); + if (l != NULL) + _dl_free_object (l); free (realname); _dl_signal_error (errval, name, NULL, errstring); } @@ -2214,6 +2216,22 @@ add_path (struct add_path_state *p, const struct r_search_path_struct *sps, } } +/* Wrap cache_rpath to unprotect memory first if necessary. */ +static bool +cache_rpath_unprotect (struct link_map *l, + struct r_search_path_struct *sp, + int tag, + const char *what, + bool *unprotected) +{ + if (sp->dirs == NULL && !*unprotected) + { + _dl_protmem_begin (); + *unprotected = true; + } + return cache_rpath (l, sp, tag, what); +} + void _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) { @@ -2230,6 +2248,7 @@ _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) .si = si, .allocptr = (char *) &si->dls_serpath[si->dls_cnt] }; + bool unprotected = false; # define add_path(p, sps, flags) add_path(p, sps, 0) /* XXX */ @@ -2242,7 +2261,8 @@ _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) struct link_map *l = loader; do { - if (cache_rpath (l, &l->l_rpath_dirs, DT_RPATH, "RPATH")) + if (cache_rpath_unprotect (l, &l->l_rpath_dirs, DT_RPATH, + "RPATH", &unprotected)) add_path (&p, &l->l_rpath_dirs, XXX_RPATH); l = l->l_loader; } @@ -2253,7 +2273,8 @@ _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) { l = GL(dl_ns)[LM_ID_BASE]._ns_loaded; if (l != NULL && l->l_type != lt_loaded && l != loader) - if (cache_rpath (l, &l->l_rpath_dirs, DT_RPATH, "RPATH")) + if (cache_rpath_unprotect (l, &l->l_rpath_dirs, DT_RPATH, + "RPATH", &unprotected)) add_path (&p, &l->l_rpath_dirs, XXX_RPATH); } } @@ -2262,7 +2283,8 @@ _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) add_path (&p, &__rtld_env_path_list, XXX_ENV); /* Look at the 
RUNPATH information for this binary. */ - if (cache_rpath (loader, &loader->l_runpath_dirs, DT_RUNPATH, "RUNPATH")) + if (cache_rpath_unprotect (loader, &loader->l_runpath_dirs, DT_RUNPATH, + "RUNPATH", &unprotected)) add_path (&p, &loader->l_runpath_dirs, XXX_RUNPATH); /* XXX @@ -2277,4 +2299,7 @@ _dl_rtld_di_serinfo (struct link_map *loader, Dl_serinfo *si, bool counting) /* Count the struct size before the string area, which we didn't know before we completed dls_cnt. */ si->dls_size += (char *) &si->dls_serpath[si->dls_cnt] - (char *) si; + + if (unprotected) + _dl_protmem_end (); } diff --git a/elf/dl-object.c b/elf/dl-object.c index db9c635c7e..b28609fa27 100644 --- a/elf/dl-object.c +++ b/elf/dl-object.c @@ -21,6 +21,7 @@ #include #include #include +#include #include @@ -89,15 +90,19 @@ _dl_new_object (char *realname, const char *libname, int type, # define audit_space 0 #endif - new = calloc (sizeof (*new) - + sizeof (struct link_map_private *) - + sizeof (*newname) + libname_len, 1); + size_t l_size = (sizeof (*new) + + sizeof (struct link_map_private *) + + sizeof (*newname) + libname_len); + + new = _dl_protmem_allocate (l_size); if (new == NULL) return NULL; + memset (new, 0, sizeof (*new)); + new->l_size = l_size; new->l_rw = calloc (1, sizeof (*new->l_rw) + audit_space); if (new->l_rw == NULL) { - free (new); + _dl_protmem_free (new, l_size); return NULL; } @@ -107,7 +112,7 @@ _dl_new_object (char *realname, const char *libname, int type, new->l_libname = newname = (struct libname_list *) (new->l_symbolic_searchlist.r_list + 1); newname->name = (char *) memcpy (newname + 1, libname, libname_len); - /* newname->next = NULL; We use calloc therefore not necessary. 
*/ + newname->next = NULL; newname->dont_free = 1; /* When we create the executable link map, or a VDSO link map, we start @@ -142,12 +147,9 @@ _dl_new_object (char *realname, const char *libname, int type, #ifdef SHARED for (unsigned int cnt = 0; cnt < naudit; ++cnt) - /* No need to initialize bindflags due to calloc. */ link_map_audit_state (new, cnt)->cookie = (uintptr_t) new; #endif - /* new->l_global = 0; We use calloc therefore not necessary. */ - /* Use the 'l_scope_mem' array by default for the 'l_scope' information. If we need more entries we will allocate a large array dynamically. */ @@ -266,3 +268,9 @@ _dl_new_object (char *realname, const char *libname, int type, return new; } + +void +_dl_free_object (struct link_map *l) +{ + _dl_protmem_free (l, l->l_size); +} diff --git a/elf/dl-open.c b/elf/dl-open.c index 85d6bbc7c2..c73c44ff15 100644 --- a/elf/dl-open.c +++ b/elf/dl-open.c @@ -37,6 +37,7 @@ #include #include #include +#include #include @@ -172,6 +173,8 @@ add_to_global_update (struct link_map *new) { struct link_namespaces *ns = &GL (dl_ns)[new->l_ns]; + _dl_protmem_begin (); + /* Now add the new entries. */ unsigned int new_nlist = ns->_ns_main_searchlist->r_nlist; for (unsigned int cnt = 0; cnt < new->l_searchlist.r_nlist; ++cnt) @@ -202,6 +205,8 @@ add_to_global_update (struct link_map *new) atomic_write_barrier (); ns->_ns_main_searchlist->r_nlist = new_nlist; + + _dl_protmem_end (); } /* Search link maps in all namespaces for the DSO that contains the object at @@ -515,6 +520,11 @@ dl_open_worker_begin (void *a) const char *file = args->file; int mode = args->mode; + /* Prepare for link map updates. If dl_open_worker below returns + normally, a matching _dl_protmem_end call is performed there. On + an exception, the handler in the caller has to perform it. */ + _dl_protmem_begin (); + /* The namespace ID is now known. 
Keep track of whether libc.so was already loaded, to determine whether it is necessary to call the early initialization routine (or clear libc_map on error). */ @@ -778,6 +788,10 @@ dl_open_worker (void *a) _dl_signal_exception (err, &ex, NULL); } + /* Make state read-only before running user code in ELF + constructors. */ + _dl_protmem_end (); + if (!args->worker_continue) return; @@ -927,6 +941,10 @@ no more namespaces available for dlmopen()")); the flag here. */ } + /* Due to the exception, we did not end the protmem transaction + before. */ + _dl_protmem_end (); + /* Release the lock. */ __rtld_lock_unlock_recursive (GL(dl_load_lock)); diff --git a/elf/dl-protmem-internal.h b/elf/dl-protmem-internal.h new file mode 100644 index 0000000000..278581d5d9 --- /dev/null +++ b/elf/dl-protmem-internal.h @@ -0,0 +1,66 @@ +/* Protected memory allocator for ld.so. Internal interfaces. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* These declarations are needed by , which + has to be inlined into _dl_start. */ + +#ifndef DL_PROTMEM_INTERNAL_H +#define DL_PROTMEM_INTERNAL_H + +/* Minimum chunk size. Used to preserve alignment. */ +enum { _dlpm_chunk_minimal_size = 8 }; + +/* The initial allocation covers about 150 link maps, which should be + enough for most programs. 
*/ +#if __WORDSIZE == 32 +# define DL_PROTMEM_INITIAL_REGION_SIZE 131072 +#else +# define DL_PROTMEM_INITIAL_REGION_SIZE 262144 +#endif + +#define DL_PROTMEM_REGION_COUNT 12 + +/* Struct tag denoting freelist entries. */ +struct dl_protmem_freelist_chunk; + +/* Global state for the protected memory allocator. */ +struct dl_protmem_state +{ + /* GLRO (dl_protmem) points to this field. */ + struct rtld_protmem protmem + __attribute__ ((__aligned__ (_dlpm_chunk_minimal_size))); + + /* Pointers to mmap-allocated regions. For index i, the size of the + allocation is DL_PROTMEM_INITIAL_REGION_SIZE << i. The space of + the combined regions is sufficient for hundreds of thousands of + link maps, so the dynamic linker runs into scalability issues + well before it is exhausted. */ + void *regions[DL_PROTMEM_REGION_COUNT]; + + /* List of unused allocations for each region, in increasing address + order. See _dlpm_chunk_size for how the freed chunk size is + encoded. */ + struct dl_protmem_freelist_chunk *freelist[DL_PROTMEM_REGION_COUNT]; + + /* One cached free chunk, used to avoid scanning the freelist for + adjacent deallocations. Tracking these chunks per region avoids + accidental merging across regions. */ + struct dl_protmem_freelist_chunk *pending_free[DL_PROTMEM_REGION_COUNT]; +}; + +#endif /* DL_PROTMEM_INTERNAL_H */ diff --git a/elf/dl-protmem.c b/elf/dl-protmem.c new file mode 100644 index 0000000000..453657b3c2 --- /dev/null +++ b/elf/dl-protmem.c @@ -0,0 +1,425 @@ +/* Protected memory allocator for ld.so. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version.
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include + +#include +#include + +#include +#include +#include + +/* Nesting counter for _dl_protmem_begin/_dl_protmem_end. This is + primarily required because we may have a call sequence dlopen, + malloc, dlopen. Without the counter, _dl_protmem_end in the inner + dlopen would make a link map that is still being initialized + read-only. */ +static unsigned int _dl_protmem_begin_count; + +static inline struct dl_protmem_state * +_dl_protmem_state (void) +{ + return ((void *) GLRO (dl_protmem) + - offsetof (struct dl_protmem_state, protmem)); +} + +/* Address of a chunk on the free list. This is an abstract pointer, + never to be dereferenced explicitly. Use the accessor functions + below instead. + + Metadata layout: The first word is the pointer to the next chunk, + except that the lowest bit (unused due to alignment) is used as + a flag. If it is 1, the chunk size is the minimal size, and the + size is not stored separately. If the flag is 0, the size is + stored in the second metadata word. */ +typedef struct dl_protmem_freelist_chunk *chunk; + +/* Returns the size of a chunk on the free list whose start address is + PTR. The size includes the metadata. */ +static inline size_t +_dlpm_chunk_size (chunk ptr) +{ + uintptr_t *p = (uintptr_t *)ptr; + if (*p & 1) + return _dlpm_chunk_minimal_size; + else + return p[1]; +} + +/* Returns the address of the next free list element. */ +static inline chunk +_dlpm_chunk_next (chunk ptr) +{ + uintptr_t *p = (uintptr_t *)ptr; + /* Mask away the size bit.
*/ + return (chunk) (*p & -2); +} + +static inline void +_dlpm_chunk_set_next (chunk ptr, chunk newnext) +{ + /* Preserve the value of the size bit. */ + uintptr_t *p = (uintptr_t *)ptr; + *p = (uintptr_t) newnext | (*p & 1); +} + +/* Creates a new freelist chunk at PTR, with NEXT as the next chunk, + and SIZE as the size of this chunk (which includes the + metadata). Returns PTR. */ +static inline chunk +_dlpm_chunk_make (chunk ptr, chunk next, size_t size) +{ + uintptr_t *p = (uintptr_t *)ptr; + if (size <= _dlpm_chunk_minimal_size) + /* Compressed size. */ + *p = (uintptr_t) next | 1; + else + { + p[0] = (uintptr_t) next; + p[1] = size; + } + return ptr; +} + +/* Return true if PTR2 comes immediately after PTR1 in memory. PTR2 + can be NULL. */ +static inline bool +_dlpm_chunk_adjancent (chunk ptr1, chunk ptr2) +{ + return (uintptr_t) ptr2 == (uintptr_t) ptr1 + _dlpm_chunk_size (ptr1); +} + +/* Put the pending allocation on the free list. */ +static void +_dlpm_free_pending (struct dl_protmem_state *state, unsigned int region) +{ + chunk pending = state->pending_free[region]; + state->pending_free[region] = NULL; + + /* The current chunk pointer. In the while loop below, coalescing + potentially happens at the end of this chunk, so that the chunk + address does not change. */ + chunk current = state->freelist[region]; + + /* Special cases before loop start. */ + + if (current == NULL) + { + /* The freelist is empty. Nothing to coalesce. */ + state->freelist[region] = pending; + return; + } + + /* During the loop below, this merge is handled as part of the next + chunk processing. */ + if (pending < current) + { + /* The new chunk will be first on the freelist. */ + state->freelist[region] = pending; + + /* See if we can coalesce. 
*/ + if (_dlpm_chunk_adjancent (pending, current)) + { + chunk new_next = _dlpm_chunk_next (current); + size_t new_size = (_dlpm_chunk_size (pending) + + _dlpm_chunk_size (current)); + _dlpm_chunk_make (pending, new_next, new_size); + } + else + _dlpm_chunk_set_next (pending, current); + return; + } + + while (true) + { + chunk next = _dlpm_chunk_next (current); + if (_dlpm_chunk_adjancent (current, pending)) + { + /* We can coalesce. See if this completely fills a gap. */ + if (_dlpm_chunk_adjancent (pending, next)) + { + /* Merge three chunks. */ + chunk new_next = _dlpm_chunk_next (next); + size_t new_size = (_dlpm_chunk_size (current) + + _dlpm_chunk_size (pending) + + _dlpm_chunk_size (next)); + /* The address of the current chunk does not change, so + the next pointer leading to it remains valid. */ + _dlpm_chunk_make (current, new_next, new_size); + } + else + { + /* Merge two chunks. */ + size_t new_size = (_dlpm_chunk_size (current) + + _dlpm_chunk_size (pending)); + /* The current chunk pointer remains unchanged. */ + _dlpm_chunk_make (current, next, new_size); + } + break; + } + if (next == NULL) + { + /* New last chunk on freelist. */ + _dlpm_chunk_set_next (current, pending); + break; + } + if (pending < next) + { + /* This is the right spot on the freelist. */ + _dlpm_chunk_set_next (current, pending); + + /* See if we can coalesce with the next chunk. */ + if (_dlpm_chunk_adjancent (pending, next)) + { + chunk new_next = _dlpm_chunk_next (next); + size_t new_size = (_dlpm_chunk_size (pending) + + _dlpm_chunk_size (next)); + _dlpm_chunk_make (pending, new_next, new_size); + } + else + _dlpm_chunk_set_next (pending, next); + break; + } + current = next; + } +} + +/* Returns the region index for the pointer. Terminates the process + if PTR is not on the heap. */ +static unsigned int +_dlpm_find_region (struct dl_protmem_state *state, void *ptr) +{ + /* Find the region in which the pointer is located. 
*/ + size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; + for (unsigned int i = 0; i < array_length (state->regions); ++i) + { + if (ptr >= state->regions[i] && ptr < state->regions[i] + region_size) + return i; + region_size *= 2; + } + + _dl_fatal_printf ("\ +Fatal glibc error: Protected memory allocation not found\n"); +} + +void +_dl_protmem_init (void) +{ + struct dl_protmem_state *state = _dl_protmem_state (); + state->regions[0] = state; + /* The part of the region after the allocator state (with the + embedded protected memory area) is unused. */ + state->freelist[0] = (chunk) (state + 1); + void *initial_region_end = (void *) state + DL_PROTMEM_INITIAL_REGION_SIZE; + _dlpm_chunk_make (state->freelist[0], NULL, + initial_region_end - (void *) state->freelist[0]); + _dl_protmem_begin_count = 1; +} + +void +_dl_protmem_begin (void) +{ + if (_dl_protmem_begin_count++ != 0) + /* Already unprotected. */ + return; + + struct dl_protmem_state *state = _dl_protmem_state (); + size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; + for (unsigned int i = 0; i < array_length (state->regions); ++i) + if (state->regions[i] != NULL) + { + if (__mprotect (state->regions[i], region_size, + PROT_READ | PROT_WRITE) != 0) + _dl_signal_error (ENOMEM, NULL, NULL, + "Cannot make protected memory writable"); + region_size *= 2; + } +} + +void +_dl_protmem_end (void) +{ + if (--_dl_protmem_begin_count > 0) + return; + + struct dl_protmem_state *state = _dl_protmem_state (); + size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; + for (unsigned int i = 0; i < array_length (state->regions); ++i) + if (state->regions[i] != NULL) + /* Ignore errors here because we can continue running with + read-write memory, with reduced hardening. */ + (void) __mprotect (state->regions[i], region_size, PROT_READ); +} + +void * +_dl_protmem_allocate (size_t requested_size) +{ + /* Round up the size to the next multiple of 8, to preserve chunk + alignment.
*/ + { + size_t adjusted_size = roundup (requested_size, _dlpm_chunk_minimal_size); + if (adjusted_size < requested_size) + return NULL; /* Overflow. */ + requested_size = adjusted_size; + } + + struct dl_protmem_state *state = _dl_protmem_state (); + + /* Try to find an exact match among the pending chunks. */ + for (unsigned int i = 0; i < array_length (state->regions); ++i) + { + chunk pending = state->pending_free[i]; + if (pending == NULL) + continue; + size_t pending_size = _dlpm_chunk_size (pending); + if (pending_size == requested_size) + { + state->pending_free[i] = NULL; + return pending; + } + } + + /* Remove all pending allocations. */ + for (unsigned int i = 0; i < array_length (state->regions); ++i) + if (state->pending_free[i] != NULL) + _dlpm_free_pending (state, i); + + /* This points to the previous chunk of the best chunk found so far, + or the root of the freelist. This place needs to be updated to + remove the best chunk from the freelist. */ + chunk best_previous_p = NULL; + size_t best_p_size = -1; + + /* Best-fit search along the free lists. */ + for (unsigned int i = 0; i < array_length (state->regions); ++i) + if (state->freelist[i] != NULL) + { + /* Use the head pointer of the list as the next pointer. + The missing size field is not updated below. */ + chunk last_p = (chunk) &state->freelist[i]; + chunk p = state->freelist[i]; + while (true) + { + size_t candidate_size = _dlpm_chunk_size (p); + chunk next_p = _dlpm_chunk_next (p); + if (candidate_size == requested_size) + { + /* Perfect fit. No further search needed. + Remove this chunk from the free list. */ + _dlpm_chunk_set_next (last_p, next_p); + return p; + } + if (candidate_size > requested_size + && candidate_size < best_p_size) + /* Chunk with a better usable size. */ + { + best_previous_p = last_p; + best_p_size = candidate_size; + } + if (next_p == NULL) + break; + last_p = p; + p = next_p; + } + } + + if (best_previous_p == NULL) + { + /* No usable chunk found. Grow the heap. 
*/ + size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; + for (unsigned int i = 0; i < array_length (state->regions); ++i) + { + if (state->regions[i] == NULL && region_size >= requested_size) + { + void *ptr = __mmap (NULL, region_size, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + if (ptr == MAP_FAILED) + return NULL; + state->regions[i] = ptr; + if (region_size == requested_size) + /* Perfect fit: the entire region serves as the allocation. */ + return ptr; + + /* Create a free list with one entry for the entire region. */ + state->freelist[i] = _dlpm_chunk_make (ptr, NULL, region_size); + best_previous_p = (chunk) &state->freelist[i]; + best_p_size = region_size; + + /* Chunk is split below. */ + break; + } + region_size *= 2; + } + + /* All regions have been exhausted. */ + if (best_previous_p == NULL) + return NULL; + } + + /* Split the chunk. */ + chunk p = _dlpm_chunk_next (best_previous_p); + void *p_end = (void *) p + best_p_size; /* Memory after this chunk. */ + chunk p_next = _dlpm_chunk_next (p); /* Following chunk on freelist. */ + void *remaining = (void *) p + requested_size; /* Place of the new chunk. */ + /* Replace the chunk on the free list with its remainder. */ + _dlpm_chunk_set_next (best_previous_p, + _dlpm_chunk_make (remaining, + p_next, p_end - remaining)); + return p; +} + +void +_dl_protmem_free (void *ptr, size_t requested_size) +{ + requested_size = roundup (requested_size, _dlpm_chunk_minimal_size); + + struct dl_protmem_state *state = _dl_protmem_state (); + unsigned int region = _dlpm_find_region (state, ptr); + + { + chunk pending = state->pending_free[region]; + if (pending != NULL) + { + /* First try merging with the old allocation. */ + if (_dlpm_chunk_adjancent (pending, ptr)) + { + /* Extend the existing pending chunk. The start address does + not change. 
*/ + _dlpm_chunk_make (pending, NULL, + _dlpm_chunk_size (pending) + requested_size); + return; + } + if (_dlpm_chunk_adjancent (ptr, pending)) + { + /* Create a new chunk that has the existing chunk at the end. */ + state->pending_free[region] + = _dlpm_chunk_make (ptr, NULL, + requested_size + _dlpm_chunk_size (pending)); + return; + } + + /* Merging did not work out. Get rid of the old pending + allocation. */ + _dlpm_free_pending (state, region); + } + } + + /* No pending allocation at this point. Create new free chunk. */ + state->pending_free[region] = _dlpm_chunk_make (ptr, NULL, requested_size); +} diff --git a/elf/dl-protmem.h b/elf/dl-protmem.h new file mode 100644 index 0000000000..32182053a5 --- /dev/null +++ b/elf/dl-protmem.h @@ -0,0 +1,93 @@ +/* Protected memory allocator for ld.so. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* The protected memory allocator manages the memory for the GLPM + variables (in shared builds), and for additional memory managed by + _dl_protmem_allocate and _dl_protmem_free. + + After a call to _dl_protmem_begin and until the matching call to + _dl_protmem_end, the GLPM variables and memory allocated using + _dl_protmem_allocate are writable. _dl_protmem_begin and + _dl_protmem_end calls can be nested.
In this case, only the + outermost _dl_protmem_end call makes memory read-only. */ + +#ifndef DL_PROTMEM_H +#define DL_PROTMEM_H + +#include + +#ifdef SHARED +/* Must be called after _dl_allocate_rtld_map and before any of the + functions below. Implies the first _dl_protmem_begin call. */ +void _dl_protmem_init (void) attribute_hidden; + +/* Frees memory allocated using _dl_protmem_allocate. The passed size + must be the same as the one passed to _dl_protmem_allocate. + Protected memory must be writable when this function is called. */ +void _dl_protmem_free (void *ptr, size_t size) attribute_hidden; + +/* Allocate protected memory of SIZE bytes. Returns NULL on + allocation failure. Protected memory must be writable when this + function is called. The allocation will be writable and contain + unspecified bytes (similar to malloc). */ +void *_dl_protmem_allocate (size_t size) attribute_hidden + __attribute_malloc__ __attribute_alloc_size__ ((1)) + __attr_dealloc (_dl_protmem_free, 1); + +/* _dl_protmem_begin makes protected memory writable, and + _dl_protmem_end makes it read-only again. Calls to these functions + must be paired. Within this region, protected memory is writable. + See the initial description above. + + Failure to make memory writable in _dl_protmem_end is communicated + via an ld.so exception, typically resulting in a dlopen failure. + This can happen after a call to fork if memory overcommitment is + disabled. */ +void _dl_protmem_begin (void) attribute_hidden; +void _dl_protmem_end (void) attribute_hidden; + +#else /* !SHARED */ +/* The protected memory allocator does not exist for static builds. + Use malloc directly.
*/ + +#include + +static inline void * +_dl_protmem_allocate (size_t size) +{ + return calloc (size, 1); +} + +static inline void +_dl_protmem_free (void *ptr, size_t size) +{ + free (ptr); +} + +static inline void +_dl_protmem_begin (void) +{ +} + +static inline void +_dl_protmem_end (void) +{ +} +#endif /* !SHARED */ + +#endif /* DL_PROTMEM_H */ diff --git a/elf/dl-protmem_bootstrap.h b/elf/dl-protmem_bootstrap.h index a2fc267a2d..fef90bdf0a 100644 --- a/elf/dl-protmem_bootstrap.h +++ b/elf/dl-protmem_bootstrap.h @@ -17,6 +17,7 @@ . */ #include +#include /* Return a pointer to the protected memory area, or NULL if allocation fails. This function is called before self-relocation, @@ -25,5 +26,11 @@ static inline __attribute__ ((always_inline)) struct rtld_protmem * _dl_protmem_bootstrap (void) { - return _dl_early_mmap (sizeof (struct rtld_protmem)); + /* The protected memory area is nested within the bootstrap + allocation. */ + struct dl_protmem_state *ptr + = _dl_early_mmap (DL_PROTMEM_INITIAL_REGION_SIZE); + if (ptr == NULL) + return NULL; + return &ptr->protmem; } diff --git a/elf/rtld.c b/elf/rtld.c index de9e87cd0b..791a875cce 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -54,6 +54,7 @@ #include #include #include +#include #include @@ -463,6 +464,10 @@ _dl_start_final (void *arg, struct dl_start_final_info *info) if (GLRO (dl_protmem) == NULL) _dl_fatal_printf ("Fatal glibc error: Cannot allocate link map\n"); + /* Set up the protected memory allocator, transferring the rtld link + map allocation in GLRO (dl_rtld_map). */ + _dl_protmem_init (); + __rtld_malloc_init_stubs (); /* Do not use an initializer for these members because it would @@ -2353,6 +2358,11 @@ dl_main (const ElfW(Phdr) *phdr, _dl_relocate_object might need to call `mprotect' for DT_TEXTREL. */ _dl_sysdep_start_cleanup (); + /* Most of the initialization work has happened by this point, and + it should not be necessary to make the link maps read-write after + this point. 
*/ + _dl_protmem_end (); + /* Notify the debugger all new objects are now ready to go. We must re-get the address since by now the variable might be in another object. */ r = _dl_debug_update (LM_ID_BASE); diff --git a/elf/tst-dl-protmem.c b/elf/tst-dl-protmem.c new file mode 100644 index 0000000000..66064df777 --- /dev/null +++ b/elf/tst-dl-protmem.c @@ -0,0 +1,350 @@ +/* Internal test for the protected memory allocator. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +static int do_test (void); +#include + +/* Tracking allocated memory. Allocation granularity is assumed to be + 8 bytes. */ + +/* Lowest level. Covers 65536 * 32 * 8 bytes (24 bits of address space). */ +struct level3 +{ + uint32_t bits[1 << 16]; +}; + +/* Mid-level. Covers 20 bits of address space. */ +struct level2 +{ + struct level3 *level2[1 << 20]; +}; + +/* Top level. 20 bits of address space. */ +static struct level2 *level1[1 << 20]; + +/* Byte address to index in level1. */ +static inline unsigned int +level1_index (uintptr_t u) +{ +#if UINTPTR_WIDTH > 44 + return u >> 44; +#else + return 0; +#endif +} + +/* Byte address to index in level1[N]->level2.
*/ +static inline unsigned int +level2_index (uintptr_t u) +{ + return (u >> 24) & ((1 << 20) - 1); +} + +/* Byte address to index in level1[N]->level2[M]->level3. */ +static inline unsigned int +level3_index (uintptr_t u) +{ + unsigned int a = u >> 3; /* Every 8th byte tracked. */; + return (a >> 5) & ((1 << 16) - 1); +} + +/* Mask for the bit in level3_index. */ +static inline uint32_t +level3_mask (uintptr_t u) +{ + return (uint32_t) 1U << ((u >> 3) & 31); +} + +/* Flip a bit from unset to set. Return false if the bit was already set. */ +static bool +set_unset_bit_at (void *p) +{ + uintptr_t u = (uintptr_t) p; + struct level2 *l2 = level1[level1_index (u)]; + if (l2 == NULL) + { + l2 = xmmap (NULL, sizeof (*l2), PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, -1); + level1[level1_index (u)] = l2; + } + struct level3 *l3 = l2->level2[level2_index (u)]; + if (l3 == NULL) + { + l3 = xmmap (NULL, sizeof (*l3), PROT_READ | PROT_WRITE, + MAP_PRIVATE | MAP_ANONYMOUS, -1); + l2->level2[level2_index (u)] = l3; + } + unsigned int idx = level3_index (u); + uint32_t mask = level3_mask (u); + if (l3->bits[idx] & mask) + return false; + l3->bits[idx] |= mask; + return true; +} + +/* Flip a bit from set to unset. Return false if the bit was already + cleared. */ +static bool +clear_set_bit_at (void *p) +{ + uintptr_t u = (uintptr_t) p; + struct level2 *l2 = level1[level1_index (u)]; + if (l2 == NULL) + return false; + struct level3 *l3 = l2->level2[level2_index (u)]; + if (l3 == NULL) + return false; + unsigned int idx = level3_index (u); + uint32_t mask = level3_mask (u); + if (!(l3->bits[idx] & mask)) + return false; + l3->bits[idx] &= ~mask; + return true; +} + +/* Record an allocation in the bitmap. Errors if the covered bytes + are already allocated. 
*/ +static void +record_allocate (void *p, size_t size) +{ + TEST_VERIFY_EXIT (p != NULL); + TEST_VERIFY_EXIT (size > 0); + if (((uintptr_t) p & 7) != 0) + FAIL_EXIT1 ("unaligned allocation: %p of %zu bytes", p, size); + for (size_t i = 0; i < size; i += 8) + if (!set_unset_bit_at (p + i)) + FAIL_EXIT1 ("already allocated byte %p in %zu-byte allocation at %p" + " (offset %zu)", p + i, size, p, i); +} + +/* Record a deallocation in the bitmap. Errors if the covered bytes + are not allocated. */ +static void +record_free (void *p, size_t size) +{ + TEST_VERIFY_EXIT (p != NULL); + TEST_VERIFY_EXIT (size > 0); + if (((uintptr_t) p & 7) != 0) + FAIL_EXIT1 ("unaligned free: %p of %zu bytes", p, size); + for (size_t i = 0; i < size; i += 8) + if (!clear_set_bit_at (p + i)) + FAIL_EXIT1 ("already deallocated byte %p in %zu-byte deallocation at %p" + " (offset %zu)", p + i, size, p, i); +} + +/* This hack results in a definition of struct rtld_global_ro and + related data structures. Do this after all the other header + inclusions, to minimize the impact. */ +#define SHARED +#include + +/* Create our own version of GLRO (dl_protmem). */ +static struct rtld_protmem *dl_protmem; +#undef GLRO +#define GLRO(x) x + +#define SHARED +#include +#include +#include /* Avoid direct system call. */ +#include + +/* Return the allocation bit for an address. */ +static bool +bit_at (void *p) +{ + uintptr_t u = (uintptr_t) p; + struct level2 *l2 = level1[level1_index (u)]; + if (l2 == NULL) + return false; + struct level3 *l3 = l2->level2[level2_index (u)]; + if (l3 == NULL) + return false; + unsigned int idx = level3_index (u); + uint32_t mask = level3_mask (u); + return l3->bits[idx] & mask; +} + +/* Assert that SIZE bytes at P are unallocated.
*/ +static void +check_free_chunk (void *p, size_t size) +{ + if (((uintptr_t) p & 7) != 0) + FAIL_EXIT1 ("unaligned free chunk: %p of %zu bytes", p, size); + for (size_t i = 0; i < size; i += 8) + if (bit_at (p + i)) + FAIL_EXIT1 ("allocated byte %p in free chunk at %p (%zu bytes," + " offset %zu)", p + i, p, size, i); +} + +/* Dump statistics for the allocator regions (freelist length, maximum + free allocation size). If VERBOSE, log the entire freelist. */ +static void +dump_regions (bool verbose) +{ + struct dl_protmem_state *state = _dl_protmem_state (); + for (unsigned int i = 0; i < array_length (state->regions); ++i) + { + if (verbose && state->regions[i] != NULL) + printf (" region %u at %p\n", i, state->regions[i]); + + chunk pending = state->pending_free[i]; + unsigned int count; + unsigned int max_size; + if (pending == NULL) + { + count = 0; + max_size = 0; + } + else + { + count = 1; + max_size = _dlpm_chunk_size (pending); + check_free_chunk (pending, max_size); + if (verbose) + printf (" pending free chunk %p, %u\n", pending, max_size); + } + + uintptr_t last = 0; + for (chunk c = state->freelist[i]; c != NULL; c = _dlpm_chunk_next (c)) + { + ++count; + size_t sz = _dlpm_chunk_size (c); + if (verbose) + printf (" free chunk %p, %zu\n", c, sz); + check_free_chunk (c, sz); + if (sz > max_size) + max_size = sz; + TEST_VERIFY ((uintptr_t) c > last); + last = (uintptr_t) c; + } + + if (count > 0) + { + if (verbose) + printf (" "); + else + printf (" region %u at %p: ", i, state->regions[i]); + printf ("freelist length %u, maximum size %u\n", count, max_size); + } + } +} + + +static int +do_test (void) +{ + dl_protmem = _dl_protmem_bootstrap (); + _dl_protmem_init (); + + /* Perform random allocations in a loop.
*/ + srand (1); + { + struct allocation + { + void *ptr; + size_t size; + } allocations[10007] = {}; + for (unsigned int i = 0; i < 20 * 1000; ++i) + { + struct allocation *a + = &allocations[rand () % array_length (allocations)]; + if (a->ptr == NULL) + { + a->size = 8 * ((rand() % 37) + 1); + a->ptr = _dl_protmem_allocate (a->size); + record_allocate (a->ptr, a->size); + /* Clobber the new allocation, in case some metadata still + references it. */ + memset (a->ptr, 0xcc, a->size); + } + else + { + record_free (a->ptr, a->size); + _dl_protmem_free (a->ptr, a->size); + a->ptr = NULL; + a->size = 0; + } + } + + puts ("info: after running test loop"); + dump_regions (false); + + for (unsigned int i = 0; i < array_length (allocations); ++i) + if (allocations[i].ptr != NULL) + { + record_free (allocations[i].ptr, allocations[i].size); + _dl_protmem_free (allocations[i].ptr, allocations[i].size); + } + puts ("info: after post-loop deallocations"); + dump_regions (true); + } + + /* Do a few larger allocations to show that coalescing works. Note + that the first allocation has some metadata in it, so the free + chunk is not an integral power of two. 
*/ + { + void *ptrs[50]; + for (unsigned int i = 0; i < array_length (ptrs); ++i) + { + ptrs[i] = _dl_protmem_allocate (65536); + record_allocate (ptrs[i], 65536); + } + puts ("info: after large allocations"); + dump_regions (true); + for (unsigned int i = 0; i < array_length (ptrs); ++i) + { + record_free (ptrs[i], 65536); + _dl_protmem_free (ptrs[i], 65536); + } + puts ("info: after freeing allocations"); + dump_regions (true); + + ptrs[0] = _dl_protmem_allocate (8); + record_allocate (ptrs[0], 8); + puts ("info: after dummy allocation"); + dump_regions (true); + + record_free (ptrs[0], 8); +#if __GNUC_PREREQ (11, 0) + /* Suppress invalid GCC warning with -O3 (GCC PR 110546): + error: '_dl_protmem_free' called on pointer returned from a + mismatched allocation function [-Werror=mismatched-dealloc] + note: returned from '_dl_protmem_allocate.constprop' */ + DIAG_IGNORE_NEEDS_COMMENT (11, "-Wmismatched-dealloc"); +#endif + _dl_protmem_free (ptrs[0], 8); +#if __GNUC_PREREQ (11, 0) && __OPTIMIZE__ >= 3 + DIAG_POP_NEEDS_COMMENT; +#endif + puts ("info: after dummy deallocation"); + dump_regions (true); + } + + return 0; +} diff --git a/elf/tst-relro-linkmap-mod1.c b/elf/tst-relro-linkmap-mod1.c new file mode 100644 index 0000000000..b91f16f70a --- /dev/null +++ b/elf/tst-relro-linkmap-mod1.c @@ -0,0 +1,42 @@ +/* Module with the checking function for read-only link maps. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include + +/* Export for use by the main program, to avoid copy relocations on + _r_debug. */ +struct r_debug_extended *const r_debug_extended_address + = (struct r_debug_extended *) &_r_debug; + +/* The real definition is in the main program. */ +void +check_relro_link_maps (const char *context) +{ + puts ("error: check_relro_link_maps not interposed"); + _exit (1); +} + +static void __attribute__ ((constructor)) +init (void) +{ + check_relro_link_maps ("ELF constructor (DSO)"); +} + +/* NB: destructor not checked. Memory is writable when they run. */ diff --git a/elf/tst-relro-linkmap-mod2.c b/elf/tst-relro-linkmap-mod2.c new file mode 100644 index 0000000000..f022264ffd --- /dev/null +++ b/elf/tst-relro-linkmap-mod2.c @@ -0,0 +1,2 @@ +/* Same checking as the first module, but loaded via dlopen. */ +#include "tst-relro-linkmap-mod1.c" diff --git a/elf/tst-relro-linkmap-mod3.c b/elf/tst-relro-linkmap-mod3.c new file mode 100644 index 0000000000..b2b7349200 --- /dev/null +++ b/elf/tst-relro-linkmap-mod3.c @@ -0,0 +1,2 @@ +/* No checking possible because the check_relro_link_maps function + from the main program is inaccessible after dlopen. */ diff --git a/elf/tst-relro-linkmap.c b/elf/tst-relro-linkmap.c new file mode 100644 index 0000000000..c07d2e6815 --- /dev/null +++ b/elf/tst-relro-linkmap.c @@ -0,0 +1,112 @@ +/* Verify that link maps are read-only most of the time. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. 
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include +#include +#include +#include +#include + +static int do_test (void); +#include + +/* This hack results in a definition of struct rtld_global_ro and + related data structures. Do this after all the other header + inclusions, to minimize the impact. This only works from the main + program due to tests-internal. */ +#define SHARED +#include + +/* Defined in tst-relro-linkmap-mod1.so. */ +extern struct r_debug_extended *const r_debug_extended_address; + +/* Check that link maps are read-only in all namespaces. */ +void +check_relro_link_maps (const char *context) +{ + for (struct r_debug_extended *r = r_debug_extended_address; + r != NULL; r = r->r_next) + for (struct link_map *l = (struct link_map *) r->base.r_map; + l != NULL; l = l->l_next) + { + char *ctx; + + ctx = xasprintf ("%s: link map for %s", context, l->l_name); + support_memprobe_readonly (ctx, l, sizeof (*l)); + free (ctx); + if (false) /* Link map names are currently writable. */ + { + ctx = xasprintf ("%s: link map name for %s", context, l->l_name); + support_memprobe_readonly (ctx, l->l_name, strlen (l->l_name) + 1); + free (ctx); + } + } +} + +static void __attribute__ ((constructor)) +init (void) +{ + check_relro_link_maps ("ELF constructor (main)"); +} + +static void __attribute__ ((destructor)) +deinit (void) +{ + /* _dl_fini does not make link maps writable. */ + check_relro_link_maps ("ELF destructor (main)"); +} + +static int +do_test (void) +{ + check_relro_link_maps ("initial do_test"); + + /* Avoid copy relocations. 
Do this from the main program because we + need access to internal headers. */ + { + struct rtld_global_ro *ro = xdlsym (RTLD_DEFAULT, "_rtld_global_ro"); + check_relro_link_maps ("after _rtld_global_ro"); + support_memprobe_readonly ("_rtld_global_ro", ro, sizeof (*ro)); + support_memprobe_readonly ("GLPM", ro->_dl_protmem, + sizeof (*ro->_dl_protmem)); + } + support_memprobe_readwrite ("_rtld_global", + xdlsym (RTLD_DEFAULT, "_rtld_global"), + sizeof (struct rtld_global_ro)); + check_relro_link_maps ("after _rtld_global"); + + /* This is supposed to fail. */ + TEST_VERIFY (dlopen ("tst-dlopenfailmod1.so", RTLD_LAZY) == NULL); + check_relro_link_maps ("after failed dlopen"); + + /* This should succeed. */ + void *handle = xdlopen ("tst-relro-linkmap-mod2.so", RTLD_LAZY); + check_relro_link_maps ("after successful dlopen"); + xdlclose (handle); + check_relro_link_maps ("after dlclose 1"); + + handle = xdlmopen (LM_ID_NEWLM, "tst-relro-linkmap-mod3.so", RTLD_LAZY); + check_relro_link_maps ("after dlmopen"); + xdlclose (handle); + check_relro_link_maps ("after dlclose 2"); + + return 0; +} diff --git a/include/link.h b/include/link.h index 2fddf315d4..45fbab2ae2 100644 --- a/include/link.h +++ b/include/link.h @@ -176,6 +176,9 @@ struct link_map than one namespace. */ struct link_map *l_real; + /* Allocated size of this link map. */ + size_t l_size; + /* Run-time writable fields. */ struct link_map_rw *l_rw; diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index 0ff0650cb1..d31fa1bb59 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -509,7 +509,10 @@ extern struct rtld_global _rtld_global __rtld_global_attribute__; #endif #ifdef SHARED -/* Implementation structure for the protected memory area. */ +/* Implementation structure for the protected memory area. In static + builds, the protected memory area is just regular (.data) memory, + as there is no RELRO support anyway. 
Some fields are only needed + for SHARED builds and are not included for static builds. */ struct rtld_protmem { /* Structure describing the dynamic linker itself. */ @@ -1022,6 +1025,9 @@ extern struct link_map *_dl_new_object (char *realname, const char *libname, int mode, Lmid_t nsid) attribute_hidden; +/* Deallocates the specified link map (only the link map itself). */ +void _dl_free_object (struct link_map *) attribute_hidden; + /* Relocate the given object (if it hasn't already been). SCOPE is passed to _dl_lookup_symbol in symbol lookups. If RTLD_LAZY is set in RELOC-MODE, don't relocate its PLT. */ From patchwork Sun Feb 2 21:13:57 2025 X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105883 From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 12/14] elf: Move most of the _dl_find_object data to the protected heap Date: Sun, 02 Feb 2025 22:13:57 +0100 User-Agent: Gnus/5.13 (Gnus v5.13) The heap is mostly read-only by design, so allocation padding is no longer required. The protected heap is not visible to malloc, so it's not necessary to deallocate the allocations during __libc_freeres anymore. Also put critical pointers into the protected memory area. With this change, all control data for _dl_find_object is either RELRO data, or in the protected area, or tightly constrained (the version counter is always masked using & 1 before array indexing).
--- elf/dl-find_object.c | 133 ++++++++++--------------------------- elf/dl-find_object.h | 3 - elf/dl-libc_freeres.c | 2 - sysdeps/generic/ldsodefs.h | 9 +++ 4 files changed, 45 insertions(+), 102 deletions(-) diff --git a/elf/dl-find_object.c b/elf/dl-find_object.c index d8d09ffe0b..332f6765a4 100644 --- a/elf/dl-find_object.c +++ b/elf/dl-find_object.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include #include @@ -91,8 +92,9 @@ static struct dl_find_object_internal *_dlfo_nodelete_mappings to avoid data races. The memory allocations are never deallocated, but slots used for - objects that have been dlclose'd can be reused by dlopen. The - memory can live in the regular C malloc heap. + objects that have been dlclose'd can be reused by dlopen. + Allocations come from the protected memory heap. This makes it + harder to inject DWARF data. The segments are populated from the start of the list, with the mappings with the highest address. Only if this segment is full, @@ -111,9 +113,6 @@ struct dlfo_mappings_segment initialization; read in the TM region. */ struct dlfo_mappings_segment *previous; - /* Used by __libc_freeres to deallocate malloc'ed memory. */ - void *to_free; - /* Count of array elements in use and allocated. */ size_t size; /* Read in the TM region. */ size_t allocated; @@ -121,13 +120,6 @@ struct dlfo_mappings_segment struct dl_find_object_internal objects[]; /* Read in the TM region. */ }; -/* To achieve async-signal-safety, two copies of the data structure - are used, so that a signal handler can still use this data even if - dlopen or dlclose modify the other copy. The the least significant - bit in _dlfo_loaded_mappings_version determines which array element - is the currently active region. */ -static struct dlfo_mappings_segment *_dlfo_loaded_mappings[2]; - /* Returns the number of actually used elements in all segments starting at SEG. 
*/ static inline size_t @@ -154,44 +146,15 @@ _dlfo_mappings_segment_count_allocated (struct dlfo_mappings_segment *seg) /* This is essentially an arbitrary value. dlopen allocates plenty of memory anyway, so over-allocated a bit does not hurt. Not having - many small-ish segments helps to avoid many small binary searches. - Not using a power of 2 means that we do not waste an extra page - just for the malloc header if a mapped allocation is used in the - glibc allocator. */ -enum { dlfo_mappings_initial_segment_size = 63 }; - -/* Allocate an empty segment. This used for the first ever - allocation. */ -static struct dlfo_mappings_segment * -_dlfo_mappings_segment_allocate_unpadded (size_t size) -{ - if (size < dlfo_mappings_initial_segment_size) - size = dlfo_mappings_initial_segment_size; - /* No overflow checks here because the size is a mapping count, and - struct link_map is larger than what we allocate here. */ - enum - { - element_size = sizeof ((struct dlfo_mappings_segment) {}.objects[0]) - }; - size_t to_allocate = (sizeof (struct dlfo_mappings_segment) - + size * element_size); - struct dlfo_mappings_segment *result = malloc (to_allocate); - if (result != NULL) - { - result->previous = NULL; - result->to_free = NULL; /* Minimal malloc memory cannot be freed. */ - result->size = 0; - result->allocated = size; - } - return result; -} + many small-ish segments helps to avoid many small binary searches. */ +enum { dlfo_mappings_initial_segment_size = 64 }; /* Allocate an empty segment that is at least SIZE large. PREVIOUS points to the chain of previously allocated segments and can be NULL. */ static struct dlfo_mappings_segment * _dlfo_mappings_segment_allocate (size_t size, - struct dlfo_mappings_segment * previous) + struct dlfo_mappings_segment *previous) { /* Exponential sizing policies, so that lookup approximates a binary search. 
*/ @@ -200,11 +163,10 @@ _dlfo_mappings_segment_allocate (size_t size, if (previous == NULL) minimum_growth = dlfo_mappings_initial_segment_size; else - minimum_growth = 2* previous->allocated; + minimum_growth = 2 * previous->allocated; if (size < minimum_growth) size = minimum_growth; } - enum { cache_line_size_estimate = 128 }; /* No overflow checks here because the size is a mapping count, and struct link_map is larger than what we allocate here. */ enum @@ -212,36 +174,28 @@ _dlfo_mappings_segment_allocate (size_t size, element_size = sizeof ((struct dlfo_mappings_segment) {}.objects[0]) }; size_t to_allocate = (sizeof (struct dlfo_mappings_segment) - + size * element_size - + 2 * cache_line_size_estimate); - char *ptr = malloc (to_allocate); - if (ptr == NULL) + + size * element_size); + struct dlfo_mappings_segment *result = _dl_protmem_allocate (to_allocate); + if (result == NULL) return NULL; - char *original_ptr = ptr; - /* Start and end at a (conservative) 128-byte cache line boundary. - Do not use memalign for compatibility with partially interposing - malloc implementations. */ - char *end = PTR_ALIGN_DOWN (ptr + to_allocate, cache_line_size_estimate); - ptr = PTR_ALIGN_UP (ptr, cache_line_size_estimate); - struct dlfo_mappings_segment *result - = (struct dlfo_mappings_segment *) ptr; result->previous = previous; - result->to_free = original_ptr; result->size = 0; - /* We may have obtained slightly more space if malloc happened - to provide an over-aligned pointer. */ - result->allocated = (((uintptr_t) (end - ptr) - - sizeof (struct dlfo_mappings_segment)) - / element_size); - assert (result->allocated >= size); + result->allocated = size; return result; } /* Monotonic counter for software transactional memory. The lowest - bit indicates which element of the _dlfo_loaded_mappings contains - up-to-date data. */ + bit indicates which element of the GLPM (dlfo_loaded_mappings) + contains up-to-date data. 
This achieves async-signal-safety for + _dl_find_object: a signal handler can still use the + GLPM (dlfo_loaded_mappings) data even if dlopen or dlclose + modify the other copy. */ static __atomic_wide_counter _dlfo_loaded_mappings_version; +#ifndef SHARED +struct dlfo_mappings_segment *_dlfo_loaded_mappings[2]; +#endif + /* TM version at the start of the read operation. */ static inline uint64_t _dlfo_read_start_version (void) @@ -309,7 +263,7 @@ _dlfo_read_success (uint64_t start_version) static struct dlfo_mappings_segment * _dlfo_mappings_active_segment (uint64_t start_version) { - return _dlfo_loaded_mappings[start_version & 1]; + return GLPM (dlfo_loaded_mappings)[start_version & 1]; } /* Searches PC among the address-sorted array [FIRST1, FIRST1 + @@ -518,10 +472,10 @@ _dlfo_process_initial (void) } else if (l->l_type == lt_loaded) { - if (_dlfo_loaded_mappings[0] != NULL) + if (GLPM (dlfo_loaded_mappings)[0] != NULL) /* Second pass only. */ _dl_find_object_from_map - (l, &_dlfo_loaded_mappings[0]->objects[loaded]); + (l, &GLPM (dlfo_loaded_mappings)[0]->objects[loaded]); ++loaded; } } @@ -577,13 +531,14 @@ _dl_find_object_init (void) /* Allocate the data structures. */ size_t loaded_size = _dlfo_process_initial (); - _dlfo_nodelete_mappings = malloc (_dlfo_nodelete_mappings_size - * sizeof (*_dlfo_nodelete_mappings)); + _dlfo_nodelete_mappings + = _dl_protmem_allocate (_dlfo_nodelete_mappings_size + * sizeof (*_dlfo_nodelete_mappings)); if (loaded_size > 0) - _dlfo_loaded_mappings[0] - = _dlfo_mappings_segment_allocate_unpadded (loaded_size); + GLPM (dlfo_loaded_mappings)[0] + = _dlfo_mappings_segment_allocate (loaded_size, NULL); if (_dlfo_nodelete_mappings == NULL - || (loaded_size > 0 && _dlfo_loaded_mappings[0] == NULL)) + || (loaded_size > 0 && GLPM (dlfo_loaded_mappings)[0] == NULL)) _dl_fatal_printf ("\ Fatal glibc error: cannot allocate memory for find-object data\n"); /* Fill in the data with the second call. 
*/ @@ -599,8 +554,8 @@ Fatal glibc error: cannot allocate memory for find-object data\n"); _dlfo_nodelete_mappings_end = _dlfo_nodelete_mappings[last_idx].map_end; } if (loaded_size > 0) - _dlfo_sort_mappings (_dlfo_loaded_mappings[0]->objects, - _dlfo_loaded_mappings[0]->size); + _dlfo_sort_mappings (GLPM (dlfo_loaded_mappings)[0]->objects, + GLPM (dlfo_loaded_mappings)[0]->size); } static void @@ -654,11 +609,11 @@ _dl_find_object_update_1 (struct link_map **loaded, size_t count) int active_idx = _dlfo_read_version_locked () & 1; struct dlfo_mappings_segment *current_seg - = _dlfo_loaded_mappings[active_idx]; + = GLPM (dlfo_loaded_mappings)[active_idx]; size_t current_used = _dlfo_mappings_segment_count_used (current_seg); struct dlfo_mappings_segment *target_seg - = _dlfo_loaded_mappings[!active_idx]; + = GLPM (dlfo_loaded_mappings)[!active_idx]; size_t remaining_to_add = current_used + count; /* remaining_to_add can be 0 if (current_used + count) wraps, but in practice @@ -687,7 +642,8 @@ _dl_find_object_update_1 (struct link_map **loaded, size_t count) /* The barrier ensures that a concurrent TM read or fork does not see a partially initialized segment. */ - atomic_store_release (&_dlfo_loaded_mappings[!active_idx], target_seg); + atomic_store_release (&GLPM (dlfo_loaded_mappings)[!active_idx], + target_seg); } else /* Start update cycle without allocation. */ @@ -846,20 +802,3 @@ _dl_find_object_dlclose (struct link_map *map) return; } } - -void -_dl_find_object_freeres (void) -{ - for (int idx = 0; idx < 2; ++idx) - { - for (struct dlfo_mappings_segment *seg = _dlfo_loaded_mappings[idx]; - seg != NULL; ) - { - struct dlfo_mappings_segment *previous = seg->previous; - free (seg->to_free); - seg = previous; - } - /* Stop searching in shared objects. 
*/ - _dlfo_loaded_mappings[idx] = NULL; - } -} diff --git a/elf/dl-find_object.h b/elf/dl-find_object.h index e433ff8740..cc2ad9a38f 100644 --- a/elf/dl-find_object.h +++ b/elf/dl-find_object.h @@ -135,7 +135,4 @@ bool _dl_find_object_update (struct link_map *new_map) attribute_hidden; data structures. Needs to be protected by loader write lock. */ void _dl_find_object_dlclose (struct link_map *l) attribute_hidden; -/* Called from __libc_freeres to deallocate malloc'ed memory. */ -void _dl_find_object_freeres (void) attribute_hidden; - #endif /* _DL_FIND_OBJECT_H */ diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c index 093724b765..e728f3b9fa 100644 --- a/elf/dl-libc_freeres.c +++ b/elf/dl-libc_freeres.c @@ -127,6 +127,4 @@ __rtld_libc_freeres (void) void *scope_free_list = GL(dl_scope_free_list); GL(dl_scope_free_list) = NULL; free (scope_free_list); - - _dl_find_object_freeres (); } diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index d31fa1bb59..42bee8e9ce 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -508,6 +508,8 @@ extern struct rtld_global _rtld_global __rtld_global_attribute__; # undef __rtld_global_attribute__ #endif +struct dlfo_mappings_segment; + #ifdef SHARED /* Implementation structure for the protected memory area. In static builds, the protected memory area is just regular (.data) memory, @@ -517,6 +519,13 @@ struct rtld_protmem { /* Structure describing the dynamic linker itself. */ struct link_map _dl_rtld_map; +#endif /* SHARED */ + + /* Two copies of the data structures for _dl_find_object. See + _dlfo_loaded_mappings_version in dl-find_object.c. 
*/ + EXTERN struct dlfo_mappings_segment *_dlfo_loaded_mappings[2]; + +#ifdef SHARED }; #endif /* SHARED */ From patchwork Sun Feb 2 21:14:02 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105887
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 13/14] elf: Add hash tables to speed up DT_NEEDED, dlopen lookups In-Reply-To:
Message-ID: <7918fa135a50f2b851fb05f4db060a52e4edf251.1738530302.git.fweimer@redhat.com> References: Date: Sun, 02 Feb 2025 22:14:02 +0100 User-Agent: Gnus/5.13 (Gnus v5.13) Each hash table is specific to one dlopen namespace. For convenience, it uses the GNU symbol hash function, but that choice is arbitrary. The hash tables use the protected memory allocator. The associated aliases are linked from the link map using the dual-use l_name field. See the new l_libname accessor function. This could be changed back to a dedicated field in the private link map if it is necessary to enable applications which write to l_name in a limited fashion. The alloca copy in _dl_load_cache_lookup is no longer needed because _dl_libname_allocate does not use the interposable malloc, and so cannot call back into the dynamic linker. In _dl_map_new_object, check for memory allocation failure and empty tokens during DST expansion.
This was handled implicitly before, by falling through to the fd == -1 path (memory allocation failure), or by trying to open "" (empty DST expansion). The rewritten logic in _dl_new_object avoids adding an alias to an object that is identical to the object's file name in l_name. It also special-cases the vDSO case, which is initially created without a name (the soname becomes known only afterwards). The l_soname_added field in the link map is no longer needed because duplicated additions are avoided in _dl_libname_add_alias. --- elf/Makefile | 1 + elf/dl-cache.c | 13 +- elf/dl-close.c | 14 -- elf/dl-libc_freeres.c | 13 -- elf/dl-libname.c | 282 +++++++++++++++++++++++++++++++++++++ elf/dl-libname.h | 120 ++++++++++++++++ elf/dl-load.c | 167 ++++++++-------------- elf/dl-misc.c | 18 --- elf/dl-object.c | 132 ++++++++++------- elf/dl-open.c | 11 +- elf/dl-support.c | 15 +- elf/dl-version.c | 9 +- elf/pldd-xx.c | 19 +-- elf/pldd.c | 1 + elf/rtld.c | 92 ++++-------- elf/setup-vdso.h | 20 ++- elf/sotruss-lib.c | 4 +- include/link.h | 5 +- sysdeps/generic/ldsodefs.h | 28 ++-- 19 files changed, 633 insertions(+), 331 deletions(-) create mode 100644 elf/dl-libname.c create mode 100644 elf/dl-libname.h diff --git a/elf/Makefile b/elf/Makefile index 06b1f1fae5..d285521ce5 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -63,6 +63,7 @@ dl-routines = \ dl-find_object \ dl-fini \ dl-init \ + dl-libname \ dl-load \ dl-lookup \ dl-lookup-direct \ diff --git a/elf/dl-cache.c b/elf/dl-cache.c index c9c5bf549a..2678da7109 100644 --- a/elf/dl-cache.c +++ b/elf/dl-cache.c @@ -26,6 +26,7 @@ #include <_itoa.h> #include #include +#include /* This is the starting address and the size of the mmap()ed file. */ static struct cache_file *cache; @@ -389,7 +390,7 @@ _dl_cache_libcmp (const char *p1, const char *p2) recursive dlopen and this function must take care that it does not return references to any data in the mapping. 
*/ bool -_dl_load_cache_lookup (const char *name, char **realname) +_dl_load_cache_lookup (const char *name, struct libname **realname) { /* Print a message if the loading of libs is traced. */ if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS)) @@ -500,15 +501,7 @@ _dl_load_cache_lookup (const char *name, char **realname) return true; } - /* The double copy is *required* since malloc may be interposed - and call dlopen itself whose completion would unmap the data - we are accessing. Therefore we must make the copy of the - mapping data without using malloc. */ - char *temp; - size_t best_len = strlen (best) + 1; - temp = alloca (best_len); - memcpy (temp, best, best_len); - char *copy = __strdup (temp); + struct libname *copy = _dl_libname_allocate (best); if (copy == NULL) return false; *realname = copy; diff --git a/elf/dl-close.c b/elf/dl-close.c index 4865c3560c..7e594cc1ba 100644 --- a/elf/dl-close.c +++ b/elf/dl-close.c @@ -686,20 +686,6 @@ _dl_close_worker (struct link_map *map, bool force) _dl_debug_printf ("\nfile=%s [%lu]; destroying link map\n", imap->l_name, imap->l_ns); - /* This name always is allocated. */ - free (imap->l_name); - /* Remove the list with all the names of the shared object. */ - - struct libname_list *lnp = imap->l_libname; - do - { - struct libname_list *this = lnp; - lnp = lnp->next; - if (!this->dont_free) - free (this); - } - while (lnp != NULL); - /* Remove the searchlists. */ free (imap->l_initfini); diff --git a/elf/dl-libc_freeres.c b/elf/dl-libc_freeres.c index e728f3b9fa..5ca6f8bc18 100644 --- a/elf/dl-libc_freeres.c +++ b/elf/dl-libc_freeres.c @@ -70,19 +70,6 @@ __rtld_libc_freeres (void) { for (l = GL(dl_ns)[ns]._ns_loaded; l != NULL; l = l->l_next) { - struct libname_list *lnp = l->l_libname->next; - - l->l_libname->next = NULL; - - /* Remove all additional names added to the objects. */ - while (lnp != NULL) - { - struct libname_list *old = lnp; - lnp = lnp->next; - if (! 
old->dont_free) - free (old); - } - /* Free the initfini dependency list. */ if (l->l_free_initfini) free (l->l_initfini); diff --git a/elf/dl-libname.c b/elf/dl-libname.c new file mode 100644 index 0000000000..79a1f6aa6f --- /dev/null +++ b/elf/dl-libname.c @@ -0,0 +1,282 @@ +/* Managing alias names for link names, and link map lookup by name. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include + +#include +#include +#include +#include +#include + +/* Per-namespace hash table of library names. Uses linked lists via + next_hash for collision resolution. Resized once half-full. */ +struct libname_table { uint32_t count; /* Number of +entries in the hash table. */ + uint32_t mask; /* Bucket count minus 1. */ + struct libname **buckets; /* Hash buckets. 
*/ +}; + +#ifndef SHARED +struct libname_table *_dl_libnames[DL_NNS]; +#endif + +struct libname * +_dl_libname_allocate_hash (const char *name, uint32_t hash) +{ + size_t name_len = strlen (name) + 1; + struct libname *result + = _dl_protmem_allocate (offsetof (struct libname, name) + name_len); + if (result == NULL) + return NULL; + result->map = NULL; + result->next_link_map = NULL; + result->next_hash = NULL; + result->hash = hash; + memcpy (result->name, name, name_len); + return result; +} + +struct libname * +_dl_libname_allocate (const char *name) +{ + return _dl_libname_allocate_hash (name, _dl_libname_hash (name)); +} + +void +_dl_libname_free (struct libname *ln) +{ + _dl_protmem_free (ln, + offsetof (struct libname, name) + strlen (ln->name) + 1); +} + +uint32_t +_dl_libname_hash (const char *name) +{ + return _dl_new_hash (name); +} + +/* Returns the appropriate hash chain for the name's HASH in + namespace NSID. */ +static struct libname * +_dl_libname_chain (Lmid_t nsid, uint32_t hash) +{ + struct libname_table *lt = GLPM (dl_libnames)[nsid]; + if (lt == NULL) + return NULL; + return lt->buckets[hash & lt->mask]; +} + +struct link_map * +_dl_libname_lookup_hash (Lmid_t nsid, const char *name, uint32_t hash) +{ + /* Checking l_prev and l_next verifies that the discovered alias has + been added to a namespace list. It is necessary to add aliases + to the hash table early, before updating the namespace list, so + that _dl_libname_add_alias can avoid adding duplicates. However, + during early startup, the ld.so link map is not added to the list + when the main program is loaded as part of an explicit loader + invocation. If the main program is again ld.so (a user error), + it is not loaded again, violating some core assumptions in + rtld_chain_load and setup_vdso.
For static builds, the l_prev + and l_next checks need to be disabled because the main program is + the only map if there is no vDSO (and the hash table is + initialized after the namespace list anyway). */ + for (struct libname *ln = _dl_libname_chain (nsid, hash); + ln != NULL; ln = ln->next_hash) + if (ln->hash == hash && strcmp (name, ln->name) == 0 + && (ln->map->l_faked | ln->map->l_removed) == 0 +#ifdef SHARED + && (ln->map->l_prev != NULL || ln->map->l_next != NULL) +#endif + ) + return ln->map; + return NULL; +} + +struct link_map * +_dl_lookup_map (Lmid_t nsid, const char *name) +{ + return _dl_libname_lookup_hash (nsid, name, _dl_libname_hash (name)); +} + +struct link_map * +_dl_lookup_map_unfiltered (Lmid_t nsid, const char *name) +{ + /* This is only used in dl-version.c, which may rely on l_faked + objects. The l_prev/l_next filter is not needed there because + the namespace list update has completed. */ + uint32_t hash = _dl_libname_hash (name); + for (struct libname *ln = _dl_libname_chain (nsid, hash); + ln != NULL; ln = ln->next_hash) + if (ln->hash == hash && strcmp (name, ln->name) == 0 + && (ln->map->l_removed == 0)) + return ln->map; + return NULL; +} + +int +_dl_name_match_p (const char *name, const struct link_map *map) +{ + /* An alternative implementation could use the list of names + starting at l_libname (map), but this implementation is fast even + with many aliases.
*/ + uint32_t hash = _dl_libname_hash (name); + for (struct libname *ln = _dl_libname_chain (map->l_ns, hash); + ln != NULL; ln = ln->next_hash) + if (ln->hash == hash && ln->map == map && strcmp (name, ln->name) == 0) + return true; + return false; +} + +bool +_dl_libname_table_init (Lmid_t nsid) +{ + struct libname_table *lt = GLPM (dl_libnames)[nsid]; + if (lt != NULL) + return true; + lt = _dl_protmem_allocate (sizeof (*lt)); + if (lt == NULL) + return false; + lt->count = 0; + lt->mask = 15; + size_t buckets_size = (lt->mask + 1) * sizeof (*lt->buckets); + lt->buckets = _dl_protmem_allocate (buckets_size); + if (lt->buckets == NULL) + { + _dl_protmem_free (lt, sizeof (*lt)); + return NULL; + } + memset (lt->buckets, 0, buckets_size); + GLPM (dl_libnames)[nsid] = lt; +#ifndef SHARED + /* _dl_libname_table_init is called from dlopen in the !SHARED case + to set up the hash map. The code in _dl_non_dynamic_init avoids + these allocation in case dlopen is never called. */ + _dl_libname_link_hash (l_libname (GL (dl_ns)[0]._ns_loaded)); +#endif + return true; +} + +void +_dl_libname_add_link_map (struct link_map *l, struct libname *ln) +{ + assert (ln->map == NULL); + ln->map = l; + if (l->l_name == NULL) + l->l_name = ln->name; + else + { + /* Do not override l_name. */ + struct libname *first = l_libname (l); + ln->next_link_map = first->next_link_map; + first->next_link_map = ln; + } +} + +/* Grow LT->buckets. */ +static void +_dl_libname_table_grow (struct libname_table *lt) +{ + uint32_t new_mask = lt->mask * 2 + 1; + struct libname **new_buckets; + size_t new_buckets_size = (new_mask + 1) * sizeof (*new_buckets); + + new_buckets = _dl_protmem_allocate (new_buckets_size); + if (new_buckets == NULL) + /* If the allocation fails, we can just add more bucket collisions. */ + return; + + /* Rehash. 
*/ + memset (new_buckets, 0, new_buckets_size); + for (unsigned int i = 0; i <= lt->mask; ++i) + for (struct libname *ln = lt->buckets[i]; ln != NULL; ) + { + struct libname *next = ln->next_hash; + ln->next_hash = new_buckets[ln->hash & new_mask]; + new_buckets[ln->hash & new_mask] = ln; + ln = next; + } + /* Discard old bucket array. */ + _dl_protmem_free (lt->buckets, + (lt->mask + 1) * sizeof (*lt->buckets)); + /* Switch to new bucket array. */ + lt->buckets = new_buckets; + lt->mask = new_mask; +} + +void +_dl_libname_link_hash (struct libname *lname) +{ + assert (lname->next_hash == NULL); + struct libname_table *lt = GLPM (dl_libnames)[lname->map->l_ns]; + ++lt->count; + if (lt->count * 2 > lt->mask) + _dl_libname_table_grow (lt); + + /* Add the new entry to the end. This prevents overriding the alias + of a different, already-loaded object. */ + struct libname **pln = &lt->buckets[lname->hash & lt->mask]; + while (*pln != NULL) + pln = &(*pln)->next_hash; + *pln = lname; +} + +void +_dl_libname_unlink_hash (struct libname *lname) +{ + struct libname_table *lt = GLPM (dl_libnames)[lname->map->l_ns]; + struct libname **pln = &lt->buckets[lname->hash & lt->mask]; + while (*pln != NULL) + { + if (*pln == lname) + { + *pln = lname->next_hash; + lname->next_hash = NULL; + --lt->count; + return; + } + pln = &(*pln)->next_hash; + } + + _dl_fatal_printf ("\ +Fatal glibc error: library name not found on hash chain\n"); +} + +void +_dl_libname_add_alias (struct link_map *l, const char *name) +{ + uint32_t hash = _dl_libname_hash (name); + + /* Check if the name is already present. */ + for (struct libname *ln = _dl_libname_chain (l->l_ns, hash); ln != NULL; + ln = ln->next_hash) + if (ln->hash == hash && ln->map == l && strcmp (name, ln->name) == 0) + return; + + struct libname *ln = _dl_libname_allocate_hash (name, hash); + if (ln == NULL || !
_dl_libname_table_init (l->l_ns)) + { + if (ln != NULL) + _dl_libname_free (ln); + _dl_signal_error (ENOMEM, name, NULL, N_("cannot allocate name record")); + } + _dl_libname_add_link_map (l, ln); + _dl_libname_link_hash (ln); +} diff --git a/elf/dl-libname.h b/elf/dl-libname.h new file mode 100644 index 0000000000..298ae3b3eb --- /dev/null +++ b/elf/dl-libname.h @@ -0,0 +1,120 @@ +/* Managing alias names for link names, and link map lookup by name. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#ifndef DL_LIBNAME_H +#define DL_LIBNAME_H + +#include + +/* A name or alias of a link map. */ +struct libname + { + /* The link map to which this name belongs. */ + struct link_map *map; + + /* Next alias of the same link map. */ + struct libname *next_link_map; + + /* Next library name on the same hash chain. */ + struct libname *next_hash; + + /* GNU hash of the name. See _dl_libname_hash below. */ + uint32_t hash; + + /* Null-terminated name. Must not be modified after allocation. */ + char name[]; + }; + +/* Derive the start of the alias list from the l_name field of the + link map. 
*/ +static inline struct libname * +l_libname (struct link_map *l) +{ + return (struct libname *) (l->l_name - offsetof (struct libname, name)); +} + +/* Return the user-supplied name if available, otherwise the internal + name. */ +static inline const char * +l_libname_last_alias (struct link_map *l) +{ + /* This is the internal name (typically an absolute path). */ + struct libname *ln = l_libname (l); + if (ln->next_link_map != NULL) + /* This is a user-supplied alias. The successor to ln is the + alias that was added last. */ + return ln->next_link_map->name; + else + return ln->name; +} + +/* Deallocate a library name allocated using _dl_libname_allocate + below. */ +void _dl_libname_free (struct libname *name) + attribute_hidden __nonnull ((1)); + +/* Allocate a link map alias name for NAME. The map field, + next_link_map and next_hash are set to NULL, and the hash is + computed based on NAME. */ +struct libname *_dl_libname_allocate (const char *name) + attribute_hidden __attribute_malloc__ __nonnull ((1)) + __attr_dealloc (_dl_libname_free, 1); + +/* Like _dl_libname_allocate, but uses a pre-computed HASH. */ +struct libname *_dl_libname_allocate_hash (const char *name, uint32_t hash) + attribute_hidden __attribute_malloc__ __nonnull ((1)) + __attr_dealloc (_dl_libname_free, 1); + +/* Computes the GNU hash of NAME. */ +uint32_t _dl_libname_hash (const char *name) attribute_hidden __nonnull ((1)); + +/* Looks up the NAME string in the hash table for namespace NSID, using + the pre-computed HASH (see _dl_libname_hash). Returns NULL if + NAME has not been loaded into NSID. */ +struct link_map *_dl_libname_lookup_hash (Lmid_t nsid, + const char *name, + uint32_t hash) + attribute_hidden __nonnull ((2)) __attribute__ ((warn_unused_result)); + +/* Links NAME into the alias list for L. Sets NAME->map to L, which + must be NULL originally.
*/ +void _dl_libname_add_link_map (struct link_map *l, + struct libname *name) + attribute_hidden __nonnull ((1, 2)); + +/* Initialize the hash table for NSID. Must be called at least once + before _dl_libname_link_hash. Returns false if initialization + failed (due to memory allocation failure). */ +bool _dl_libname_table_init (Lmid_t nsid) attribute_hidden; + +/* Links NAME into the hash table for NAME->map->l_ns. */ +void _dl_libname_link_hash (struct libname *name) + attribute_hidden __nonnull ((1)); + +/* Removes NAME from the hash table of NAME->map->l_ns. */ +void _dl_libname_unlink_hash (struct libname *name) + attribute_hidden __nonnull ((1)); + +/* Add an alias name to L (which must contain at least one name in + L->l_name). Raises an exception on memory allocation failure. Does + nothing if NAME is already associated with any object in L's + namespace. */ +void _dl_libname_add_alias (struct link_map *l, const char *name) + attribute_hidden __nonnull ((1, 2)); + +#endif /* DL_LIBNAME_H */ diff --git a/elf/dl-load.c b/elf/dl-load.c index d9ddd6e0a3..d42386e531 100644 --- a/elf/dl-load.c +++ b/elf/dl-load.c @@ -35,6 +35,7 @@ #include #include #include +#include /* Type for the buffer we put the ELF header and hopefully the program header. This buffer does not really have to be too large. In most @@ -377,40 +378,6 @@ expand_dynamic_string_token (struct link_map *l, const char *input) return result; } - -/* Add `name' to the list of names for a particular shared object. - `name' is expected to have been allocated with malloc and will - be freed if the shared object already has this name. Returns false if the object already had this name.
*/ -static void -add_name_to_object (struct link_map *l, const char *name) -{ - struct libname_list *lnp, *lastp; - struct libname_list *newname; - size_t name_len; - - lastp = NULL; - for (lnp = l->l_libname; lnp != NULL; lastp = lnp, lnp = lnp->next) - if (strcmp (name, lnp->name) == 0) - return; - - name_len = strlen (name) + 1; - newname = (struct libname_list *) malloc (sizeof *newname + name_len); - if (newname == NULL) - { - /* No more memory. */ - _dl_signal_error (ENOMEM, name, NULL, N_("cannot allocate name record")); - return; - } - /* The object should have a libname set from _dl_new_object. */ - assert (lastp != NULL); - - newname->name = memcpy (newname + 1, name, name_len); - newname->next = NULL; - newname->dont_free = 0; - lastp->next = newname; -} - /* Standard search directories. */ struct r_search_path_struct __rtld_search_dirs attribute_relro; @@ -921,7 +888,7 @@ static #endif struct link_map * _dl_map_object_from_fd (const char *name, const char *origname, int fd, - struct filebuf *fbp, char *realname, + struct filebuf *fbp, struct libname *realname, struct link_map *loader, int l_type, int mode, void **stack_endp, Lmid_t nsid) { @@ -959,13 +926,10 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, _dl_unmap_segments (l); if (l != NULL && l->l_origin != (char *) -1l) free ((char *) l->l_origin); - if (l != NULL && !l->l_libname->dont_free) - free (l->l_libname); if (l != NULL && l->l_phdr_allocated) free ((void *) l->l_phdr); if (l != NULL) _dl_free_object (l); - free (realname); _dl_signal_error (errval, name, NULL, errstring); } @@ -979,8 +943,8 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, /* If the name is not in the list of names for this object add it. 
*/ - free (realname); - add_name_to_object (l, name); + _dl_libname_free (realname); + _dl_libname_add_alias (l, name); return l; } @@ -1023,7 +987,7 @@ _dl_map_object_from_fd (const char *name, const char *origname, int fd, { /* We are not supposed to load the object unless it is already loaded. So return now. */ - free (realname); + _dl_libname_free (realname); __close_nocancel (fd); return NULL; } @@ -1396,18 +1360,16 @@ cannot enable executable stack as shared object requires"); /* When auditing is used the recorded names might not include the name by which the DSO is actually known. Add that as well. */ if (__glibc_unlikely (origname != NULL)) - add_name_to_object (l, origname); - - /* When we profile the SONAME might be needed for something else but - loading. Add it right away. */ - if (__glibc_unlikely (GLRO(dl_profile) != NULL) && l_soname (l) != NULL) - add_name_to_object (l, l_soname (l)); + _dl_libname_add_alias (l, origname); #else /* Audit modules only exist when linking is dynamic so ORIGNAME cannot be non-NULL. */ assert (origname == NULL); #endif + if (l_soname (l) != NULL) + _dl_libname_add_alias (l, l_soname (l)); + /* If we have newly loaded libc.so, update the namespace description. */ if (GL(dl_ns)[nsid].libc_map == NULL @@ -1495,7 +1457,8 @@ print_search_path (struct r_search_path_elem **list, static int open_verify (const char *name, int fd, struct filebuf *fbp, struct link_map *loader, - int whatcode, int mode, bool *found_other_class, bool free_name) + int whatcode, int mode, bool *found_other_class, + struct libname *free_name_on_error) { /* This is the expected ELF header. 
*/ #define ELF32_CLASS ELFCLASS32 @@ -1580,8 +1543,8 @@ open_verify (const char *name, int fd, lose:; struct dl_exception exception; _dl_exception_create (&exception, name, errstring); - if (free_name) - free ((char *) name); + if (free_name_on_error != NULL) + _dl_libname_free (free_name_on_error); __close_nocancel (fd); _dl_signal_exception (errval, &exception, NULL); } @@ -1709,7 +1672,7 @@ open_verify (const char *name, int fd, static int open_path (const char *name, size_t namelen, int mode, - struct r_search_path_struct *sps, char **realname, + struct r_search_path_struct *sps, struct libname **realname, struct filebuf *fbp, struct link_map *loader, int whatcode, bool *found_other_class) { @@ -1764,7 +1727,7 @@ open_path (const char *name, size_t namelen, int mode, _dl_debug_printf (" trying file=%s\n", buf); fd = open_verify (buf, -1, fbp, loader, whatcode, mode, - found_other_class, false); + found_other_class, NULL); if (this_dir->status[cnt] == unknown) { if (fd != -1) @@ -1818,12 +1781,9 @@ open_path (const char *name, size_t namelen, int mode, if (fd != -1) { - *realname = (char *) malloc (buflen); + *realname = _dl_libname_allocate (buf); if (*realname != NULL) - { - memcpy (*realname, buf, buflen); - return fd; - } + return fd; else { /* No memory for the name, we certainly won't be able @@ -1862,38 +1822,6 @@ open_path (const char *name, size_t namelen, int mode, return -1; } -struct link_map * -_dl_lookup_map (Lmid_t nsid, const char *name) -{ - assert (nsid >= 0); - assert (nsid < GL(dl_nns)); - - /* Look for this name among those already loaded. */ - for (struct link_map *l = GL(dl_ns)[nsid]._ns_loaded; l; l = l->l_next) - { - /* If the requested name matches the soname of a loaded object, - use that object. Elide this check for names that have not - yet been opened. 
*/ - if (__glibc_unlikely ((l->l_faked | l->l_removed) != 0)) - continue; - if (!_dl_name_match_p (name, l)) - { - if (__glibc_likely (l->l_soname_added) || l_soname (l) == NULL - || strcmp (name, l_soname (l)) != 0) - continue; - - /* We have a match on a new name -- cache it. */ - add_name_to_object (l, l_soname (l)); - l->l_soname_added = 1; - } - - /* We have a match. */ - return l; - } - - return NULL; -} - /* Map in the shared object file NAME. */ struct link_map * @@ -1902,8 +1830,7 @@ _dl_map_new_object (struct link_map *loader, const char *name, { int fd; const char *origname = NULL; - char *realname; - char *name_copy; + struct libname *realname; struct link_map *l; struct filebuf fb; @@ -2025,7 +1952,7 @@ _dl_map_new_object (struct link_map *loader, const char *name, { /* Check the list of libraries in the file /etc/ld.so.cache, for compatibility with Linux's ldconfig program. */ - char *cached; + struct libname *cached; if (!_dl_load_cache_lookup (name, &cached)) _dl_signal_error (ENOMEM, NULL, NULL, N_("cannot allocate library name")); @@ -2049,10 +1976,11 @@ _dl_map_new_object (struct link_map *loader, const char *name, do { - if (memcmp (cached, dirp, system_dirs_len[cnt]) == 0) + if (memcmp (cached->name, dirp, system_dirs_len[cnt]) + == 0) { /* The prefix matches. Don't use the entry. */ - free (cached); + _dl_libname_free (cached); cached = NULL; break; } @@ -2065,14 +1993,14 @@ _dl_map_new_object (struct link_map *loader, const char *name, if (cached != NULL) { - fd = open_verify (cached, -1, + fd = open_verify (cached->name, -1, &fb, loader ?: GL(dl_ns)[nsid]._ns_loaded, LA_SER_CONFIG, mode, &found_other_class, - false); + NULL); if (__glibc_likely (fd != -1)) realname = cached; else - free (cached); + _dl_libname_free (cached); } } } @@ -2093,21 +2021,36 @@ _dl_map_new_object (struct link_map *loader, const char *name, else { /* The path may contain dynamic string tokens. */ - realname = (loader - ? 
expand_dynamic_string_token (loader, name) - : __strdup (name)); - if (realname == NULL) - fd = -1; - else + if (loader != NULL && strchr (name, '$') != NULL) { - fd = open_verify (realname, -1, &fb, - loader ?: GL(dl_ns)[nsid]._ns_loaded, 0, mode, - &found_other_class, true); - if (__glibc_unlikely (fd == -1)) - free (realname); + char *expanded = expand_dynamic_string_token (loader, name); + if (expanded == NULL) + realname = NULL; + else if (*expanded == '\0') + { + free (expanded); + _dl_signal_error (0, name, NULL, N_("\ +empty dynamic string token substitution")); + } + else + { + realname = _dl_libname_allocate (expanded); + free (expanded); + } } - } + else + realname = _dl_libname_allocate (name); + if (realname == NULL) + _dl_signal_error (ENOMEM, name, NULL, + N_("cannot allocate library name")); + + fd = open_verify (realname->name, -1, &fb, + loader ?: GL(dl_ns)[nsid]._ns_loaded, 0, mode, + &found_other_class, realname); + if (__glibc_unlikely (fd == -1)) + _dl_libname_free (realname); + } #ifdef SHARED no_file: #endif @@ -2128,11 +2071,13 @@ _dl_map_new_object (struct link_map *loader, const char *name, static const Elf_Symndx dummy_bucket = STN_UNDEF; /* Allocate a new object map. */ - if ((name_copy = __strdup (name)) == NULL + struct libname *name_copy = _dl_libname_allocate (name); + if (name_copy == NULL || (l = _dl_new_object (name_copy, name, type, loader, mode, nsid)) == NULL) { - free (name_copy); + if (name_copy != NULL) + _dl_libname_free (name_copy); _dl_signal_error (ENOMEM, name, NULL, N_("cannot create shared object descriptor")); } diff --git a/elf/dl-misc.c b/elf/dl-misc.c index e669a2b3de..17fb0e55eb 100644 --- a/elf/dl-misc.c +++ b/elf/dl-misc.c @@ -62,24 +62,6 @@ _dl_sysdep_read_whole_file (const char *file, size_t *sizep, int prot) return result; } -/* Test whether given NAME matches any of the names of the given object. 
*/ -int -_dl_name_match_p (const char *name, const struct link_map *map) -{ - if (strcmp (name, map->l_name) == 0) - return 1; - - struct libname_list *runp = map->l_libname; - - while (runp != NULL) - if (strcmp (name, runp->name) == 0) - return 1; - else - runp = runp->next; - - return 0; -} - unsigned long int _dl_higher_prime_number (unsigned long int n) { diff --git a/elf/dl-object.c b/elf/dl-object.c index b28609fa27..a5b4775f96 100644 --- a/elf/dl-object.c +++ b/elf/dl-object.c @@ -22,6 +22,7 @@ #include #include #include +#include #include @@ -55,44 +56,31 @@ _dl_add_to_namespace_list (struct link_map *new, Lmid_t nsid) /* Allocate a `struct link_map' for a new object being loaded, and enter it into the _dl_loaded list. */ struct link_map * -_dl_new_object (char *realname, const char *libname, int type, +_dl_new_object (struct libname *realname, const char *libname, int type, struct link_map *loader, int mode, Lmid_t nsid) { #ifdef SHARED unsigned int naudit; if (__glibc_unlikely ((mode & (__RTLD_OPENEXEC | __RTLD_VDSO)) != 0)) - { - if (mode & __RTLD_OPENEXEC) - { - assert (type == lt_executable); - assert (nsid == LM_ID_BASE); - - /* Ignore the specified libname for the main executable. It is - only known with an explicit loader invocation. */ - libname = ""; - } - - /* We create the map for the executable and vDSO before we know whether - we have auditing libraries and if yes, how many. Assume the - worst. */ + /* We create the map for the executable and vDSO before we know whether + we have auditing libraries and if yes, how many. Assume the + worst. 
*/ naudit = DL_NNS; - } else naudit = GLRO (dl_naudit); #endif - size_t libname_len = strlen (libname) + 1; struct link_map *new; - struct libname_list *newname; #ifdef SHARED size_t audit_space = naudit * sizeof (struct auditstate); #else # define audit_space 0 #endif - size_t l_size = (sizeof (*new) - + sizeof (struct link_map_private *) - + sizeof (*newname) + libname_len); + if (!_dl_libname_table_init (nsid)) + return NULL; + + size_t l_size = (sizeof (*new) + sizeof (struct link_map *)); new = _dl_protmem_allocate (l_size); if (new == NULL) @@ -106,33 +94,67 @@ _dl_new_object (char *realname, const char *libname, int type, return NULL; } + new->l_ns = nsid; new->l_real = new; new->l_symbolic_searchlist.r_list = (struct link_map **) ((char *) (new + 1)); - new->l_libname = newname - = (struct libname_list *) (new->l_symbolic_searchlist.r_list + 1); - newname->name = (char *) memcpy (newname + 1, libname, libname_len); - newname->next = NULL; - newname->dont_free = 1; - - /* When we create the executable link map, or a VDSO link map, we start - with "" for the l_name. In these cases "" points to ld.so rodata - and won't get dumped during core file generation. Therefore to assist - gdb and to create more self-contained core files we adjust l_name to - point at the newly allocated copy (which will get dumped) instead of - the ld.so rodata copy. - - Furthermore, in case of explicit loader invocation, discard the - name of the main executable, to match the regular behavior, where - name of the executable is not known. */ -#ifdef SHARED - if (*realname != '\0' && (mode & __RTLD_OPENEXEC) == 0) -#else - if (*realname != '\0') -#endif - new->l_name = realname; - else - new->l_name = (char *) newname->name + libname_len - 1; + /* When creating the link map for the vDSO, there is no naming + information yet, so do not link in the names. */ + if (!(mode & __RTLD_VDSO)) + { + if ((mode & __RTLD_OPENEXEC)) + { + /* Link map for the main executable. 
*/ + if (realname->name[0] == '\0') + { + /* Not an explicit loader invocation (standard PT_INTERP + usage). Use realname directly. */ + new->l_name = realname->name; + realname->map = new; + _dl_libname_link_hash (realname); + } + else + { + /* Explicit loader invocation. Discard the file name for + compatibility with the PT_INTERP invocation. */ + struct libname *newname = _dl_libname_allocate (""); + if (newname == NULL) + { + newname_error: + free (new->l_rw); + _dl_protmem_free (new, l_size); + return NULL; + } + new->l_name = newname->name; + newname->map = new; + _dl_libname_link_hash (newname); + /* NB: realname is freed below. */ + } + } + else + { + /* Regular link map. The file name is in realname. Put it + into l_name, to help debuggers. */ + new->l_name = realname->name; + + /* There may be a different alias in libname. Store it if + it is different. */ + if (strcmp (libname, realname->name) != 0) + { + struct libname *newname = _dl_libname_allocate (libname); + if (newname == NULL) + goto newname_error; + _dl_libname_add_link_map (new, newname); + _dl_libname_link_hash (newname); + } + + /* This has to come after the potential memory allocation + failure, so that we do not have to revert these changes + on error. */ + realname->map = new; + _dl_libname_link_hash (realname); + } + } new->l_type = type; /* If we set the bit now since we know it is never used we avoid @@ -143,7 +165,6 @@ _dl_new_object (char *realname, const char *libname, int type, #if NO_TLS_OFFSET != 0 new->l_rw->l_tls_offset = NO_TLS_OFFSET; #endif - new->l_ns = nsid; #ifdef SHARED for (unsigned int cnt = 0; cnt < naudit; ++cnt) @@ -193,13 +214,13 @@ _dl_new_object (char *realname, const char *libname, int type, point of view of the kernel, the main executable is the dynamic loader, and this would lead to a computation of the wrong origin. 
*/ - if (realname[0] != '\0') + if (!(mode & __RTLD_VDSO) && realname->name[0] != '\0') { - size_t realname_len = strlen (realname) + 1; + size_t realname_len = strlen (realname->name) + 1; char *origin; char *cp; - if (realname[0] == '/') + if (realname->name[0] == '/') { /* It is an absolute path. Use it. But we have to make a copy since we strip out the trailing slash. */ @@ -249,7 +270,7 @@ _dl_new_object (char *realname, const char *libname, int type, } /* Add the real file name. */ - cp = __mempcpy (cp, realname, realname_len); + cp = __mempcpy (cp, realname->name, realname_len); /* Now remove the filename and the slash. Leave the slash if the name is something like "/foo". */ @@ -266,11 +287,22 @@ _dl_new_object (char *realname, const char *libname, int type, new->l_origin = origin; } + if ((mode & __RTLD_OPENEXEC) && realname->name[0] == '\0') + _dl_libname_free (realname); + return new; } void _dl_free_object (struct link_map *l) { + /* Deallocate the aliases of this link map. 
*/ + for (struct libname *libname = l_libname (l); libname != NULL; ) + { + _dl_libname_unlink_hash (libname); + struct libname *next = libname->next_link_map; + _dl_libname_free (libname); + libname = next; + } _dl_protmem_free (l, l->l_size); } diff --git a/elf/dl-open.c b/elf/dl-open.c index c73c44ff15..5fc019ca63 100644 --- a/elf/dl-open.c +++ b/elf/dl-open.c @@ -38,6 +38,7 @@ #include #include #include +#include #include @@ -79,7 +80,7 @@ struct dl_open_args static void __attribute__ ((noreturn)) add_to_global_resize_failure (struct link_map *new) { - _dl_signal_error (ENOMEM, new->l_libname->name, NULL, + _dl_signal_error (ENOMEM, l_libname_last_alias (new), NULL, N_ ("cannot extend global scope")); } @@ -713,7 +714,7 @@ dl_open_worker_begin (void *a) update_scopes (new); if (!_dl_find_object_update (new)) - _dl_signal_error (ENOMEM, new->l_libname->name, NULL, + _dl_signal_error (ENOMEM, l_libname_last_alias (new), NULL, N_ ("cannot allocate address lookup data")); /* FIXME: It is unclear whether the order here is correct. @@ -863,6 +864,12 @@ no more namespaces available for dlmopen()")); _dl_signal_error (EINVAL, file, NULL, N_("invalid target namespace in dlmopen()")); +#ifndef SHARED + /* This completes initialization of the hash table. */ + if (!_dl_libname_table_init (LM_ID_BASE)) + _dl_signal_error (ENOMEM, NULL, NULL, N_("failed to initialize dlopen")); +#endif + struct dl_open_args args; args.file = file; args.mode = mode; diff --git a/elf/dl-support.c b/elf/dl-support.c index aa2be3e934..a2364dc380 100644 --- a/elf/dl-support.c +++ b/elf/dl-support.c @@ -46,6 +46,7 @@ #include #include #include +#include extern char *__progname; char **_dl_argv = &__progname; /* This is checked for some error messages. */ @@ -77,6 +78,13 @@ const char *_dl_origin_path; /* Nonzero if runtime lookup should not update the .got/.plt. */ int _dl_bind_not; +/* Used to populate _dl_main_map.l_name. 
*/ +static struct libname _dl_main_map_name = + { + .hash = 0x1505, /* GNU hash of the empty string. */ + .name = { 0 }, + }; + /* A dummy link map for the executable, used by dlopen to access the global scope. We don't export any symbols ourselves, so this can be minimal. */ static struct link_map _dl_main_map = @@ -85,7 +93,6 @@ static struct link_map _dl_main_map = .l_rw = &(struct link_map_rw) { .l_tls_offset = NO_TLS_OFFSET, }, .l_real = &_dl_main_map, .l_ns = LM_ID_BASE, - .l_libname = &(struct libname_list) { .name = "", .dont_free = 1 }, .l_searchlist = { .r_list = &(struct link_map *) { &_dl_main_map }, @@ -267,6 +274,12 @@ _dl_aux_init (ElfW(auxv_t) *av) void _dl_non_dynamic_init (void) { + /* Set up of the namespace hash table is delayed until + _dl_libname_table_init is called from dlopen. But l_name should + be initialized properly even if dlopen is never called. */ + _dl_main_map_name.map = &_dl_main_map; + _dl_main_map.l_name = _dl_main_map_name.name; + _dl_main_map.l_origin = _dl_get_origin (); _dl_main_map.l_phdr = GL(dl_phdr); _dl_main_map.l_phnum = GL(dl_phnum); diff --git a/elf/dl-version.c b/elf/dl-version.c index 0fae561e55..17e035bea0 100644 --- a/elf/dl-version.c +++ b/elf/dl-version.c @@ -30,12 +30,9 @@ static inline struct link_map * __attribute ((always_inline)) find_needed (const char *name, struct link_map *map) { - struct link_map *tmap; - - for (tmap = GL(dl_ns)[map->l_ns]._ns_loaded; tmap != NULL; - tmap = tmap->l_next) - if (_dl_name_match_p (name, tmap)) - return tmap; + struct link_map *tmap = _dl_lookup_map_unfiltered (map->l_ns, name); + if (tmap != NULL) + return tmap; struct dl_exception exception; _dl_exception_create_format diff --git a/elf/pldd-xx.c b/elf/pldd-xx.c index 2210e815ca..dc8d99988d 100644 --- a/elf/pldd-xx.c +++ b/elf/pldd-xx.c @@ -33,7 +33,6 @@ struct E(link_map) EW(Addr) l_prev; EW(Addr) l_real; Lmid_t l_ns; - EW(Addr) l_libname; }; #if CLASS == __ELF_NATIVE_CLASS _Static_assert (offsetof (struct link_map, 
l_addr) @@ -45,16 +44,20 @@ _Static_assert (offsetof (struct link_map, l_next) #endif -struct E(libname_list) +struct E(libname) { - EW(Addr) name; - EW(Addr) next; + EW(Addr) map; + EW(Addr) next_link_map; + EW(Addr) next_hash; + uint32_t gnu_hash; + char name[]; }; #if CLASS == __ELF_NATIVE_CLASS -_Static_assert (offsetof (struct libname_list, name) - == offsetof (struct E(libname_list), name), "name"); -_Static_assert (offsetof (struct libname_list, next) - == offsetof (struct E(libname_list), next), "next"); +_Static_assert (offsetof (struct libname, name) + == offsetof (struct E(libname), name), "name"); +_Static_assert (offsetof (struct libname, next_link_map) + == offsetof (struct E(libname), next_link_map), + "next_link_map"); #endif struct E(r_debug) diff --git a/elf/pldd.c b/elf/pldd.c index 2831e732ed..e1ffd1b5bb 100644 --- a/elf/pldd.c +++ b/elf/pldd.c @@ -31,6 +31,7 @@ #include #include +#include #include /* Global variables. */ diff --git a/elf/rtld.c b/elf/rtld.c index 791a875cce..e9c8bd43c7 100644 --- a/elf/rtld.c +++ b/elf/rtld.c @@ -38,6 +38,7 @@ #include #include #include +#include #include #include #include @@ -387,9 +388,6 @@ struct auditstate _dl_rtld_auditstate[DL_NNS]; static void dl_main (const ElfW(Phdr) *phdr, ElfW(Word) phnum, ElfW(Addr) *user_entry, ElfW(auxv_t) *auxv); -/* These two variables cannot be moved into .data.rel.ro. */ -static struct libname_list _dl_rtld_libname; - /* Variable for statistics. */ RLTD_TIMING_DECLARE (relocate_time, static); RLTD_TIMING_DECLARE (load_time, static, attribute_relro); @@ -1166,10 +1164,9 @@ rtld_setup_main_map (struct link_map *main_map) dlopen call or DT_NEEDED entry, for something that wants to link against the dynamic linker as a shared library, will know that the shared object is already loaded. */ - _dl_rtld_libname.name = ((const char *) main_map->l_addr - + ph->p_vaddr); - /* _dl_rtld_libname.next = NULL; Already zero. 
*/ - GLPM(dl_rtld_map).l_libname = &_dl_rtld_libname; + _dl_libname_add_alias (&GLPM (dl_rtld_map), + (const char *) main_map->l_addr + + ph->p_vaddr); has_interp = true; break; @@ -1251,17 +1248,6 @@ rtld_setup_main_map (struct link_map *main_map) = (char *) main_map->l_tls_initimage + main_map->l_addr; if (! main_map->l_map_end) main_map->l_map_end = ~0; - if (! GLPM(dl_rtld_map).l_libname && GLPM(dl_rtld_map).l_name) - { - /* We were invoked directly, so the program might not have a - PT_INTERP. */ - _dl_rtld_libname.name = GLPM(dl_rtld_map).l_name; - /* _dl_rtld_libname.next = NULL; Already zero. */ - GLPM(dl_rtld_map).l_libname = &_dl_rtld_libname; - } - else - assert (GLPM(dl_rtld_map).l_libname); /* How else did we get here? */ - return has_interp; } @@ -1371,8 +1357,12 @@ dl_main (const ElfW(Phdr) *phdr, char *argv0 = NULL; char **orig_argv = _dl_argv; - /* Note the place where the dynamic linker actually came from. */ - GLPM(dl_rtld_map).l_name = rtld_progname; + /* Note the place where the dynamic linker actually came from. + This sets l_name for the dynamic linker and must lead + debuggers to the ld.so binary (so it cannot be the ABI path, + in case this copy of ld.so is not installed in the correct + place). */ + _dl_libname_add_alias (&GLPM (dl_rtld_map), rtld_progname); while (_dl_argc > 1) if (! strcmp (_dl_argv[1], "--list")) @@ -1561,8 +1551,8 @@ dl_main (const ElfW(Phdr) *phdr, { RTLD_TIMING_VAR (start); rtld_timer_start (&start); - _dl_map_object (NULL, rtld_progname, lt_executable, 0, - __RTLD_OPENEXEC, LM_ID_BASE); + _dl_map_new_object (NULL, rtld_progname, lt_executable, 0, + __RTLD_OPENEXEC, LM_ID_BASE); rtld_timer_stop (&load_time, start); } @@ -1574,11 +1564,6 @@ dl_main (const ElfW(Phdr) *phdr, phdr = main_map->l_phdr; phnum = main_map->l_phnum; - /* We overwrite here a pointer to a malloc()ed string. 
But since - the malloc() implementation used at this point is the dummy - implementations which has no real free() function it does not - makes sense to free the old string first. */ - main_map->l_name = (char *) ""; *user_entry = main_map->l_entry; /* Set bit indicating this is the main program map. */ @@ -1616,9 +1601,14 @@ dl_main (const ElfW(Phdr) *phdr, { /* Create a link_map for the executable itself. This will be what dlopen on "" returns. */ - main_map = _dl_new_object ((char *) "", "", lt_executable, NULL, - __RTLD_OPENEXEC, LM_ID_BASE); - assert (main_map != NULL); + { + struct libname *ln = _dl_libname_allocate (""); + if (ln == NULL || + (main_map = _dl_new_object (ln, "", lt_executable, NULL, + __RTLD_OPENEXEC, LM_ID_BASE)) + == NULL) + _dl_fatal_printf ("Fatal glibc error: Cannot allocate link map\n"); + } main_map->l_phdr = phdr; main_map->l_phnum = phnum; main_map->l_entry = *user_entry; @@ -1655,20 +1645,8 @@ dl_main (const ElfW(Phdr) *phdr, /* If the current libname is different from the SONAME, add the latter as well. */ - { - const char *soname = l_soname (&GLPM(dl_rtld_map)); - if (soname != NULL - && strcmp (GLPM(dl_rtld_map).l_libname->name, soname) != 0) - { - static struct libname_list newname; - newname.name = soname; - newname.next = NULL; - newname.dont_free = 1; + _dl_libname_add_alias (&GLPM (dl_rtld_map), l_soname (&GLPM (dl_rtld_map))); - assert (GLPM(dl_rtld_map).l_libname->next == NULL); - GLPM(dl_rtld_map).l_libname->next = &newname; - } - } /* The ld.so must be relocated since otherwise loading audit modules will fail since they reuse the very same ld.so. */ assert (GLPM(dl_rtld_map).l_relocated); @@ -1718,13 +1696,6 @@ dl_main (const ElfW(Phdr) *phdr, LM_ID_BASE); r->r_state = RT_CONSISTENT; - /* Put the link_map for ourselves on the chain so it can be found by - name. Note that at this point the global chain of link maps contains - exactly one element, which is pointed to by dl_loaded. */ - if (! 
GLPM(dl_rtld_map).l_name) - /* If not invoked directly, the dynamic linker shared object file was - found by the PT_INTERP name. */ - GLPM(dl_rtld_map).l_name = (char *) GLPM(dl_rtld_map).l_libname->name; GLPM(dl_rtld_map).l_type = lt_library; main_map->l_next = &GLPM(dl_rtld_map); GLPM(dl_rtld_map).l_prev = main_map; @@ -2084,16 +2055,17 @@ dl_main (const ElfW(Phdr) *phdr, l; l = l->l_next) { if (l->l_faked) /* The library was not found. */ - _dl_printf ("\t%s => not found\n", l->l_libname->name); - else if (strcmp (l->l_libname->name, l->l_name) == 0) + _dl_printf ("\t%s => not found\n", l_libname_last_alias (l)); + else if (strcmp (l_libname_last_alias (l), l->l_name) + == 0) /* Print vDSO like libraries without duplicate name. Some consumers depend of this format. */ - _dl_printf ("\t%s (0x%0*zx)\n", l->l_libname->name, + _dl_printf ("\t%s (0x%0*zx)\n", l_libname_last_alias (l), (int) sizeof l->l_map_start * 2, (size_t) l->l_map_start); else _dl_printf ("\t%s => %s (0x%0*zx)\n", - DSO_FILENAME (l->l_libname->name), + DSO_FILENAME (l_libname_last_alias (l)), DSO_FILENAME (l->l_name), (int) sizeof l->l_map_start * 2, (size_t) l->l_map_start); @@ -2272,17 +2244,7 @@ dl_main (const ElfW(Phdr) *phdr, { struct link_map *l = main_map->l_initfini[i]; - /* While we are at it, help the memory handling a bit. We have to - mark some data structures as allocated with the fake malloc() - implementation in ld.so. */ - struct libname_list *lnp = l->l_libname->next; - - while (__builtin_expect (lnp != NULL, 0)) - { - lnp->dont_free = 1; - lnp = lnp->next; - } - /* Also allocated with the fake malloc(). */ + /* Allocated with the fake malloc. */ l->l_free_initfini = 0; _dl_relocate_object (l, l->l_scope, GLRO(dl_lazy) ? 
RTLD_LAZY : 0, diff --git a/elf/setup-vdso.h b/elf/setup-vdso.h index fd5a1314bd..5260ac041c 100644 --- a/elf/setup-vdso.h +++ b/elf/setup-vdso.h @@ -29,9 +29,11 @@ setup_vdso (struct link_map *main_map __attribute__ ((unused)), better be, since it's read-only and so we couldn't relocate it). We just want our data structures to describe it as if we had just mapped and relocated it normally. */ - struct link_map *l = _dl_new_object ((char *) "", "", lt_library, NULL, - __RTLD_VDSO, LM_ID_BASE); - if (__glibc_likely (l != NULL)) + struct link_map *l = _dl_new_object (NULL, NULL, lt_library, + NULL, __RTLD_VDSO, LM_ID_BASE); + if (l == NULL) + _dl_fatal_printf ("Fatal glibc error: cannot allocate vDSO link map"); + else { l->l_phdr = ((const void *) GLRO(dl_sysinfo_dso) + GLRO(dl_sysinfo_dso)->e_phoff); @@ -75,15 +77,9 @@ setup_vdso (struct link_map *main_map __attribute__ ((unused)), l->l_local_scope[0]->r_list = &l->l_real; /* Now that we have the info handy, use the DSO image's soname - so this object can be looked up by name. */ - { - const char *dsoname = l_soname (l); - if (dsoname != NULL) - { - l->l_libname->name = dsoname; - l->l_name = (char *) dsoname; - } - } + so this object can be looked up by name. Use "" as the dummy + name. */ + _dl_libname_add_alias (l, l_soname (l) ?: ""); /* Add the vDSO to the object list. 
*/ _dl_add_to_namespace_list (l, LM_ID_BASE); diff --git a/elf/sotruss-lib.c b/elf/sotruss-lib.c index c43980cf44..3b3fb2aa45 100644 --- a/elf/sotruss-lib.c +++ b/elf/sotruss-lib.c @@ -27,7 +27,7 @@ #include #include - +#include extern const char *__progname; extern const char *__progname_full; @@ -173,7 +173,7 @@ la_objopen (struct link_map *map, Lmid_t lmid, uintptr_t *cookie) int result = 0; const char *print_name = NULL; - for (struct libname_list *l = map->l_libname; l != NULL; l = l->next) + for (struct libname *l = l_libname (map); l != NULL; l = l->next_link_map) { if (print_name == NULL || (print_name[0] == '/' && l->name[0] != '/')) print_name = l->name; diff --git a/include/link.h b/include/link.h index 45fbab2ae2..a4017f3a5d 100644 --- a/include/link.h +++ b/include/link.h @@ -54,7 +54,7 @@ extern unsigned int la_objopen (struct link_map *__map, Lmid_t __lmid, /* Some internal data structures of the dynamic linker used in the linker map. We only provide forward declarations. */ -struct libname_list; +struct libname; struct r_found_version; struct r_search_path_elem; @@ -185,7 +185,6 @@ struct link_map /* Number of the namespace this link map belongs to. */ Lmid_t l_ns; - struct libname_list *l_libname; /* Indexed pointers to dynamic section. [0,DT_NUM) are indexed by the processor-independent tags. [DT_NUM,DT_NUM+DT_THISPROCNUM) are indexed by the tag minus DT_LOPROC. @@ -257,8 +256,6 @@ struct link_map unsigned int l_map_done:1; /* of maps in _dl_close_worker. */ unsigned int l_phdr_allocated:1; /* Nonzero if the data structure pointed to by `l_phdr' is allocated. */ - unsigned int l_soname_added:1; /* Nonzero if the SONAME is for sure in - the l_libname list. */ unsigned int l_faked:1; /* Nonzero if this is a faked descriptor without associated file. 
*/ unsigned int l_need_tls_init:1; /* Nonzero if GL(dl_init_static_tls) diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index 42bee8e9ce..c8992b0661 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -228,16 +228,7 @@ struct r_strlenpair size_t len; }; - -/* A data structure for a simple single linked list of strings. */ -struct libname_list - { - const char *name; /* Name requested (before search). */ - struct libname_list *next; /* Link to next name for this object. */ - int dont_free; /* Flag whether this element should be freed - if the object is not entirely unloaded. */ - }; - +struct libname_table; /* See dl-libname.c. */ /* DSO sort algorithm to use (check dl-sort-maps.c). */ enum dso_sort_algorithm @@ -525,6 +516,9 @@ struct rtld_protmem _dlfo_loaded_mappings_version in dl-find_object.c. */ EXTERN struct dlfo_mappings_segment *_dlfo_loaded_mappings[2]; + /* Per-namespace hash tables for library name lookup. */ + EXTERN struct libname_table *_dl_libnames[DL_NNS]; + #ifdef SHARED }; #endif /* SHARED */ @@ -933,6 +927,10 @@ rtld_hidden_proto (_dl_catch_exception) struct link_map *_dl_lookup_map (Lmid_t nsid, const char *name) attribute_hidden; +/* Like _dl_lookup_map, but returns l_removed and l_fake objects as well. */ +struct link_map *_dl_lookup_map_unfiltered (Lmid_t nsid, const char *name) + attribute_hidden; + /* Open the shared object NAME and map in its segments. LOADER's DT_RPATH is used in searching for NAME. If the object is already opened, returns its existing map. */ @@ -1029,10 +1027,10 @@ extern void _dl_add_to_namespace_list (struct link_map *new, Lmid_t nsid) attribute_hidden; /* Allocate a `struct link_map' for a new object being loaded. 
*/ -extern struct link_map *_dl_new_object (char *realname, const char *libname, - int type, struct link_map *loader, - int mode, Lmid_t nsid) - attribute_hidden; +struct link_map *_dl_new_object (struct libname *realname, + const char *libname, int type, + struct link_map *loader, + int mode, Lmid_t nsid) attribute_hidden; /* Deallocates the specified link map (only the link map itself). */ void _dl_free_object (struct link_map *) attribute_hidden; @@ -1152,7 +1150,7 @@ const struct r_strlenpair *_dl_important_hwcaps (const char *prepend, attribute_hidden; /* Look up NAME in ld.so.cache. */ -bool _dl_load_cache_lookup (const char *name, char **realname) +bool _dl_load_cache_lookup (const char *name, struct libname **realname) attribute_hidden __nonnull ((1, 2)) __attribute__ ((warn_unused_result)); /* If the system does not support MAP_COPY we cannot leave the file open From patchwork Sun Feb 2 21:14:09 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Florian Weimer X-Patchwork-Id: 105886 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id 810E1385840B for ; Sun, 2 Feb 2025 21:25:30 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org 810E1385840B Authentication-Results: sourceware.org; dkim=pass (1024-bit key, unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=OOX4Wwqh X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by sourceware.org (Postfix) with ESMTP id 701803858405 for ; Sun, 2 Feb 2025 21:14:15 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 701803858405 Authentication-Results: sourceware.org; dmarc=pass (p=none dis=none) 
From: Florian Weimer To: libc-alpha@sourceware.org Subject: [PATCH v4 14/14] elf: Use memory protection keys for the protected memory allocator Message-ID: <79f918d07d20fdf287ab747ae41392ee0a8b80b9.1738530302.git.fweimer@redhat.com> Date: Sun, 02 Feb 2025 22:14:09 +0100 User-Agent: Gnus/5.13 (Gnus v5.13)

If protection keys are not supported by the system, fall back to switching permission flags using mprotect.
A complication arises on x86 because the kernel supports protection keys, but they are incompatible with dynamic linker requirements (see bug 22396). Therefore, protection key support is disabled by default on x86, but glibc.rtld.protmem=3 can still force enabling it. --- NEWS | 4 +- elf/Makefile | 10 ++ elf/dl-diagnostics.c | 2 + elf/dl-protmem.c | 143 +++++++++++++++++- elf/dl-protmem.h | 9 ++ elf/dl-tunables.list | 6 + elf/tst-dl-protmem.c | 10 ++ elf/tst-relro-linkmap-disabled-mod1.c | 46 ++++++ elf/tst-relro-linkmap-disabled-mod2.c | 2 + elf/tst-relro-linkmap-disabled.c | 64 ++++++++ elf/tst-rtld-list-tunables.exp | 1 + manual/tunables.texi | 29 ++++ nptl/pthread_create.c | 8 + sysdeps/generic/dl-protmem-pkey.h | 20 +++ sysdeps/generic/ldsodefs.h | 5 + sysdeps/unix/sysv/linux/dl-protmem-pkey.h | 23 +++ sysdeps/unix/sysv/linux/dl-sysdep.c | 2 + sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h | 26 ++++ 18 files changed, 404 insertions(+), 6 deletions(-) create mode 100644 elf/tst-relro-linkmap-disabled-mod1.c create mode 100644 elf/tst-relro-linkmap-disabled-mod2.c create mode 100644 elf/tst-relro-linkmap-disabled.c create mode 100644 sysdeps/generic/dl-protmem-pkey.h create mode 100644 sysdeps/unix/sysv/linux/dl-protmem-pkey.h create mode 100644 sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h diff --git a/NEWS b/NEWS index e2e40e141c..9e5fcb1f79 100644 --- a/NEWS +++ b/NEWS @@ -9,7 +9,9 @@ Version 2.42 Major new features: - [Add new features here] +* The dynamic linker keeps link maps and other data structures read-only + most of the time (RELRO link maps). This behavior can be controlled + by the new glibc.rtld.protmem tunable. 
Deprecated and removed features, and other changes affecting compatibility: diff --git a/elf/Makefile b/elf/Makefile index d285521ce5..9ca1c5c823 100644 --- a/elf/Makefile +++ b/elf/Makefile @@ -539,6 +539,7 @@ tests-internal += \ tst-hash-collision3 \ tst-ptrguard1 \ tst-relro-linkmap \ + tst-relro-linkmap-disabled \ tst-stackguard1 \ tst-tls-surplus \ tst-tls3 \ @@ -981,6 +982,8 @@ modules-names += \ tst-recursive-tlsmod13 \ tst-recursive-tlsmod14 \ tst-recursive-tlsmod15 \ + tst-relro-linkmap-disabled-mod1 \ + tst-relro-linkmap-disabled-mod2 \ tst-relro-linkmap-mod1 \ tst-relro-linkmap-mod2 \ tst-relro-linkmap-mod3 \ @@ -3406,3 +3409,10 @@ LDFLAGS-tst-relro-linkmap = -Wl,-E $(objpfx)tst-relro-linkmap: $(objpfx)tst-relro-linkmap-mod1.so $(objpfx)tst-relro-linkmap.out: $(objpfx)tst-dlopenfailmod1.so \ $(objpfx)tst-relro-linkmap-mod2.so $(objpfx)tst-relro-linkmap-mod3.so + +tst-relro-linkmap-disabled-ENV = GLIBC_TUNABLES=glibc.rtld.protmem=0 +$(objpfx)tst-relro-linkmap-disabled: \ + $(objpfx)tst-relro-linkmap-disabled-mod1.so +$(objpfx)tst-relro-linkmap-disabled.out: $(objpfx)tst-dlopenfailmod1.so \ + $(objpfx)tst-relro-linkmap-disabled-mod2.so \ + $(objpfx)tst-relro-linkmap-mod3.so diff --git a/elf/dl-diagnostics.c b/elf/dl-diagnostics.c index fb2cfbeeb8..049a28b1e3 100644 --- a/elf/dl-diagnostics.c +++ b/elf/dl-diagnostics.c @@ -242,6 +242,8 @@ _dl_print_diagnostics (char **environ) ("dl_hwcaps_subdirs_active", _dl_hwcaps_subdirs_active ()); _dl_diagnostics_print_labeled_value ("dl_pagesize", GLRO (dl_pagesize)); _dl_diagnostics_print_labeled_string ("dl_platform", GLRO (dl_platform)); + _dl_diagnostics_print_labeled_value ("dl_protmem_key", + (unsigned int) GLRO (dl_protmem_key)); _dl_diagnostics_print_labeled_string ("dl_profile_output", GLRO (dl_profile_output)); diff --git a/elf/dl-protmem.c b/elf/dl-protmem.c index 453657b3c2..ebcbfddf75 100644 --- a/elf/dl-protmem.c +++ b/elf/dl-protmem.c @@ -20,11 +20,17 @@ #include #include +#include #include #include 
#include +#if IS_IN (rtld) +# define TUNABLE_NAMESPACE rtld +# include +#endif + /* Nesting counter for _dl_protmem_begin/_dl_protmem_end. This is primarily required because we may have a call sequence dlopen, malloc, dlopen. Without the counter, _dl_protmem_end in the inner @@ -39,6 +45,93 @@ _dl_protmem_state (void) - offsetof (struct dl_protmem_state, protmem)); } +/* Memory protection key management. */ + +/* Allocate the protection key and if successful, apply it to the + original region. Return true if protected memory is enabled. */ +static bool +_dl_protmem_key_init (void *initial_region, size_t initial_size) +{ + GLRO (dl_protmem_key) = -1; + +#if IS_IN (rtld) /* Disabled for tst-dl-protmem. */ + int pkey_config = TUNABLE_GET (protmem, size_t, NULL); + if (pkey_config == 0) + /* Disable the protected memory allocator completely. */ + return false; + if (pkey_config == 1) + /* Force the use of mprotect. */ + return true; + +# if DL_PROTMEM_PKEY_SUPPORT + /* For tunables values 2 or 3, potentially use memory protection + keys. Do not enable protection keys for pkey_config == 2 by + default if !DL_PROTMEM_PKEY_ENABLE. Used on x86, see + sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h. */ + if (DL_PROTMEM_PKEY_ENABLE || pkey_config >= 3) + GLRO (dl_protmem_key) = pkey_alloc (0, 0); +# endif /* DL_PROTMEM_PKEY_SUPPORT */ +#endif /* IS_IN (rtld) */ + return true; +} + +/* Try to use the protection key to enable writing. Return true if + protection keys are in use. */ +static bool +_dl_protmem_key_allow (void) +{ +#if DL_PROTMEM_PKEY_SUPPORT + /* Enable write access at the beginning. */ + if (GLRO (dl_protmem_key) >= 0) + { + pkey_set (GLRO (dl_protmem_key), 0); + return true; + } +#endif + return false; +} + +/* Try to use the protection key to disable writing. Return true if + protection keys are in use. */ +static bool +_dl_protmem_key_deny (void) +{ +#if DL_PROTMEM_PKEY_SUPPORT + /* Disable write access at the end.
*/ + if (GLRO (dl_protmem_key) >= 0) + { + pkey_set (GLRO (dl_protmem_key), PKEY_DISABLE_WRITE); + return true; + } +#endif + return false; +} + +/* Creates an anonymous memory mapping as backing store. Applies the + protection key if necessary. Returns NULL on failure. */ +static void * +_dl_protmem_mmap (size_t size) +{ + void *result = __mmap (NULL, size, PROT_READ | PROT_WRITE, + MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); + if (result == MAP_FAILED) + return NULL; +#if DL_PROTMEM_PKEY_SUPPORT + if (GLRO (dl_protmem_key) >= 0) + { + if (__pkey_mprotect (result, size, PROT_READ | PROT_WRITE, + GLRO (dl_protmem_key)) != 0) + { + __munmap (result, size); + return NULL; + } + } +#endif + return result; +} + +/* Actual allocator follows. */ + /* Address of a chunk on the free list. This is an abstract pointer, never to be dereferenced explicitly. Use the accessor functions below instead. @@ -232,6 +325,41 @@ _dl_protmem_init (void) _dl_protmem_begin_count = 1; } +void +_dl_protmem_init_2 (void) +{ + struct dl_protmem_state *state = _dl_protmem_state (); + if (!_dl_protmem_key_init (state, DL_PROTMEM_INITIAL_REGION_SIZE)) + /* Make _dl_protmem_end a no-op. */ + ++_dl_protmem_begin_count; + + /* Apply the protection key to the existing memory regions. */ +#if DL_PROTMEM_PKEY_SUPPORT + if (GLRO (dl_protmem_key) >= 0) + { + size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; + for (unsigned int i = 0; i < array_length (state->regions); ++i) + if (state->regions[i] != NULL) + { + if (__pkey_mprotect (state->regions[i], region_size, + PROT_READ | PROT_WRITE, GLRO (dl_protmem_key)) + != 0) + { + if (i == 0) + /* If the first pkey_mprotect failed, we can allow + reuse of the key. Otherwise, other memory regions + still use the key. */ + __pkey_free (GLRO (dl_protmem_key)); + /* Always stop using protection keys on pkey_mprotect + failure.
*/ + GLRO (dl_protmem_key) = -1; + break; + } + } + } +#endif +} + void _dl_protmem_begin (void) { @@ -239,6 +367,9 @@ _dl_protmem_begin (void) /* Already unprotected. */ return; + if (_dl_protmem_key_allow ()) + return; + struct dl_protmem_state *state = _dl_protmem_state (); size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; for (unsigned int i = 0; i < array_length (state->regions); ++i) @@ -246,8 +377,8 @@ _dl_protmem_begin (void) { if (__mprotect (state->regions[i], region_size, PROT_READ | PROT_WRITE) != 0) - _dl_signal_error (ENOMEM, NULL, NULL, - "Cannot make protected memory writable"); + _dl_signal_error (ENOMEM, NULL, NULL, + "Cannot make protected memory writable"); region_size *= 2; } } @@ -258,6 +389,9 @@ _dl_protmem_end (void) if (--_dl_protmem_begin_count > 0) return; + if (_dl_protmem_key_deny ()) + return; + struct dl_protmem_state *state = _dl_protmem_state (); size_t region_size = DL_PROTMEM_INITIAL_REGION_SIZE; for (unsigned int i = 0; i < array_length (state->regions); ++i) @@ -347,9 +481,8 @@ _dl_protmem_allocate (size_t requested_size) { if (state->regions[i] == NULL && region_size >= requested_size) { - void *ptr = __mmap (NULL, region_size, PROT_READ | PROT_WRITE, - MAP_ANONYMOUS | MAP_PRIVATE, -1, 0); - if (ptr == MAP_FAILED) + void *ptr = _dl_protmem_mmap (region_size); + if (ptr == NULL) return NULL; state->regions[i] = ptr; if (region_size == requested_size) diff --git a/elf/dl-protmem.h b/elf/dl-protmem.h index 32182053a5..a1bab3ffdb 100644 --- a/elf/dl-protmem.h +++ b/elf/dl-protmem.h @@ -36,6 +36,10 @@ functions below. Implies the first _dl_protmem_begin call. */ void _dl_protmem_init (void) attribute_hidden; +/* Second phase of initialization. This enables configuration by + tunables. */ +void _dl_protmem_init_2 (void) attribute_hidden; + /* Frees memory allocated using _dl_protmem_allocate. The passed size must be the same that was passed to _dl_protmem_allocate. Protected memory must be writable when this function is called. 
*/ @@ -67,6 +71,11 @@ void _dl_protmem_end (void) attribute_hidden; #include +static inline void +_dl_protmem_init (void) +{ +} + static inline void * _dl_protmem_allocate (size_t size) { diff --git a/elf/dl-tunables.list b/elf/dl-tunables.list index 0b6721bc51..7310ad100d 100644 --- a/elf/dl-tunables.list +++ b/elf/dl-tunables.list @@ -141,6 +141,12 @@ glibc { maxval: 1 default: 1 } + protmem { + type: INT_32 + default: 2 + minval: 0 + maxval: 3 + } } mem { diff --git a/elf/tst-dl-protmem.c b/elf/tst-dl-protmem.c index 66064df777..99baa0a3e6 100644 --- a/elf/tst-dl-protmem.c +++ b/elf/tst-dl-protmem.c @@ -163,8 +163,17 @@ record_free (void *p, size_t size) #define SHARED #include +/* We need to make these internal functions available under their + public names. */ +#define __pkey_alloc pkey_alloc +#define __pkey_free pkey_free +#define __pkey_get pkey_get +#define __pkey_mprotect pkey_mprotect +#define __pkey_set pkey_set + /* Create our own version of GLRO (dl_protmem). */ static struct rtld_protmem *dl_protmem; +static int dl_protmem_key; #undef GLRO #define GLRO(x) x @@ -261,6 +270,7 @@ do_test (void) { dl_protmem = _dl_protmem_bootstrap (); _dl_protmem_init (); + _dl_protmem_init_2 (); /* Perform random allocations in a loop. */ srand (1); diff --git a/elf/tst-relro-linkmap-disabled-mod1.c b/elf/tst-relro-linkmap-disabled-mod1.c new file mode 100644 index 0000000000..3ad073d3e1 --- /dev/null +++ b/elf/tst-relro-linkmap-disabled-mod1.c @@ -0,0 +1,46 @@ +/* Module with the checking function for read-write link maps. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version.
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include + +/* Export for use by the main program, to avoid copy relocations on + _r_debug. */ +struct r_debug_extended *const r_debug_extended_address + = (struct r_debug_extended *) &_r_debug; + +/* The real definition is in the main program. */ +void +check_rw_link_maps (const char *context) +{ + puts ("error: check_rw_link_maps not interposed"); + _exit (1); +} + +static void __attribute__ ((constructor)) +init (void) +{ + check_rw_link_maps ("ELF constructor (DSO)"); +} + +static void __attribute__ ((destructor)) +fini (void) +{ + check_rw_link_maps ("ELF destructor (DSO)"); +} diff --git a/elf/tst-relro-linkmap-disabled-mod2.c b/elf/tst-relro-linkmap-disabled-mod2.c new file mode 100644 index 0000000000..33d2e78542 --- /dev/null +++ b/elf/tst-relro-linkmap-disabled-mod2.c @@ -0,0 +1,2 @@ +/* Same checking as the first module, but loaded via dlopen. */ +#include "tst-relro-linkmap-disabled-mod1.c" diff --git a/elf/tst-relro-linkmap-disabled.c b/elf/tst-relro-linkmap-disabled.c new file mode 100644 index 0000000000..ec1195fe44 --- /dev/null +++ b/elf/tst-relro-linkmap-disabled.c @@ -0,0 +1,64 @@ +/* Verify that link maps are writable if configured so by tunable. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version.
+ + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +#include +#include +#include +#include + +/* Defined in tst-relro-linkmap-disabled-mod1.so. */ +extern struct r_debug_extended *const r_debug_extended_address; + +/* Check that link maps are writable in all namespaces. */ +void +check_rw_link_maps (const char *context) +{ + for (struct r_debug_extended *r = r_debug_extended_address; + r != NULL; r = r->r_next) + for (struct link_map_public *l = r->base.r_map; l != NULL; l = l->l_next) + support_memprobe_readwrite (context, l, sizeof (*l)); +} + +static int +do_test (void) +{ + check_rw_link_maps ("initial"); + + /* This is supposed to fail. */ + TEST_VERIFY (dlopen ("tst-dlopenfailmod1.so", RTLD_LAZY) == NULL); + check_rw_link_maps ("after failed dlopen"); + + void *handle = xdlopen ("tst-relro-linkmap-disabled-mod2.so", RTLD_LAZY); + check_rw_link_maps ("after dlopen"); + xdlclose (handle); + check_rw_link_maps ("after dlclose"); + + /* NB: no checking inside the namespace. */ + handle = xdlmopen (LM_ID_NEWLM, "tst-relro-linkmap-mod3.so", RTLD_LAZY); + check_rw_link_maps ("after dlmopen"); + xdlclose (handle); + check_rw_link_maps ("after dlclose 2"); + + handle = xdlopen ("tst-relro-linkmap-disabled-mod2.so", RTLD_LAZY); + check_rw_link_maps ("after dlopen 2"); + /* Run the destructor during process exit.
*/ + + return 0; +} + +#include diff --git a/elf/tst-rtld-list-tunables.exp b/elf/tst-rtld-list-tunables.exp index 9f5990f340..30976e728e 100644 --- a/elf/tst-rtld-list-tunables.exp +++ b/elf/tst-rtld-list-tunables.exp @@ -16,3 +16,4 @@ glibc.rtld.enable_secure: 0 (min: 0, max: 1) glibc.rtld.execstack: 1 (min: 0, max: 1) glibc.rtld.nns: 0x4 (min: 0x1, max: 0x10) glibc.rtld.optional_static_tls: 0x200 (min: 0x0, max: 0x[f]+) +glibc.rtld.protmem: 2 (min: 0, max: 3) diff --git a/manual/tunables.texi b/manual/tunables.texi index 7f0246c789..55fe6945ca 100644 --- a/manual/tunables.texi +++ b/manual/tunables.texi @@ -71,6 +71,7 @@ glibc.pthread.mutex_spin_count: 100 (min: 0, max: 32767) glibc.rtld.optional_static_tls: 0x200 (min: 0x0, max: 0xffffffffffffffff) glibc.malloc.tcache_max: 0x0 (min: 0x0, max: 0xffffffffffffffff) glibc.malloc.check: 0 (min: 0, max: 3) +glibc.rtld.protmem: 2 (min: 0, max: 3) @end example @menu @@ -333,6 +334,34 @@ changed once allocated at process startup. The default allocation of optional static TLS is 512 bytes and is allocated in every thread. @end deftp +@deftp Tunable glibc.rtld.protmem +The dynamic linker supports various operating modes for its protected +memory allocator. The following settings are available. + +@table @code +@item 0 +The protected memory allocator is disabled. All memory remains writable +during the life-time of the process. + +@item 1 +The protected memory allocator is enabled and unconditionally uses +@code{mprotect} to switch protections on or off. + +@item 2 +The protected memory allocator is enabled and uses memory protection +keys if supported by the system, and the memory protection key +implementation provides full compatibility. This is the default. + +@item 3 +The protected memory allocator is enabled. If the system supports +memory protection keys, they are used, even if there are +incompatibilities. 
Such incompatibilities exist on x86-64 because +signal handlers disable access (including read access) to protected +memory, which means that lazy binding will not work from signal handlers +in this mode. +@end table +@end deftp + @deftp Tunable glibc.rtld.dynamic_sort Sets the algorithm to use for DSO sorting, valid values are @samp{1} and @samp{2}. For value of @samp{1}, an older O(n^3) algorithm is used, which is diff --git a/nptl/pthread_create.c b/nptl/pthread_create.c index e1033d4ee6..d0333d75ad 100644 --- a/nptl/pthread_create.c +++ b/nptl/pthread_create.c @@ -380,6 +380,14 @@ start_thread (void *arg) __libc_fatal ("Fatal glibc error: rseq registration failed\n"); } +#ifdef SHARED + /* If the dynamic linker uses memory protection keys, new threads + may have to disable access because clone may have inherited + access if called from a write-enabled code region. */ + if (GLRO (dl_protmem_key) >= 0) + __pkey_set (GLRO (dl_protmem_key), PKEY_DISABLE_WRITE); +#endif + #ifndef __ASSUME_SET_ROBUST_LIST if (__nptl_set_robust_list_avail) #endif diff --git a/sysdeps/generic/dl-protmem-pkey.h b/sysdeps/generic/dl-protmem-pkey.h new file mode 100644 index 0000000000..574bf2536c --- /dev/null +++ b/sysdeps/generic/dl-protmem-pkey.h @@ -0,0 +1,20 @@ +/* Protection key support for the protected memory allocator. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details.
+ + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* The generic implementation does not support memory protection keys. */ +#define DL_PROTMEM_PKEY_SUPPORT 0 diff --git a/sysdeps/generic/ldsodefs.h b/sysdeps/generic/ldsodefs.h index c8992b0661..7dd23a9d29 100644 --- a/sysdeps/generic/ldsodefs.h +++ b/sysdeps/generic/ldsodefs.h @@ -665,6 +665,11 @@ struct rtld_global_ro EXTERN enum dso_sort_algorithm _dl_dso_sort_algo; #ifdef SHARED + /* Memory protection key for the memory allocator regions. Used + during thread initialization, to revoke access if necessary. Set + to -1 in _dl_protmem_init if protection keys are not available. */ + EXTERN int _dl_protmem_key; + /* Pointer to the protected memory area. */ EXTERN struct rtld_protmem *_dl_protmem; diff --git a/sysdeps/unix/sysv/linux/dl-protmem-pkey.h b/sysdeps/unix/sysv/linux/dl-protmem-pkey.h new file mode 100644 index 0000000000..4b64d0caf8 --- /dev/null +++ b/sysdeps/unix/sysv/linux/dl-protmem-pkey.h @@ -0,0 +1,23 @@ +/* Protection key support for the protected memory allocator. Linux version. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* Linux supports the pkey_* interfaces. 
*/ +#define DL_PROTMEM_PKEY_SUPPORT 1 + +/* Use a protection key if pkey_alloc succeeds. */ +#define DL_PROTMEM_PKEY_ENABLE 1 diff --git a/sysdeps/unix/sysv/linux/dl-sysdep.c b/sysdeps/unix/sysv/linux/dl-sysdep.c index b746ac2644..c0d633ffcb 100644 --- a/sysdeps/unix/sysv/linux/dl-sysdep.c +++ b/sysdeps/unix/sysv/linux/dl-sysdep.c @@ -41,6 +41,7 @@ #include #include #include +#include #include #include @@ -109,6 +110,7 @@ _dl_sysdep_start (void **start_argptr, dl_hwcap_check (); __tunables_init (_environ); + _dl_protmem_init_2 (); /* Initialize DSO sorting algorithm after tunables. */ _dl_sort_maps_init (); diff --git a/sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h b/sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h new file mode 100644 index 0000000000..f5bc54280a --- /dev/null +++ b/sysdeps/unix/sysv/linux/x86/dl-protmem-pkey.h @@ -0,0 +1,26 @@ +/* Protection key support for the protected memory allocator. x86 version. + Copyright (C) 2025 Free Software Foundation, Inc. + This file is part of the GNU C Library. + + The GNU C Library is free software; you can redistribute it and/or + modify it under the terms of the GNU Lesser General Public + License as published by the Free Software Foundation; either + version 2.1 of the License, or (at your option) any later version. + + The GNU C Library is distributed in the hope that it will be useful, + but WITHOUT ANY WARRANTY; without even the implied warranty of + MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + Lesser General Public License for more details. + + You should have received a copy of the GNU Lesser General Public + License along with the GNU C Library; if not, see + . */ + +/* Linux supports the pkey_* interfaces. */ +#define DL_PROTMEM_PKEY_SUPPORT 1 + +/* Linux support is incompatible with signal handlers because the + kernel forces PKEY_DISABLE_ACCESS in signal handlers, which breaks + lazy binding and other dynamic linker features. See bug 22396 + comment 7. 
*/ +#define DL_PROTMEM_PKEY_ENABLE 0