From patchwork Sat Dec 9 00:19:01 2023
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 81770
From: "H.J. Lu" <hjl.tools@gmail.com>
To: libc-alpha@sourceware.org
Subject: [PATCH] elf: Add ELF_DYNAMIC_AFTER_RELOC to rewrite PLT
Date: Fri, 8 Dec 2023 16:19:01 -0800
Message-ID: <20231209001901.2140823-1-hjl.tools@gmail.com>
X-Mailer: git-send-email 2.43.0

Add ELF_DYNAMIC_AFTER_RELOC to allow target specific processing after
relocation.

For x86-64, add

#define DT_X86_64_PLT		(DT_LOPROC + 0)
#define DT_X86_64_PLTSZ		(DT_LOPROC + 1)
#define DT_X86_64_PLTENT	(DT_LOPROC + 3)

1. DT_X86_64_PLT: The address of the procedure linkage table.
2. DT_X86_64_PLTSZ: The total size, in bytes, of the procedure linkage
   table.
3. DT_X86_64_PLTENT: The size, in bytes, of a procedure linkage table
   entry.

The r_addend field of each R_X86_64_JUMP_SLOT relocation is set to the
memory offset of the indirect branch instruction in its PLT entry.

Define ELF_DYNAMIC_AFTER_RELOC for x86-64 to rewrite the PLT section
with direct branches after relocation when lazy binding is disabled.

PLT rewrite is disabled by default.  Set

$ GLIBC_TUNABLES=glibc.cpu.x86_plt_rewrite=1

to enable PLT rewrite at run-time.
---
 elf/dynamic-link.h            |   5 +
 elf/elf.h                     |   5 +
 elf/tst-glibcelf.py           |   1 +
 scripts/glibcelf.py           |   4 +
 sysdeps/x86/cet-control.h     |  14 ++
 sysdeps/x86/cpu-features.c    |  10 ++
 sysdeps/x86/dl-procruntime.c  |   1 +
 sysdeps/x86/dl-tunables.list  |   5 +
 sysdeps/x86_64/dl-dtprocnum.h |  21 +++
 sysdeps/x86_64/dl-machine.h   | 236 +++++++++++++++++++++++++++++++++-
 sysdeps/x86_64/link_map.h     |  22 ++++
 11 files changed, 323 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/x86_64/dl-dtprocnum.h
 create mode 100644 sysdeps/x86_64/link_map.h

diff --git a/elf/dynamic-link.h b/elf/dynamic-link.h
index e7f755fc75..5351671044 100644
--- a/elf/dynamic-link.h
+++ b/elf/dynamic-link.h
@@ -177,6 +177,10 @@ elf_machine_lazy_rel (struct link_map *map, struct r_scope_elem *scope[],
       }								      \
   } while (0);

+# ifndef ELF_DYNAMIC_AFTER_RELOC
+#  define ELF_DYNAMIC_AFTER_RELOC(map, lazy)
+# endif
+
 /* This can't just be an inline function because GCC is too dumb
    to inline functions containing inlines themselves.  */
 # ifdef RTLD_BOOTSTRAP
@@ -192,6 +196,7 @@ elf_machine_lazy_rel (struct link_map *map, struct r_scope_elem *scope[],
     ELF_DYNAMIC_DO_RELR (map);					      \
     ELF_DYNAMIC_DO_REL ((map), (scope), edr_lazy, skip_ifunc);	      \
     ELF_DYNAMIC_DO_RELA ((map), (scope), edr_lazy, skip_ifunc);	      \
+    ELF_DYNAMIC_AFTER_RELOC ((map), (edr_lazy));		      \
   } while (0)

 #endif
diff --git a/elf/elf.h b/elf/elf.h
index 5c1c1972d1..eda4802f56 100644
--- a/elf/elf.h
+++ b/elf/elf.h
@@ -3639,6 +3639,11 @@ enum
 /* x86-64 sh_type values.  */
 #define SHT_X86_64_UNWIND	0x70000001 /* Unwind information.  */

+/* x86-64 d_tag values.  */
+#define DT_X86_64_PLT		(DT_LOPROC + 0)
+#define DT_X86_64_PLTSZ		(DT_LOPROC + 1)
+#define DT_X86_64_PLTENT	(DT_LOPROC + 3)
+#define DT_X86_64_NUM		4

 /* AM33 relocations.  */
 #define R_MN10300_NONE		0	/* No reloc.
 */
diff --git a/elf/tst-glibcelf.py b/elf/tst-glibcelf.py
index 6142ca28ae..52293f4adf 100644
--- a/elf/tst-glibcelf.py
+++ b/elf/tst-glibcelf.py
@@ -187,6 +187,7 @@ DT_VALNUM
 DT_VALRNGHI
 DT_VALRNGLO
 DT_VERSIONTAGNUM
+DT_X86_64_NUM
 ELFCLASSNUM
 ELFDATANUM
 EM_NUM
diff --git a/scripts/glibcelf.py b/scripts/glibcelf.py
index b52e83d613..3a21e25201 100644
--- a/scripts/glibcelf.py
+++ b/scripts/glibcelf.py
@@ -439,6 +439,8 @@ class DtRISCV(Dt):
     """Supplemental DT_* constants for EM_RISCV."""
 class DtSPARC(Dt):
     """Supplemental DT_* constants for EM_SPARC."""
+class DtX86_64(Dt):
+    """Supplemental DT_* constants for EM_X86_64."""

 _dt_skip = '''
 DT_ENCODING DT_PROCNUM
 DT_ADDRRNGLO DT_ADDRRNGHI DT_ADDRNUM
@@ -451,6 +453,7 @@ DT_MIPS_NUM
 DT_PPC_NUM
 DT_PPC64_NUM
 DT_SPARC_NUM
+DT_X86_64_NUM
 '''.strip().split()

 _register_elf_h(DtAARCH64, prefix='DT_AARCH64_', skip=_dt_skip, parent=Dt)
 _register_elf_h(DtALPHA, prefix='DT_ALPHA_', skip=_dt_skip, parent=Dt)
@@ -461,6 +464,7 @@ _register_elf_h(DtPPC, prefix='DT_PPC_', skip=_dt_skip, parent=Dt)
 _register_elf_h(DtPPC64, prefix='DT_PPC64_', skip=_dt_skip, parent=Dt)
 _register_elf_h(DtRISCV, prefix='DT_RISCV_', skip=_dt_skip, parent=Dt)
 _register_elf_h(DtSPARC, prefix='DT_SPARC_', skip=_dt_skip, parent=Dt)
+_register_elf_h(DtX86_64, prefix='DT_X86_64_', skip=_dt_skip, parent=Dt)
 _register_elf_h(Dt, skip=_dt_skip, ranges=True)
 del _dt_skip
diff --git a/sysdeps/x86/cet-control.h b/sysdeps/x86/cet-control.h
index 3bd00019e8..77f97830da 100644
--- a/sysdeps/x86/cet-control.h
+++ b/sysdeps/x86/cet-control.h
@@ -32,10 +32,24 @@ enum dl_x86_cet_control
   cet_permissive
 };

+/* PLT rewrite control.  */
+enum dl_x86_plt_rewrite_control
+{
+  /* No PLT rewrite.  */
+  plt_rewrite_none,
+  /* PLT rewrite is enabled at run-time.  */
+  plt_rewrite_enabled,
+  /* Rewrite PLT with JMP at run-time.  */
+  plt_rewrite_jmp,
+  /* Rewrite PLT with JMPABS at run-time.
     */
+  plt_rewrite_jmpabs
+};
+
 struct dl_x86_feature_control
 {
   enum dl_x86_cet_control ibt : 2;
   enum dl_x86_cet_control shstk : 2;
+  enum dl_x86_plt_rewrite_control plt_rewrite : 2;
 };

 #endif /* cet-control.h */
diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 0bf923d48b..ed18522f5a 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -27,6 +27,13 @@
 extern void TUNABLE_CALLBACK (set_hwcaps) (tunable_val_t *)
   attribute_hidden;

+static void
+TUNABLE_CALLBACK (set_plt_rewrite) (tunable_val_t *valp)
+{
+  if (valp->numval)
+    GL(dl_x86_feature_control).plt_rewrite = plt_rewrite_enabled;
+}
+
 #ifdef __LP64__
 static void
 TUNABLE_CALLBACK (set_prefer_map_32bit_exec) (tunable_val_t *valp)
@@ -996,6 +1003,9 @@ no_cpuid:
   TUNABLE_GET (hwcaps, tunable_val_t *, TUNABLE_CALLBACK (set_hwcaps));

+  TUNABLE_GET (x86_plt_rewrite, tunable_val_t *,
+	       TUNABLE_CALLBACK (set_plt_rewrite));
+
 #ifdef __LP64__
   TUNABLE_GET (prefer_map_32bit_exec, tunable_val_t *,
 	       TUNABLE_CALLBACK (set_prefer_map_32bit_exec));
diff --git a/sysdeps/x86/dl-procruntime.c b/sysdeps/x86/dl-procruntime.c
index 2fb682ded3..03a612f3f3 100644
--- a/sysdeps/x86/dl-procruntime.c
+++ b/sysdeps/x86/dl-procruntime.c
@@ -67,6 +67,7 @@ PROCINFO_CLASS struct dl_x86_feature_control _dl_x86_feature_control
 = {
     .ibt = DEFAULT_DL_X86_CET_CONTROL,
     .shstk = DEFAULT_DL_X86_CET_CONTROL,
+    .plt_rewrite = plt_rewrite_none,
   }
 # endif
 # if !defined SHARED || defined PROCINFO_DECL
diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
index feb7004036..bfbd1d7770 100644
--- a/sysdeps/x86/dl-tunables.list
+++ b/sysdeps/x86/dl-tunables.list
@@ -66,5 +66,10 @@ glibc {
     x86_shared_cache_size {
       type: SIZE_T
     }
+    x86_plt_rewrite {
+      type: INT_32
+      minval: 0
+      maxval: 1
+    }
   }
 }
diff --git a/sysdeps/x86_64/dl-dtprocnum.h b/sysdeps/x86_64/dl-dtprocnum.h
new file mode 100644
index 0000000000..f35341ab1f
--- /dev/null
+++ b/sysdeps/x86_64/dl-dtprocnum.h
@@ -0,0 +1,21 @@
+/* Configuration
+   of lookup functions.  x86-64 version.
+   Copyright (C) 2022 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+/* Number of extra dynamic section entries for this architecture.  By
+   default there are none.  */
+#define DT_THISPROCNUM	DT_X86_64_NUM
diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h
index 581a2f1a9e..4a6bb1cf05 100644
--- a/sysdeps/x86_64/dl-machine.h
+++ b/sysdeps/x86_64/dl-machine.h
@@ -30,6 +30,9 @@
 #include
 #include

+/* Translate a processor specific dynamic tag to the index in l_info array.  */
+#define DT_X86_64(x) (DT_X86_64_##x - DT_LOPROC + DT_NUM)
+
 /* Return nonzero iff ELF header is compatible with the running host.
    */
 static inline int __attribute__ ((unused))
 elf_machine_matches_host (const ElfW(Ehdr) *ehdr)
@@ -304,8 +307,9 @@ and creates an unsatisfiable circular dependency.\n",

       switch (r_type)
 	{
-	case R_X86_64_GLOB_DAT:
 	case R_X86_64_JUMP_SLOT:
+	  map->l_has_jump_slot_reloc = true;
+	case R_X86_64_GLOB_DAT:
 	  *reloc_addr = value;
 	  break;
@@ -541,3 +545,233 @@ elf_machine_lazy_rel (struct link_map *map, struct r_scope_elem *scope[],
     }
 }
 #endif /* RESOLVE_MAP */
+
+#if !defined ELF_DYNAMIC_AFTER_RELOC && !defined RTLD_BOOTSTRAP \
+    && defined SHARED
+# define ELF_DYNAMIC_AFTER_RELOC(map, lazy) \
+  x86_64_dynamic_after_reloc (map, (lazy))
+
+static const char *
+x86_64_reloc_symbol_name (struct link_map *map, const ElfW(Rela) *reloc)
+{
+  const ElfW(Sym) *const symtab
+    = (const void *) map->l_info[DT_SYMTAB]->d_un.d_ptr;
+  const ElfW(Sym) *const refsym = &symtab[ELFW (R_SYM) (reloc->r_info)];
+  const char *strtab = (const char *) map->l_info[DT_STRTAB]->d_un.d_ptr;
+  return strtab + refsym->st_name;
+}
+
+static void
+x86_64_rewrite_plt (struct link_map *map, ElfW(Addr) plt_rewrite,
+		    ElfW(Addr) plt_aligned)
+{
+  ElfW(Addr) plt_rewrite_bias = plt_rewrite - plt_aligned;
+  ElfW(Addr) l_addr = map->l_addr;
+  ElfW(Addr) pltent = map->l_info[DT_X86_64 (PLTENT)]->d_un.d_val;
+  ElfW(Addr) start = map->l_info[DT_JMPREL]->d_un.d_ptr;
+  ElfW(Addr) size = map->l_info[DT_PLTRELSZ]->d_un.d_val;
+  const ElfW(Rela) *reloc = (const void *) start;
+  const ElfW(Rela) *reloc_end = (const void *) (start + size);
+
+  unsigned int feature_1 = THREAD_GETMEM (THREAD_SELF,
+					  header.feature_1);
+  bool ibt_enabled_p
+    = (feature_1 & GNU_PROPERTY_X86_FEATURE_1_IBT) != 0;
+
+  if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES))
+    _dl_debug_printf ("\nchanging PLT for '%s' to direct branch\n",
+		      DSO_FILENAME (map->l_name));
+
+  for (; reloc < reloc_end; reloc++)
+    if (ELFW(R_TYPE) (reloc->r_info) == R_X86_64_JUMP_SLOT)
+      {
+	/* Get the value from the GOT entry.
	   */
+	ElfW(Addr) value = *(ElfW(Addr) *) (l_addr + reloc->r_offset);
+
+	/* Get the corresponding PLT entry from r_addend.  */
+	ElfW(Addr) branch_start = l_addr + reloc->r_addend;
+	/* Skip ENDBR64 if IBT isn't enabled.  */
+	if (!ibt_enabled_p)
+	  branch_start = ALIGN_DOWN (branch_start, pltent);
+	/* Get the displacement from the branch target.  */
+	ElfW(Addr) disp = value - branch_start - 5;
+	ElfW(Addr) plt_end;
+	ElfW(Addr) pad;
+
+	branch_start += plt_rewrite_bias;
+	plt_end = (branch_start & -pltent) + pltent;
+
+	/* Update the PLT entry.  */
+	if ((disp + 0x80000000ULL) <= 0xffffffffULL)
+	  {
+	    /* If the target branch can be reached with a direct branch,
+	       rewrite the PLT entry with a direct branch.  */
+	    if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_BINDINGS))
+	      {
+		const char *sym_name = x86_64_reloc_symbol_name (map,
+								 reloc);
+		_dl_debug_printf ("changing '%s' PLT for '%s' to "
+				  "direct branch\n", sym_name,
+				  DSO_FILENAME (map->l_name));
+	      }
+
+	    pad = branch_start + 5;
+
+	    if (__glibc_unlikely (pad > plt_end))
+	      {
+		if (__glibc_unlikely (GLRO(dl_debug_mask)
+				      & DL_DEBUG_BINDINGS))
+		  {
+		    const char *sym_name
+		      = x86_64_reloc_symbol_name (map, reloc);
+		    _dl_debug_printf ("\ninvalid r_addend of "
+				      "R_X86_64_JUMP_SLOT against '%s' "
+				      "in '%s'\n", sym_name,
+				      DSO_FILENAME (map->l_name));
+		  }
+
+		continue;
+	      }
+
+	    /* Write out direct branch.  */
+	    *(uint8_t *) branch_start = 0xe9;
+	    *((uint32_t *) (branch_start + 1)) = disp;
+	  }
+	else
+	  {
+	    if (GL(dl_x86_feature_control).plt_rewrite
+		!= plt_rewrite_jmpabs)
+	      continue;
+
+	    pad = branch_start + 11;
+
+	    if (pad > plt_end)
+	      continue;
+
+	    /* Rewrite the PLT entry with JMPABS.  */
+	    if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_BINDINGS))
+	      {
+		const char *sym_name = x86_64_reloc_symbol_name (map,
+								 reloc);
+		_dl_debug_printf ("changing '%s' PLT for '%s' to JMPABS\n",
+				  sym_name, DSO_FILENAME (map->l_name));
+	      }
+
+	    /* "jmpabs $target" for 64-bit displacement.
	       */
+	    *(uint8_t *) (branch_start + 0) = 0xd5;
+	    *(uint8_t *) (branch_start + 1) = 0x0;
+	    *(uint8_t *) (branch_start + 2) = 0xa1;
+	    *(uint64_t *) (branch_start + 3) = value;
+	  }
+
+	/* Fill the unused part of the PLT entry with INT3.  */
+	for (; pad < plt_end; pad++)
+	  *(uint8_t *) pad = 0xcc;
+      }
+}
+
+static inline void
+x86_64_rewrite_plt_in_place (struct link_map *map)
+{
+  /* Adjust DT_X86_64_PLT address and DT_X86_64_PLTSZ values.  */
+  ElfW(Addr) plt = (map->l_info[DT_X86_64 (PLT)]->d_un.d_ptr
+		    + map->l_addr);
+  size_t pagesize = GLRO(dl_pagesize);
+  ElfW(Addr) plt_aligned = ALIGN_DOWN (plt, pagesize);
+  size_t pltsz = (map->l_info[DT_X86_64 (PLTSZ)]->d_un.d_val
+		  + plt - plt_aligned);
+
+  if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES))
+    _dl_debug_printf ("\nchanging PLT in '%s' to writable\n",
+		      DSO_FILENAME (map->l_name));
+
+  if (__glibc_unlikely (__mprotect ((void *) plt_aligned, pltsz,
+				    PROT_WRITE | PROT_READ) < 0))
+    {
+      if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES))
+	_dl_debug_printf ("\nfailed to change PLT in '%s' to writable\n",
+			  DSO_FILENAME (map->l_name));
+      return;
+    }
+
+  x86_64_rewrite_plt (map, plt_aligned, plt_aligned);
+
+  if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_FILES))
+    _dl_debug_printf ("\nchanging PLT in '%s' back to read-only\n",
+		      DSO_FILENAME (map->l_name));
+
+  if (__glibc_unlikely (__mprotect ((void *) plt_aligned, pltsz,
+				    PROT_EXEC | PROT_READ) < 0))
+    _dl_signal_error (0, DSO_FILENAME (map->l_name), NULL,
+		      "failed to change PLT back to read-only");
+}
+
+/* Rewrite PLT entries to direct branch if possible.  */
+
+static inline void
+x86_64_dynamic_after_reloc (struct link_map *map, int lazy)
+{
+  /* Ignore DT_X86_64_PLT if the lazy binding is enabled.  */
+  if (lazy)
+    return;
+
+  if (__glibc_likely (map->l_info[DT_X86_64 (PLT)] == NULL))
+    return;
+
+  /* Ignore DT_X86_64_PLT if there is no R_X86_64_JUMP_SLOT.
     */
+  if (!map->l_has_jump_slot_reloc)
+    return;
+
+  /* Ignore DT_X86_64_PLT on ld.so to avoid changing its own PLT.  */
+  if (map == &GL(dl_rtld_map) || map->l_real == &GL(dl_rtld_map))
+    return;
+
+  /* Ignore DT_X86_64_PLT if
+     1. DT_JMPREL isn't available or its value is 0.
+     2. DT_PLTRELSZ is 0.
+     3. DT_X86_64_PLTSZ isn't available or its value is 0.
+     4. DT_X86_64_PLTENT isn't available or its value is smaller
+	than 16 bytes.  */
+  if (map->l_info[DT_JMPREL] == NULL
+      || map->l_info[DT_JMPREL]->d_un.d_ptr == 0
+      || map->l_info[DT_PLTRELSZ]->d_un.d_val == 0
+      || map->l_info[DT_X86_64 (PLTSZ)] == NULL
+      || map->l_info[DT_X86_64 (PLTSZ)]->d_un.d_val == 0
+      || map->l_info[DT_X86_64 (PLTENT)] == NULL
+      || map->l_info[DT_X86_64 (PLTENT)]->d_un.d_val < 16)
+    return;
+
+  if (GL(dl_x86_feature_control).plt_rewrite == plt_rewrite_enabled)
+    {
+      /* PLT rewrite is enabled.  Check if mprotect works.  */
+      void *plt = __mmap (NULL, 4096, PROT_READ | PROT_WRITE,
+			  MAP_PRIVATE | MAP_ANONYMOUS,
+			  -1, 0);
+      if (__glibc_unlikely (plt == MAP_FAILED))
+	GL(dl_x86_feature_control).plt_rewrite = plt_rewrite_none;
+      else
+	{
+	  *(int32_t *) plt = -1;
+
+	  /* If the memory can be changed to PROT_EXEC | PROT_READ,
+	     rewrite PLT.  */
+	  if (__mprotect (plt, 4096, PROT_EXEC | PROT_READ) == 0)
+	    /* Use JMPABS on APX processors.  */
+	    GL(dl_x86_feature_control).plt_rewrite
+	      = (CPU_FEATURE_PRESENT_P (__get_cpu_features (), APX_F)
+		 ? plt_rewrite_jmpabs
+		 : plt_rewrite_jmp);
+	  else
+	    GL(dl_x86_feature_control).plt_rewrite = plt_rewrite_none;
+
+	  __munmap (plt, 4096);
+	}
+    }
+
+  /* Ignore DT_X86_64_PLT if PLT rewrite isn't enabled.  */
+  if (GL(dl_x86_feature_control).plt_rewrite == plt_rewrite_none)
+    return;
+
+  x86_64_rewrite_plt_in_place (map);
+}
+#endif
diff --git a/sysdeps/x86_64/link_map.h b/sysdeps/x86_64/link_map.h
new file mode 100644
index 0000000000..ddb8e78077
--- /dev/null
+++ b/sysdeps/x86_64/link_map.h
@@ -0,0 +1,22 @@
+/* Additional fields in struct link_map.  x86-64 version.
+   Copyright (C) 2023 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <https://www.gnu.org/licenses/>.  */
+
+/* Has R_X86_64_JUMP_SLOT relocation.  */
+bool l_has_jump_slot_reloc;
+
+#include