From patchwork Wed Dec 30 06:59:15 2020
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 41598
Date: Tue, 29 Dec 2020 22:59:15 -0800
X-Mailer: git-send-email 2.29.2.729.g45daf8777d-goog
Subject: [PATCH v6 3/3] arm64: pac: Optimize kernel entry/exit key installation code paths
To: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
 Vincenzo Frascino, Dave Martin, Szabolcs Nagy, Florian Weimer
From: Peter Collingbourne via Libc-alpha
Reply-To: Peter Collingbourne
Cc: libc-alpha@sourceware.org,
 Peter Collingbourne, Andrey Konovalov, Kevin Brodsky,
 linux-api@vger.kernel.org, Will Deacon, Linux ARM

The kernel does not use any keys besides IA so we don't need to
install IB/DA/DB/GA on kernel exit if we arrange to install them
on task switch instead, which we can expect to happen an order of
magnitude less often.

Furthermore we can avoid installing the user IA in the case where the
user task has IA disabled and just leave the kernel IA installed. This
also lets us avoid needing to install IA on kernel entry.

On an Apple M1 under a hypervisor, the overhead of kernel entry/exit
has been measured to be reduced by 15.6ns in the case where IA is
enabled, and 31.9ns in the case where IA is disabled.

Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/Ieddf6b580d23c9e0bed45a822dabe72d2ffc9a8e
---
 arch/arm64/include/asm/asm_pointer_auth.h | 20 +------------
 arch/arm64/include/asm/pointer_auth.h     | 36 ++++++++++++++++++-----
 arch/arm64/kernel/asm-offsets.c           |  4 ---
 arch/arm64/kernel/entry.S                 | 33 ++++++++++++---------
 arch/arm64/kernel/pointer_auth.c          |  1 +
 arch/arm64/kernel/process.c               |  1 +
 arch/arm64/kernel/suspend.c               |  3 +-
 7 files changed, 53 insertions(+), 45 deletions(-)

diff --git a/arch/arm64/include/asm/asm_pointer_auth.h b/arch/arm64/include/asm/asm_pointer_auth.h
index 52dead2a8640..8ca2dc0661ee 100644
--- a/arch/arm64/include/asm/asm_pointer_auth.h
+++ b/arch/arm64/include/asm/asm_pointer_auth.h
@@ -13,30 +13,12 @@
  * so use the base value of ldp as thread.keys_user and offset as
  * thread.keys_user.ap*.
  */
-	.macro ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
+	.macro __ptrauth_keys_install_user tsk, tmp1, tmp2, tmp3
 	mov	\tmp1, #THREAD_KEYS_USER
 	add	\tmp1, \tsk, \tmp1
-alternative_if_not ARM64_HAS_ADDRESS_AUTH
-	b	.Laddr_auth_skip_\@
-alternative_else_nop_endif
 	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIA]
 	msr_s	SYS_APIAKEYLO_EL1, \tmp2
 	msr_s	SYS_APIAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APIB]
-	msr_s	SYS_APIBKEYLO_EL1, \tmp2
-	msr_s	SYS_APIBKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDA]
-	msr_s	SYS_APDAKEYLO_EL1, \tmp2
-	msr_s	SYS_APDAKEYHI_EL1, \tmp3
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APDB]
-	msr_s	SYS_APDBKEYLO_EL1, \tmp2
-	msr_s	SYS_APDBKEYHI_EL1, \tmp3
-.Laddr_auth_skip_\@:
-alternative_if ARM64_HAS_GENERIC_AUTH
-	ldp	\tmp2, \tmp3, [\tmp1, #PTRAUTH_USER_KEY_APGA]
-	msr_s	SYS_APGAKEYLO_EL1, \tmp2
-	msr_s	SYS_APGAKEYHI_EL1, \tmp3
-alternative_else_nop_endif
 	.endm
 
 	.macro __ptrauth_keys_install_kernel_nosync tsk, tmp1, tmp2, tmp3
diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 1a85e25d98ba..3e40fc776ea3 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -35,6 +35,25 @@ struct ptrauth_keys_kernel {
 	struct ptrauth_key apia;
 };
 
+#define __ptrauth_key_install_nosync(k, v)			\
+do {								\
+	struct ptrauth_key __pki_v = (v);			\
+	write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);	\
+	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
+} while (0)
+
+static inline void ptrauth_keys_install_user(struct ptrauth_keys_user *keys)
+{
+	if (system_supports_address_auth()) {
+		__ptrauth_key_install_nosync(APIB, keys->apib);
+		__ptrauth_key_install_nosync(APDA, keys->apda);
+		__ptrauth_key_install_nosync(APDB, keys->apdb);
+	}
+
+	if (system_supports_generic_auth())
+		__ptrauth_key_install_nosync(APGA, keys->apga);
+}
+
 static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 {
 	if (system_supports_address_auth()) {
@@ -46,14 +65,9 @@ static inline void ptrauth_keys_init_user(struct ptrauth_keys_user *keys)
 
 	if (system_supports_generic_auth())
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
-}
 
-#define __ptrauth_key_install_nosync(k, v)			\
-do {								\
-	struct ptrauth_key __pki_v = (v);			\
-	write_sysreg_s(__pki_v.lo, SYS_ ## k ## KEYLO_EL1);	\
-	write_sysreg_s(__pki_v.hi, SYS_ ## k ## KEYHI_EL1);	\
-} while (0)
+	ptrauth_keys_install_user(keys);
+}
 
 static __always_inline void ptrauth_keys_init_kernel(struct ptrauth_keys_kernel *keys)
 {
@@ -81,6 +95,9 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 	return ptrauth_clear_pac(ptr);
 }
 
+#define ptrauth_suspend_exit()						\
+	ptrauth_keys_install_user(&current->thread.keys_user)
+
 #define ptrauth_thread_init_user()					\
 	do {								\
 		ptrauth_keys_init_user(&current->thread.keys_user);	\
@@ -92,6 +109,9 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 			       SCTLR_ELx_ENDA | SCTLR_ELx_ENDB);	\
 	} while (0)
 
+#define ptrauth_thread_switch_user(tsk)					\
+	ptrauth_keys_install_user(&(tsk)->thread.keys_user)
+
 #define ptrauth_thread_init_kernel(tsk)					\
 	ptrauth_keys_init_kernel(&(tsk)->thread.keys_kernel)
 #define ptrauth_thread_switch_kernel(tsk)				\
@@ -102,8 +122,10 @@ static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
 #define ptrauth_set_enabled_keys(tsk, keys, enabled)	(-EINVAL)
 #define ptrauth_get_enabled_keys(tsk)	(-EINVAL)
 #define ptrauth_strip_insn_pac(lr)	(lr)
+#define ptrauth_suspend_exit()
 #define ptrauth_thread_init_user()
 #define ptrauth_thread_init_kernel(tsk)
+#define ptrauth_thread_switch_user(tsk)
 #define ptrauth_thread_switch_kernel(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
 
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index dc0907a0240c..2bc21a71abcf 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -147,10 +147,6 @@ int main(void)
 #endif
 #ifdef CONFIG_ARM64_PTR_AUTH
   DEFINE(PTRAUTH_USER_KEY_APIA,		offsetof(struct ptrauth_keys_user, apia));
-  DEFINE(PTRAUTH_USER_KEY_APIB,		offsetof(struct ptrauth_keys_user, apib));
-  DEFINE(PTRAUTH_USER_KEY_APDA,		offsetof(struct ptrauth_keys_user, apda));
-  DEFINE(PTRAUTH_USER_KEY_APDB,		offsetof(struct ptrauth_keys_user, apdb));
-  DEFINE(PTRAUTH_USER_KEY_APGA,		offsetof(struct ptrauth_keys_user, apga));
   DEFINE(PTRAUTH_KERNEL_KEY_APIA,	offsetof(struct ptrauth_keys_kernel, apia));
   BLANK();
 #endif
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 33037121fd0d..933db3f30f7a 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -247,21 +247,26 @@ alternative_else_nop_endif
 	check_mte_async_tcf x19, x22
 	apply_ssbd 1, x22, x23
 
-	ptrauth_keys_install_kernel_nosync tsk, x20, x22, x23
-
 #ifdef CONFIG_ARM64_PTR_AUTH
 alternative_if ARM64_HAS_ADDRESS_AUTH
 	/*
 	 * Enable IA for in-kernel PAC if the task had it disabled. Although
 	 * this could be implemented with an unconditional MRS which would avoid
 	 * a load, this was measured to be slower on Cortex-A75 and Cortex-A76.
+	 *
+	 * Install the kernel IA key only if IA was enabled in the task. If IA
+	 * was disabled on kernel exit then we would have left the kernel IA
+	 * installed so there is no need to install it again.
 	 */
 	ldr	x0, [tsk, THREAD_SCTLR_USER]
-	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	tbz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	__ptrauth_keys_install_kernel_nosync tsk, x20, x22, x23
+	b	2f
+1:
 	mrs	x0, sctlr_el1
 	orr	x0, x0, SCTLR_ELx_ENIA
 	msr	sctlr_el1, x0
-1:
+2:
 	isb
 alternative_else_nop_endif
 #endif
@@ -368,24 +373,24 @@ alternative_else_nop_endif
 3:
 	scs_save tsk, x0
 
-	/*
-	 * No kernel C function calls after this as user keys are set and IA may
-	 * be disabled.
-	 */
-	ptrauth_keys_install_user tsk, x0, x1, x2
-
 #ifdef CONFIG_ARM64_PTR_AUTH
 alternative_if ARM64_HAS_ADDRESS_AUTH
 	/*
-	 * IA was enabled for in-kernel PAC. Disable it now if needed.
-	 * All other per-task SCTLR bits were updated on task switch.
+	 * IA was enabled for in-kernel PAC. Disable it now if needed, or
+	 * alternatively install the user's IA. All other per-task keys and
+	 * SCTLR bits were updated on task switch.
+	 *
+	 * No kernel C function calls after this.
 	 */
 	ldr	x0, [tsk, THREAD_SCTLR_USER]
-	tbnz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	tbz	x0, SCTLR_ELx_ENIA_SHIFT, 1f
+	__ptrauth_keys_install_user tsk, x0, x1, x2
+	b	2f
+1:
 	mrs	x0, sctlr_el1
 	bic	x0, x0, SCTLR_ELx_ENIA
 	msr	sctlr_el1, x0
-1:
+2:
 alternative_else_nop_endif
 #endif
 
diff --git a/arch/arm64/kernel/pointer_auth.c b/arch/arm64/kernel/pointer_auth.c
index f03e5bfe4490..60901ab0a7fe 100644
--- a/arch/arm64/kernel/pointer_auth.c
+++ b/arch/arm64/kernel/pointer_auth.c
@@ -43,6 +43,7 @@ int ptrauth_prctl_reset_keys(struct task_struct *tsk, unsigned long arg)
 		get_random_bytes(&keys->apdb, sizeof(keys->apdb));
 	if (arg & PR_PAC_APGAKEY)
 		get_random_bytes(&keys->apga, sizeof(keys->apga));
+	ptrauth_keys_install_user(keys);
 
 	return 0;
 }
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index a6b0edc67e41..c978876ce07f 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -570,6 +570,7 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	entry_task_switch(next);
 	ssbs_thread_switch(next);
 	erratum_1418040_thread_switch(prev, next);
+	ptrauth_thread_switch_user(next);
 
 	/*
 	 * Complete any pending TLB or cache maintenance on this CPU in case
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index a67b37a7a47e..c5a6614a41f9 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -74,8 +74,9 @@ void notrace __cpu_suspend_exit(void)
 	 */
 	spectre_v4_enable_mitigation(NULL);
 
-	/* Restore additional MTE-specific configuration */
+	/* Restore additional feature-specific configuration */
 	mte_suspend_exit();
+	ptrauth_suspend_exit();
 }
 
 /*
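
The 15.6ns/31.9ns figures in the commit message are per kernel entry/exit. The
patch does not include the benchmark used to obtain them; the stand-alone
user-space sketch below only illustrates one way to estimate average syscall
entry/exit cost. The iteration count and the choice of getpid() are assumptions
of this sketch, not the author's methodology.

/*
 * Illustrative only -- not part of the patch. Times a cheap syscall in a
 * loop to estimate the average kernel entry/exit round-trip cost.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
	const long iters = 10 * 1000 * 1000;
	struct timespec start, end;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (long i = 0; i < iters; i++)
		syscall(SYS_getpid);	/* cost dominated by entry/exit */
	clock_gettime(CLOCK_MONOTONIC, &end);

	double ns = (end.tv_sec - start.tv_sec) * 1e9 +
		    (end.tv_nsec - start.tv_nsec);
	printf("avg syscall round trip: %.1f ns\n", ns / iters);
	return 0;
}

Comparing the reported average before and after applying the series on the
same machine gives a per-call difference; the commit message quotes savings
in the 15-32ns range on an Apple M1 under a hypervisor.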