From patchwork Fri Nov 20 03:29:46 2020
X-Patchwork-Submitter: Peter Collingbourne
X-Patchwork-Id: 41137
Date: Thu, 19 Nov 2020 19:29:46 -0800
Subject: [PATCH v4 1/2] arm64: mte: make the per-task SCTLR_EL1 field usable elsewhere
From: Peter Collingbourne
To: Catalin Marinas, Evgenii Stepanov, Kostya Serebryany,
 Vincenzo Frascino, Dave Martin, Szabolcs Nagy, Florian Weimer
Cc: libc-alpha@sourceware.org, Peter Collingbourne, Andrey Konovalov,
 Kevin Brodsky,
 linux-api@vger.kernel.org, Will Deacon, Linux ARM

In an upcoming change we are going to introduce per-task SCTLR_EL1
bits for PAC. Move the existing per-task SCTLR_EL1 field out of the
MTE-specific code so that we will be able to use it from both the
PAC and MTE code paths and make the task switching code more
efficient.

Signed-off-by: Peter Collingbourne
Link: https://linux-review.googlesource.com/id/Ic65fac78a7926168fa68f9e8da591c9e04ff7278
---
 arch/arm64/Kconfig                 |  4 +++
 arch/arm64/include/asm/processor.h | 10 +++++++-
 arch/arm64/kernel/mte.c            | 40 +++++++-----------------------
 arch/arm64/kernel/process.c        | 29 ++++++++++++++++++++++
 4 files changed, 51 insertions(+), 32 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 1515f6f153a0..68731cf8d822 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -343,6 +343,9 @@ config KASAN_SHADOW_OFFSET
 	default 0xeffffff900000000 if ARM64_VA_BITS_36 && KASAN_SW_TAGS
 	default 0xffffffffffffffff
 
+config ARM64_NEED_SCTLR_USER
+	bool
+
 source "arch/arm64/Kconfig.platforms"
 
 menu "Kernel Features"
@@ -1666,6 +1669,7 @@ config ARM64_MTE
 	default y
 	depends on ARM64_AS_HAS_MTE && ARM64_TAGGED_ADDR_ABI
 	select ARCH_USES_HIGH_VMA_FLAGS
+	select ARM64_NEED_SCTLR_USER
 	help
 	  Memory Tagging (part of the ARMv8.5 Extensions) provides
 	  architectural support for run-time, always-on detection of
diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
index fce8cbecd6bc..4d4f217034f8 100644
--- a/arch/arm64/include/asm/processor.h
+++ b/arch/arm64/include/asm/processor.h
@@ -153,11 +153,15 @@ struct thread_struct {
 	struct ptrauth_keys_kernel	keys_kernel;
 #endif
 #ifdef CONFIG_ARM64_MTE
-	u64			sctlr_tcf0;
 	u64			gcr_user_incl;
 #endif
+#ifdef CONFIG_ARM64_NEED_SCTLR_USER
+	u64			sctlr_user;
+#endif
 };
 
+#define SCTLR_USER_MASK SCTLR_EL1_TCF0_MASK
+
 static inline void arch_thread_struct_whitelist(unsigned long *offset,
 						unsigned long *size)
 {
@@ -249,6 +253,10 @@ extern void release_thread(struct task_struct *);
 
 unsigned long get_wchan(struct task_struct *p);
 
+#ifdef CONFIG_ARM64_NEED_SCTLR_USER
+void set_task_sctlr_el1(u64 sctlr);
+#endif
+
 /* Thread switching */
 extern struct task_struct *cpu_switch_to(struct task_struct *prev,
 					 struct task_struct *next);
diff --git a/arch/arm64/kernel/mte.c b/arch/arm64/kernel/mte.c
index 52a0638ed967..959c55862131 100644
--- a/arch/arm64/kernel/mte.c
+++ b/arch/arm64/kernel/mte.c
@@ -72,26 +72,6 @@ int memcmp_pages(struct page *page1, struct page *page2)
 	return ret;
 }
 
-static void update_sctlr_el1_tcf0(u64 tcf0)
-{
-	/* ISB required for the kernel uaccess routines */
-	sysreg_clear_set(sctlr_el1, SCTLR_EL1_TCF0_MASK, tcf0);
-	isb();
-}
-
-static void set_sctlr_el1_tcf0(u64 tcf0)
-{
-	/*
-	 * mte_thread_switch() checks current->thread.sctlr_tcf0 as an
-	 * optimisation. Disable preemption so that it does not see
-	 * the variable update before the SCTLR_EL1.TCF0 one.
-	 */
-	preempt_disable();
-	current->thread.sctlr_tcf0 = tcf0;
-	update_sctlr_el1_tcf0(tcf0);
-	preempt_enable();
-}
-
 static void update_gcr_el1_excl(u64 incl)
 {
 	u64 excl = ~incl & SYS_GCR_EL1_EXCL_MASK;
@@ -121,7 +101,8 @@ void flush_mte_state(void)
 	write_sysreg_s(0, SYS_TFSRE0_EL1);
 	clear_thread_flag(TIF_MTE_ASYNC_FAULT);
 	/* disable tag checking */
-	set_sctlr_el1_tcf0(SCTLR_EL1_TCF0_NONE);
+	set_task_sctlr_el1((current->thread.sctlr_user & ~SCTLR_EL1_TCF0_MASK) |
+			   SCTLR_EL1_TCF0_NONE);
 	/* reset tag generation mask */
 	set_gcr_el1_excl(0);
 }
@@ -131,9 +112,6 @@ void mte_thread_switch(struct task_struct *next)
 	if (!system_supports_mte())
 		return;
 
-	/* avoid expensive SCTLR_EL1 accesses if no change */
-	if (current->thread.sctlr_tcf0 != next->thread.sctlr_tcf0)
-		update_sctlr_el1_tcf0(next->thread.sctlr_tcf0);
 	update_gcr_el1_excl(next->thread.gcr_user_incl);
 }
 
@@ -147,7 +125,7 @@ void mte_suspend_exit(void)
 
 long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 {
-	u64 tcf0;
+	u64 sctlr = task->thread.sctlr_user & ~SCTLR_EL1_TCF0_MASK;
 	u64 gcr_incl = (arg & PR_MTE_TAG_MASK) >> PR_MTE_TAG_SHIFT;
 
 	if (!system_supports_mte())
@@ -155,23 +133,23 @@ long set_mte_ctrl(struct task_struct *task, unsigned long arg)
 
 	switch (arg & PR_MTE_TCF_MASK) {
 	case PR_MTE_TCF_NONE:
-		tcf0 = SCTLR_EL1_TCF0_NONE;
+		sctlr |= SCTLR_EL1_TCF0_NONE;
 		break;
 	case PR_MTE_TCF_SYNC:
-		tcf0 = SCTLR_EL1_TCF0_SYNC;
+		sctlr |= SCTLR_EL1_TCF0_SYNC;
 		break;
 	case PR_MTE_TCF_ASYNC:
-		tcf0 = SCTLR_EL1_TCF0_ASYNC;
+		sctlr |= SCTLR_EL1_TCF0_ASYNC;
 		break;
 	default:
 		return -EINVAL;
 	}
 
 	if (task != current) {
-		task->thread.sctlr_tcf0 = tcf0;
+		task->thread.sctlr_user = sctlr;
 		task->thread.gcr_user_incl = gcr_incl;
 	} else {
-		set_sctlr_el1_tcf0(tcf0);
+		set_task_sctlr_el1(sctlr);
 		set_gcr_el1_excl(gcr_incl);
 	}
 
@@ -187,7 +165,7 @@ long get_mte_ctrl(struct task_struct *task)
 
 	ret = task->thread.gcr_user_incl << PR_MTE_TAG_SHIFT;
 
-	switch (task->thread.sctlr_tcf0) {
+	switch (task->thread.sctlr_user & SCTLR_EL1_TCF0_MASK) {
 	case SCTLR_EL1_TCF0_NONE:
 		return PR_MTE_TCF_NONE;
 	case SCTLR_EL1_TCF0_SYNC:
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index a47a40ec6ad9..f3d53d2aad81 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -541,6 +541,29 @@ static void erratum_1418040_thread_switch(struct task_struct *prev,
 	write_sysreg(val, cntkctl_el1);
 }
 
+#ifdef CONFIG_ARM64_NEED_SCTLR_USER
+static void update_sctlr_el1(u64 sctlr)
+{
+	sysreg_clear_set(sctlr_el1, SCTLR_USER_MASK, sctlr);
+
+	/* ISB required for the kernel uaccess routines when setting TCF0. */
+	isb();
+}
+
+void set_task_sctlr_el1(u64 sctlr)
+{
+	/*
+	 * __switch_to() checks current->thread.sctlr as an
+	 * optimisation. Disable preemption so that it does not see
+	 * the variable update before the SCTLR_EL1 one.
+	 */
+	preempt_disable();
+	current->thread.sctlr_user = sctlr;
+	update_sctlr_el1(sctlr);
+	preempt_enable();
+}
+#endif /* CONFIG_ARM64_NEED_SCTLR_USER */
+
 /*
  * Thread switching.
  */
@@ -566,6 +589,12 @@ __notrace_funcgraph struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 	dsb(ish);
 
+#ifdef CONFIG_ARM64_NEED_SCTLR_USER
+	/* avoid expensive SCTLR_EL1 accesses if no change */
+	if (prev->thread.sctlr_user != next->thread.sctlr_user)
+		update_sctlr_el1(next->thread.sctlr_user);
+#endif
+
 	/*
 	 * MTE thread switching must happen after the DSB above to ensure that
 	 * any asynchronous tag check faults have been logged in the TFSR*_EL1
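
For libc-alpha readers: the set_mte_ctrl() path reworked above is the one
reached from userspace through the PR_SET_TAGGED_ADDR_CTRL prctl. A minimal
usage sketch (not part of the patch; it assumes prctl headers new enough to
define the PR_MTE_* and PR_TAGGED_ADDR_ENABLE constants, otherwise they come
from a recent <linux/prctl.h>):

	#include <stdio.h>
	#include <sys/prctl.h>

	int main(void)
	{
		/*
		 * Enable the tagged address ABI with synchronous MTE tag
		 * checking. The mask shifted by PR_MTE_TAG_SHIFT selects
		 * which tags IRG may generate (0xfffe excludes tag 0).
		 */
		if (prctl(PR_SET_TAGGED_ADDR_CTRL,
			  PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC |
				  (0xfffeUL << PR_MTE_TAG_SHIFT),
			  0, 0, 0)) {
			perror("prctl(PR_SET_TAGGED_ADDR_CTRL)");
			return 1;
		}
		return 0;
	}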