[v5,15/30] arm64/sve: Signal handling support

Message ID 1509465082-30427-16-git-send-email-Dave.Martin@arm.com
State New, archived

Commit Message

Dave Martin Oct. 31, 2017, 3:51 p.m. UTC
  This patch implements support for saving and restoring the SVE
registers around signals.

A fixed-size header struct sve_context is always included in the
signal frame encoding the thread's vector length at the time of
signal delivery, optionally followed by a variable-layout structure
encoding the SVE registers.

Because of the need to preserve backwards compatibility, the FPSIMD
view of the SVE registers is always dumped as a struct
fpsimd_context in the usual way, in addition to any sve_context.

The SVE vector registers are dumped in full, including bits 127:0
of each register which alias the corresponding FPSIMD vector
registers in the hardware.  To avoid any ambiguity about which
alias to restore during sigreturn, the kernel always restores bits
127:0 of each SVE vector register from the fpsimd_context in the
signal frame (which must be present): userspace needs to take this
into account if it wants to modify the SVE vector register contents
on return from a signal.

FPSR and FPCR, which are used by both FPSIMD and SVE, are not
included in sve_context because they are always present in
fpsimd_context anyway.
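
For illustration only (this sketch is not part of the patch), a minimal
userspace handler that locates these records by walking the __reserved
area of the mcontext; it assumes the UAPI definitions added earlier in
this series (struct _aarch64_ctx, fpsimd_context, sve_context,
FPSIMD_MAGIC, SVE_MAGIC) and ignores the EXTRA_MAGIC indirection used
for oversized frames:

#include <signal.h>
#include <ucontext.h>
#include <asm/sigcontext.h>	/* _aarch64_ctx, fpsimd_context, sve_context */

/* Install with sigaction() and SA_SIGINFO. */
static void handler(int sig, siginfo_t *info, void *uc_)
{
	ucontext_t *uc = uc_;
	struct _aarch64_ctx *head =
		(struct _aarch64_ctx *)uc->uc_mcontext.__reserved;
	struct fpsimd_context *fpsimd = NULL;
	struct sve_context *sve = NULL;

	/* The record list is terminated by a header with magic == 0. */
	while (head->magic) {
		if (head->magic == FPSIMD_MAGIC)
			fpsimd = (struct fpsimd_context *)head;
		else if (head->magic == SVE_MAGIC)
			sve = (struct sve_context *)head;
		head = (struct _aarch64_ctx *)((char *)head + head->size);
	}

	/*
	 * A handler that rewrites an SVE Z register must also rewrite the
	 * corresponding fpsimd->vregs[] entry: sigreturn takes bits 127:0
	 * from fpsimd_context, as described above.
	 */
	(void)sig; (void)info; (void)fpsimd; (void)sve;
}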

For signal delivery, a new helper
fpsimd_signal_preserve_current_state() is added to update _both_
the FPSIMD and SVE views in the task struct, to make it easier to
populate this information into the signal frame.  Because of the
redundancy between the two views of the state, only one is updated
otherwise.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>
Cc: Alex Bennée <alex.bennee@linaro.org>
Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>

---

**Dropped** Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
(Non-trivial changes.)

Changes since v4
----------------

Requested by Will Deacon:

 * Fix inconsistent return semantics in restore_sve_fpsimd_context().

   Currently a nonzero return value from __copy_from_user() is passed
   back as-is to the caller of restore_sve_fpsimd_context(), rather than
   translating it to an error code as is done elsewhere.

   Callers of restore_sve_fpsimd_context() only care whether the return
   value is 0 or not, so this isn't a big deal, but it is more
   consistent to return proper error codes on failure in case we grow a
   use for them later.

   This patch returns -EFAULT in the __copy_from_user() error cases
   that weren't explicitly handled.

Miscellaneous:

 * Change inconsistent copy_to_user() calls to __copy_to_user() in
   preserve_sve_context().

   There are already __put_user_error() calls here.

   The whole extended signal frame is already checked for
   access_ok(VERIFY_WRITE) in get_sigframe().
---
 arch/arm64/include/asm/fpsimd.h |   1 +
 arch/arm64/kernel/fpsimd.c      |  55 ++++++++++---
 arch/arm64/kernel/signal.c      | 167 ++++++++++++++++++++++++++++++++++++++--
 arch/arm64/kernel/signal32.c    |   2 +-
 4 files changed, 206 insertions(+), 19 deletions(-)
  

Comments

Catalin Marinas Nov. 1, 2017, 2:33 p.m. UTC | #1
On Tue, Oct 31, 2017 at 03:51:07PM +0000, Dave P Martin wrote:
> This patch implements support for saving and restoring the SVE
> registers around signals.
> 
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery, optionally followed by a variable-layout structure
> encoding the SVE registers.
> 
> Because of the need to preserve backwards compatibility, the FPSIMD
> view of the SVE registers is always dumped as a struct
> fpsimd_context in the usual way, in addition to any sve_context.
> 
> The SVE vector registers are dumped in full, including bits 127:0
> of each register which alias the corresponding FPSIMD vector
> registers in the hardware.  To avoid any ambiguity about which
> alias to restore during sigreturn, the kernel always restores bits
> 127:0 of each SVE vector register from the fpsimd_context in the
> signal frame (which must be present): userspace needs to take this
> into account if it wants to modify the SVE vector register contents
> on return from a signal.
> 
> FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> included in sve_context because they are always present in
> fpsimd_context anyway.
> 
> For signal delivery, a new helper
> fpsimd_signal_preserve_current_state() is added to update _both_
> the FPSIMD and SVE views in the task struct, to make it easier to
> populate this information into the signal frame.  Because of the
> redundancy between the two views of the state, only one is updated
> otherwise.
> 
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
  
Alex Bennée Nov. 7, 2017, 1:22 p.m. UTC | #2
Dave Martin <Dave.Martin@arm.com> writes:

> This patch implements support for saving and restoring the SVE
> registers around signals.
>
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery, optionally followed by a variable-layout structure
> encoding the SVE registers.
>
> Because of the need to preserve backwards compatibility, the FPSIMD
> view of the SVE registers is always dumped as a struct
> fpsimd_context in the usual way, in addition to any sve_context.
>
> The SVE vector registers are dumped in full, including bits 127:0
> of each register which alias the corresponding FPSIMD vector
> registers in the hardware.  To avoid any ambiguity about which
> alias to restore during sigreturn, the kernel always restores bits
> 127:0 of each SVE vector register from the fpsimd_context in the
> signal frame (which must be present): userspace needs to take this
> into account if it wants to modify the SVE vector register contents
> on return from a signal.
>
> FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> included in sve_context because they are always present in
> fpsimd_context anyway.
>
> For signal delivery, a new helper
> fpsimd_signal_preserve_current_state() is added to update _both_
> the FPSIMD and SVE views in the task struct, to make it easier to
> populate this information into the signal frame.  Because of the
> redundancy between the two views of the state, only one is updated
> otherwise.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

After adding SVE support for RISU I was able to confirm consistent
signal delivery with a decent context over multiple runs; after
hacking echo "32" > /proc/sys/abi/sve_default_vector_length to change
the default vector length, the tests would fail (as expected).

Tested-by: Alex Bennée <alex.bennee@linaro.org>

>
> ---
>
> **Dropped** Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> (Non-trivial changes.)
>
> Changes since v4
> ----------------
>
> Requested by Will Deacon:
>
>  * Fix inconsistent return semantics in restore_sve_fpsimd_context().
>
>    Currently a nonzero return value from __copy_from_user() is passed
>    back as-is to the caller of restore_sve_fpsimd_context(), rather than
>    translating it to an error code as is done elsewhere.
>
>    Callers of restore_sve_fpsimd_context() only care whether the return
>    value is 0 or not, so this isn't a big deal, but it is more
>    consistent to return proper error codes on failure in case we grow a
>    use for them later.
>
>    This patch returns -EFAULT in the __copy_from_user() error cases
>    that weren't explicitly handled.
>
> Miscellaneous:
>
>  * Change inconsistent copy_to_user() calls to __copy_to_user() in
>    preserve_sve_context().
>
>    There are already __put_user_error() calls here.
>
>    The whole extended signal frame is already checked for
>    access_ok(VERIFY_WRITE) in get_sigframe().
> ---
>  arch/arm64/include/asm/fpsimd.h |   1 +
>  arch/arm64/kernel/fpsimd.c      |  55 ++++++++++---
>  arch/arm64/kernel/signal.c      | 167 ++++++++++++++++++++++++++++++++++++++--
>  arch/arm64/kernel/signal32.c    |   2 +-
>  4 files changed, 206 insertions(+), 19 deletions(-)
>
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index 5655fe1..9bbd74c 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -63,6 +63,7 @@ extern void fpsimd_load_state(struct fpsimd_state *state);
>  extern void fpsimd_thread_switch(struct task_struct *next);
>  extern void fpsimd_flush_thread(void);
>
> +extern void fpsimd_signal_preserve_current_state(void);
>  extern void fpsimd_preserve_current_state(void);
>  extern void fpsimd_restore_current_state(void);
>  extern void fpsimd_update_current_state(struct fpsimd_state *state);
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index c91be8e..e0b5ef5 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -299,6 +299,32 @@ static void fpsimd_to_sve(struct task_struct *task)
>  		       sizeof(fst->vregs[i]));
>  }
>
> +/*
> + * Transfer the SVE state in task->thread.sve_state to
> + * task->thread.fpsimd_state.
> + *
> + * Task can be a non-runnable task, or current.  In the latter case,
> + * softirqs (and preemption) must be disabled.
> + * task->thread.sve_state must point to at least sve_state_size(task)
> + * bytes of allocated kernel memory.
> + * task->thread.sve_state must be up to date before calling this function.
> + */
> +static void sve_to_fpsimd(struct task_struct *task)
> +{
> +	unsigned int vq;
> +	void const *sst = task->thread.sve_state;
> +	struct fpsimd_state *fst = &task->thread.fpsimd_state;
> +	unsigned int i;
> +
> +	if (!system_supports_sve())
> +		return;
> +
> +	vq = sve_vq_from_vl(task->thread.sve_vl);
> +	for (i = 0; i < 32; ++i)
> +		memcpy(&fst->vregs[i], ZREG(sst, vq, i),
> +		       sizeof(fst->vregs[i]));
> +}
> +
>  #ifdef CONFIG_ARM64_SVE
>
>  /*
> @@ -500,9 +526,6 @@ void fpsimd_flush_thread(void)
>  /*
>   * Save the userland FPSIMD state of 'current' to memory, but only if the state
>   * currently held in the registers does in fact belong to 'current'
> - *
> - * Currently, SVE tasks can't exist, so just WARN in that case.
> - * Subsequent patches will add full SVE support here.
>   */
>  void fpsimd_preserve_current_state(void)
>  {
> @@ -510,16 +533,23 @@ void fpsimd_preserve_current_state(void)
>  		return;
>
>  	local_bh_disable();
> -
> -	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> -		fpsimd_save_state(&current->thread.fpsimd_state);
> -
> -	WARN_ON_ONCE(test_and_clear_thread_flag(TIF_SVE));
> -
> +	task_fpsimd_save();
>  	local_bh_enable();
>  }
>
>  /*
> + * Like fpsimd_preserve_current_state(), but ensure that
> + * current->thread.fpsimd_state is updated so that it can be copied to
> + * the signal frame.
> + */
> +void fpsimd_signal_preserve_current_state(void)
> +{
> +	fpsimd_preserve_current_state();
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE))
> +		sve_to_fpsimd(current);
> +}
> +
> +/*
>   * Load the userland FPSIMD state of 'current' from memory, but only if the
>   * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
>   * state of 'current'
> @@ -554,7 +584,12 @@ void fpsimd_update_current_state(struct fpsimd_state *state)
>
>  	local_bh_disable();
>
> -	fpsimd_load_state(state);
> +	if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
> +		current->thread.fpsimd_state = *state;
> +		fpsimd_to_sve(current);
> +	}
> +	task_fpsimd_load();
> +
>  	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
>  		struct fpsimd_state *st = &current->thread.fpsimd_state;
>
> diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
> index 4716729..64abf88 100644
> --- a/arch/arm64/kernel/signal.c
> +++ b/arch/arm64/kernel/signal.c
> @@ -63,6 +63,7 @@ struct rt_sigframe_user_layout {
>
>  	unsigned long fpsimd_offset;
>  	unsigned long esr_offset;
> +	unsigned long sve_offset;
>  	unsigned long extra_offset;
>  	unsigned long end_offset;
>  };
> @@ -179,9 +180,6 @@ static int preserve_fpsimd_context(struct fpsimd_context __user *ctx)
>  	struct fpsimd_state *fpsimd = &current->thread.fpsimd_state;
>  	int err;
>
> -	/* dump the hardware registers to the fpsimd_state structure */
> -	fpsimd_preserve_current_state();
> -
>  	/* copy the FP and status/control registers */
>  	err = __copy_to_user(ctx->vregs, fpsimd->vregs, sizeof(fpsimd->vregs));
>  	__put_user_error(fpsimd->fpsr, &ctx->fpsr, err);
> @@ -214,6 +212,8 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	__get_user_error(fpsimd.fpsr, &ctx->fpsr, err);
>  	__get_user_error(fpsimd.fpcr, &ctx->fpcr, err);
>
> +	clear_thread_flag(TIF_SVE);
> +
>  	/* load the hardware registers from the fpsimd_state structure */
>  	if (!err)
>  		fpsimd_update_current_state(&fpsimd);
> @@ -221,10 +221,118 @@ static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
>  	return err ? -EFAULT : 0;
>  }
>
> +
>  struct user_ctxs {
>  	struct fpsimd_context __user *fpsimd;
> +	struct sve_context __user *sve;
>  };
>
> +#ifdef CONFIG_ARM64_SVE
> +
> +static int preserve_sve_context(struct sve_context __user *ctx)
> +{
> +	int err = 0;
> +	u16 reserved[ARRAY_SIZE(ctx->__reserved)];
> +	unsigned int vl = current->thread.sve_vl;
> +	unsigned int vq = 0;
> +
> +	if (test_thread_flag(TIF_SVE))
> +		vq = sve_vq_from_vl(vl);
> +
> +	memset(reserved, 0, sizeof(reserved));
> +
> +	__put_user_error(SVE_MAGIC, &ctx->head.magic, err);
> +	__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
> +			 &ctx->head.size, err);
> +	__put_user_error(vl, &ctx->vl, err);
> +	BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
> +	err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
> +
> +	if (vq) {
> +		/*
> +		 * This assumes that the SVE state has already been saved to
> +		 * the task struct by calling preserve_fpsimd_context().
> +		 */
> +		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
> +				      current->thread.sve_state,
> +				      SVE_SIG_REGS_SIZE(vq));
> +	}
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +static int restore_sve_fpsimd_context(struct user_ctxs *user)
> +{
> +	int err;
> +	unsigned int vq;
> +	struct fpsimd_state fpsimd;
> +	struct sve_context sve;
> +
> +	if (__copy_from_user(&sve, user->sve, sizeof(sve)))
> +		return -EFAULT;
> +
> +	if (sve.vl != current->thread.sve_vl)
> +		return -EINVAL;
> +
> +	if (sve.head.size <= sizeof(*user->sve)) {
> +		clear_thread_flag(TIF_SVE);
> +		goto fpsimd_only;
> +	}
> +
> +	vq = sve_vq_from_vl(sve.vl);
> +
> +	if (sve.head.size < SVE_SIG_CONTEXT_SIZE(vq))
> +		return -EINVAL;
> +
> +	/*
> +	 * Careful: we are about __copy_from_user() directly into
> +	 * thread.sve_state with preemption enabled, so protection is
> +	 * needed to prevent a racing context switch from writing stale
> +	 * registers back over the new data.
> +	 */
> +
> +	fpsimd_flush_task_state(current);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */
> +
> +	set_thread_flag(TIF_FOREIGN_FPSTATE);
> +	barrier();
> +	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
> +
> +	sve_alloc(current);
> +	err = __copy_from_user(current->thread.sve_state,
> +			       (char __user const *)user->sve +
> +					SVE_SIG_REGS_OFFSET,
> +			       SVE_SIG_REGS_SIZE(vq));
> +	if (err)
> +		return -EFAULT;
> +
> +	set_thread_flag(TIF_SVE);
> +
> +fpsimd_only:
> +	/* copy the FP and status/control registers */
> +	/* restore_sigframe() already checked that user->fpsimd != NULL. */
> +	err = __copy_from_user(fpsimd.vregs, user->fpsimd->vregs,
> +			       sizeof(fpsimd.vregs));
> +	__get_user_error(fpsimd.fpsr, &user->fpsimd->fpsr, err);
> +	__get_user_error(fpsimd.fpcr, &user->fpsimd->fpcr, err);
> +
> +	/* load the hardware registers from the fpsimd_state structure */
> +	if (!err)
> +		fpsimd_update_current_state(&fpsimd);
> +
> +	return err ? -EFAULT : 0;
> +}
> +
> +#else /* ! CONFIG_ARM64_SVE */
> +
> +/* Turn any non-optimised out attempts to use these into a link error: */
> +extern int preserve_sve_context(void __user *ctx);
> +extern int restore_sve_fpsimd_context(struct user_ctxs *user);
> +
> +#endif /* ! CONFIG_ARM64_SVE */
> +
> +
>  static int parse_user_sigframe(struct user_ctxs *user,
>  			       struct rt_sigframe __user *sf)
>  {
> @@ -237,6 +345,7 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	char const __user *const sfp = (char const __user *)sf;
>
>  	user->fpsimd = NULL;
> +	user->sve = NULL;
>
>  	if (!IS_ALIGNED((unsigned long)base, 16))
>  		goto invalid;
> @@ -287,6 +396,19 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  			/* ignore */
>  			break;
>
> +		case SVE_MAGIC:
> +			if (!system_supports_sve())
> +				goto invalid;
> +
> +			if (user->sve)
> +				goto invalid;
> +
> +			if (size < sizeof(*user->sve))
> +				goto invalid;
> +
> +			user->sve = (struct sve_context __user *)head;
> +			break;
> +
>  		case EXTRA_MAGIC:
>  			if (have_extra_context)
>  				goto invalid;
> @@ -363,9 +485,6 @@ static int parse_user_sigframe(struct user_ctxs *user,
>  	}
>
>  done:
> -	if (!user->fpsimd)
> -		goto invalid;
> -
>  	return 0;
>
>  invalid:
> @@ -399,8 +518,19 @@ static int restore_sigframe(struct pt_regs *regs,
>  	if (err == 0)
>  		err = parse_user_sigframe(&user, sf);
>
> -	if (err == 0)
> -		err = restore_fpsimd_context(user.fpsimd);
> +	if (err == 0) {
> +		if (!user.fpsimd)
> +			return -EINVAL;
> +
> +		if (user.sve) {
> +			if (!system_supports_sve())
> +				return -EINVAL;
> +
> +			err = restore_sve_fpsimd_context(&user);
> +		} else {
> +			err = restore_fpsimd_context(user.fpsimd);
> +		}
> +	}
>
>  	return err;
>  }
> @@ -459,6 +589,18 @@ static int setup_sigframe_layout(struct rt_sigframe_user_layout *user)
>  			return err;
>  	}
>
> +	if (system_supports_sve()) {
> +		unsigned int vq = 0;
> +
> +		if (test_thread_flag(TIF_SVE))
> +			vq = sve_vq_from_vl(current->thread.sve_vl);
> +
> +		err = sigframe_alloc(user, &user->sve_offset,
> +				     SVE_SIG_CONTEXT_SIZE(vq));
> +		if (err)
> +			return err;
> +	}
> +
>  	return sigframe_alloc_end(user);
>  }
>
> @@ -500,6 +642,13 @@ static int setup_sigframe(struct rt_sigframe_user_layout *user,
>  		__put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
>  	}
>
> +	/* Scalable Vector Extension state, if present */
> +	if (system_supports_sve() && err == 0 && user->sve_offset) {
> +		struct sve_context __user *sve_ctx =
> +			apply_user_offset(user, user->sve_offset);
> +		err |= preserve_sve_context(sve_ctx);
> +	}
> +
>  	if (err == 0 && user->extra_offset) {
>  		char __user *sfp = (char __user *)user->sigframe;
>  		char __user *userp =
> @@ -599,6 +748,8 @@ static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
>  	struct rt_sigframe __user *frame;
>  	int err = 0;
>
> +	fpsimd_signal_preserve_current_state();
> +
>  	if (get_sigframe(&user, ksig, regs))
>  		return 1;
>
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index e09bf5d..22711ee 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -239,7 +239,7 @@ static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
>  	 * Note that this also saves V16-31, which aren't visible
>  	 * in AArch32.
>  	 */
> -	fpsimd_preserve_current_state();
> +	fpsimd_signal_preserve_current_state();
>
>  	/* Place structure header on the stack */
>  	__put_user_error(magic, &frame->magic, err);


--
Alex Bennée
  
Dave Martin Nov. 8, 2017, 4:11 p.m. UTC | #3
On Tue, Nov 07, 2017 at 01:22:33PM +0000, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
> > This patch implements support for saving and restoring the SVE
> > registers around signals.
> >
> > A fixed-size header struct sve_context is always included in the
> > signal frame encoding the thread's vector length at the time of
> > signal delivery, optionally followed by a variable-layout structure
> > encoding the SVE registers.
> >
> > Because of the need to preserve backwards compatibility, the FPSIMD
> > view of the SVE registers is always dumped as a struct
> > fpsimd_context in the usual way, in addition to any sve_context.
> >
> > The SVE vector registers are dumped in full, including bits 127:0
> > of each register which alias the corresponding FPSIMD vector
> > registers in the hardware.  To avoid any ambiguity about which
> > alias to restore during sigreturn, the kernel always restores bits
> > 127:0 of each SVE vector register from the fpsimd_context in the
> > signal frame (which must be present): userspace needs to take this
> > into account if it wants to modify the SVE vector register contents
> > on return from a signal.
> >
> > FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> > included in sve_context because they are always present in
> > fpsimd_context anyway.
> >
> > For signal delivery, a new helper
> > fpsimd_signal_preserve_current_state() is added to update _both_
> > the FPSIMD and SVE views in the task struct, to make it easier to
> > populate this information into the signal frame.  Because of the
> > redundancy between the two views of the state, only one is updated
> > otherwise.
> >
> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> > Cc: Alex Bennée <alex.bennee@linaro.org>
> > Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> 
> After adding SVE support for RISU I was able to confirm consistent
> signal delivery with a decent context over multiple runs; after
> hacking echo "32" > /proc/sys/abi/sve_default_vector_length to change
> the default vector length, the tests would fail (as expected).
> 
> Tested-by: Alex Bennée <alex.bennee@linaro.org>

Good to have confirmation of this, thanks

[...]

Cheers
---Dave
  
Kees Cook Dec. 6, 2017, 7:56 p.m. UTC | #4
On Tue, Oct 31, 2017 at 8:51 AM, Dave Martin <Dave.Martin@arm.com> wrote:
> This patch implements support for saving and restoring the SVE
> registers around signals.
>
> A fixed-size header struct sve_context is always included in the
> signal frame encoding the thread's vector length at the time of
> signal delivery, optionally followed by a variable-layout structure
> encoding the SVE registers.
>
> Because of the need to preserve backwards compatibility, the FPSIMD
> view of the SVE registers is always dumped as a struct
> fpsimd_context in the usual way, in addition to any sve_context.
>
> The SVE vector registers are dumped in full, including bits 127:0
> of each register which alias the corresponding FPSIMD vector
> registers in the hardware.  To avoid any ambiguity about which
> alias to restore during sigreturn, the kernel always restores bits
> 127:0 of each SVE vector register from the fpsimd_context in the
> signal frame (which must be present): userspace needs to take this
> into account if it wants to modify the SVE vector register contents
> on return from a signal.
>
> FPSR and FPCR, which are used by both FPSIMD and SVE, are not
> included in sve_context because they are always present in
> fpsimd_context anyway.
>
> For signal delivery, a new helper
> fpsimd_signal_preserve_current_state() is added to update _both_
> the FPSIMD and SVE views in the task struct, to make it easier to
> populate this information into the signal frame.  Because of the
> redundancy between the two views of the state, only one is updated
> otherwise.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Alex Bennée <alex.bennee@linaro.org>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
>
> ---
>
> **Dropped** Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> (Non-trivial changes.)
>
> Changes since v4
> ----------------
>
> Requested by Will Deacon:
>
>  * Fix inconsistent return semantics in restore_sve_fpsimd_context().
>
>    Currently a nonzero return value from __copy_from_user() is passed
>    back as-is to the caller of restore_sve_fpsimd_context(), rather than
>    translating it to an error code as is done elsewhere.
>
>    Callers of restore_sve_fpsimd_context() only care whether the return
>    value is 0 or not, so this isn't a big deal, but it is more
>    consistent to return proper error codes on failure in case we grow a
>    use for them later.
>
>    This patch returns -EFAULT in the __copy_from_user() error cases
>    that weren't explicitly handled.
>
> Miscellaneous:
>
>  * Change inconsistent copy_to_user() calls to __copy_to_user() in
>    preserve_sve_context().
>
>    There are already __put_user_error() calls here.
>
>    The whole extended signal frame is already checked for
>    access_ok(VERIFY_WRITE) in get_sigframe().

Verifying all these __copy_to/from_user() calls is rather non-trivial.
For example, I had to understand that the access_ok() check actually
spans memory that both user->sigframe and user->next_frame point into.
And it isn't clear to me that all users of apply_user_offset() are
within this range too, along with other manually calculated offsets in
setup_sigframe().

And it's not clear if parse_user_sigframe() is safe either. Are
user->fpsimd and user->sve checked somewhere? It seems like it's
safely contained within sf->uc.uc_mcontext.__reserved, but it's hard to
read, though I do see access_ok() checks against __reserved at the end
of the while loop.

Can we just drop all the __... calls for the fully checked
copy_to/from_user() instead?

-Kees
  
Will Deacon Dec. 7, 2017, 10:49 a.m. UTC | #5
Hi Kees,

On Wed, Dec 06, 2017 at 11:56:50AM -0800, Kees Cook wrote:
> On Tue, Oct 31, 2017 at 8:51 AM, Dave Martin <Dave.Martin@arm.com> wrote:
> > Miscellaneous:
> >
> >  * Change inconsistent copy_to_user() calls to __copy_to_user() in
> >    preserve_sve_context().
> >
> >    There are already __put_user_error() calls here.
> >
> >    The whole extended signal frame is already checked for
> >    access_ok(VERIFY_WRITE) in get_sigframe().
> 
> Verifying all these __copy_to/from_user() calls is rather non-trivial.
> For example, I had to understand that the access_ok() check actually
> spans memory that both user->sigframe and user->next_frame point into.

I don't think that's particularly difficult -- you just have to read the
four lines preceding the access_ok.
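
For reference, the sequence in question is roughly the following
(paraphrased from arch/arm64/kernel/signal.c as of this series, not
quoted verbatim):

/*
 * get_sigframe(), roughly: both user->sigframe and user->next_frame
 * are carved out of the same [sp, sp_top) window, and that whole
 * window is access_ok()-checked, so the later __copy_to_user() and
 * __put_user_error() calls stay within checked memory.
 */
static int get_sigframe(struct rt_sigframe_user_layout *user,
			struct ksignal *ksig, struct pt_regs *regs)
{
	unsigned long sp, sp_top;
	int err;

	init_user_layout(user);
	err = setup_sigframe_layout(user);
	if (err)
		return err;

	sp = sp_top = sigsp(regs->sp, ksig);

	sp = round_down(sp - sizeof(struct frame_record), 16);
	user->next_frame = (struct frame_record __user *)sp;

	sp = round_down(sp, 16) - sigframe_size(user);
	user->sigframe = (struct rt_sigframe __user *)sp;

	if (!access_ok(VERIFY_WRITE, user->sigframe, sp_top - sp))
		return -EFAULT;

	return 0;
}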

> And it isn't clear to me that all users of apply_user_offset() are
> within this range too, along with other manually calculated offsets in
> setup_sigframe().

The offsets passed into apply_user_offset are calculated by
setup_sigframe_layout as the stack is allocated, so they're correct by
construction. We could add a size check in apply_user_offset if you like?

> And it's not clear if parse_user_sigframe() is safe either. Are
> user->fpsimd and user->sve checked somewhere? It seems like it's
> safely contained by in sf->uc.uc_mcontext.__reserved, but it's hard to
> read, though I do see access_ok() checks against __reserved at the end
> of the while loop.

This one is certainly more difficult to follow, mainly because it's spread
about a bit and we have to check the extra context separately. However, the
main part of the frame is checked in sys_rt_sigreturn before calling
restore_sigframe, and the extra context is checked in parse_user_sigframe
if we find it.

Dave, any thoughts on making this easier to understand?

Will
  
Dave Martin Dec. 7, 2017, 12:03 p.m. UTC | #6
On Thu, Dec 07, 2017 at 10:49:48AM +0000, Will Deacon wrote:
> Hi Kees,
> 
> On Wed, Dec 06, 2017 at 11:56:50AM -0800, Kees Cook wrote:
> > On Tue, Oct 31, 2017 at 8:51 AM, Dave Martin <Dave.Martin@arm.com> wrote:
> > > Miscellaneous:
> > >
> > >  * Change inconsistent copy_to_user() calls to __copy_to_user() in
> > >    preserve_sve_context().
> > >
> > >    There are already __put_user_error() calls here.
> > >
> > >    The whole extended signal frame is already checked for
> > >    access_ok(VERIFY_WRITE) in get_sigframe().
> > 
> > Verifying all these __copy_to/from_user() calls is rather non-trivial.
> > For example, I had to understand that the access_ok() check actually
> > spans memory that both user->sigframe and user->next_frame point into.
> 
> I don't think that's particularly difficult -- you just have to read the
> four lines preceding the access_ok.
> 
> > And it isn't clear to me that all users of apply_user_offset() are
> > within this range too, along with other manually calculated offsets in
> > setup_sigframe().
> 
> The offsets passed into apply_user_offset are calculated by
> setup_sigframe_layout as the stack is allocated, so they're correct by
> construction. We could add a size check in apply_user_offset if you like?

Adding a BUG_ON(out of bounds) in apply_user_offset doesn't seem a
terrible idea.
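
Something along these lines, say -- purely hypothetical, not part of
the posted series, and assuming the total frame size computed by
setup_sigframe_layout() is recorded in user->size (the real layout
struct may track this differently):

static void __user *apply_user_offset(
	struct rt_sigframe_user_layout const *user, unsigned long offset)
{
	char __user *base = (char __user *)user->sigframe;

	/*
	 * Offsets are handed out by sigframe_alloc() during layout;
	 * anything outside the allocated frame is a kernel bug.
	 */
	BUG_ON(!offset || offset >= user->size);

	return base + offset;
}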

> > And it's not clear if parse_user_sigframe() is safe either. Are
> > user->fpsimd and user->sve checked somewhere? It seems like it's
> > safely contained by in sf->uc.uc_mcontext.__reserved, but it's hard to
> > read, though I do see access_ok() checks against __reserved at the end
> > of the while loop.
> 
> This one is certainly more difficult to follow, mainly because it's spread
> about a bit and we have to check the extra context separately. However, the
> main part of the frame is checked in sys_rt_sigreturn before calling
> restore_sigframe, and the extra context is checked in parse_user_sigframe
> if we find it.
> 
> Dave, any thoughts on making this easier to understand?

I'm open to ideas myself -- I did screw this up previously with the
missing access_ok() check on the extra_context data area -- though
that wasn't catastrophic since that area is enforced to be contiguous
with the base frame which was always access_ok() checked.


During development, many essential invariants were "documented" using
BUG_ON()s.  Unfortunately we don't really distinguish between marking
invariants that should be derivable from each other and from the code,
and marking things that the developer merely hopes are true (or would
rather not think about at all).  Comprehensive annotation also
burdens the code with a lot of clutter...


It would be good if there were type annotations for pointers that have
passed through the access_ok() check that could be analysed by tools,
something like:

	void __user __user_write_ok(base_offset, size) *p;

Such type annotations could be derived via an access_ok() check, and
taken into account by checkers examining calls to __put_user() etc.:
ultimately __put_user() might be forbidden on types lacking an
annotation with sufficient bounds.

The devil is in the detail though, and to be most useful the
annotations would need to be bindable to runtime values, not
just constants.  This does not preclude static analysis, but it's
far from trivial.  Has anything like this been considered in the past?
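
A purely hypothetical runtime flavour of the same idea -- no such API
exists today -- would bind the checked bounds to a value rather than
to the pointer's type:

/* Hypothetical sketch only: binds access_ok() bounds to a value. */
struct checked_uptr {
	void __user *base;
	unsigned long size;	/* bytes covered by the access_ok() check */
};

static inline int user_write_window(struct checked_uptr *w,
				    void __user *p, unsigned long size)
{
	if (!access_ok(VERIFY_WRITE, p, size))
		return -EFAULT;

	w->base = p;
	w->size = size;
	return 0;
}

static inline bool in_user_window(struct checked_uptr const *w,
				  void const __user *p, unsigned long size)
{
	unsigned long base = (unsigned long)w->base;
	unsigned long off = (unsigned long)p - base;

	return (unsigned long)p >= base &&
	       off <= w->size && size <= w->size - off;
}

A checker (or a debug wrapper around __put_user()) could then refuse
accesses through pointers that never passed through such a window.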


For the sigframe code, here's my rationale -- I'm happy to base
comments on it, but the more rationale is documented in comments
the higher the risk that the code will drift away from it over
time without anybody noticing...

Would you pick anything out of this as particularly critical?


The basic flow is:

Signal delivery
---------------
 1. The location and size of each signal frame block is calculated
    in terms of offsets from the base of the frame.

    (Done by setup_sigframe_layout().)

 2. The base (user) address of the frame is calculated by subtracting
    the overall size of the computed frame from the initial user sp.

 3. access_ok() is done on the resulting address range.

    (Steps 2-3 are done by get_sigframe().)

 4. The signal frame is poked using addresses derived in two ways:

  a) Direct derivation from the user sp, within the bounds of the
     access_ok() check in get_sigframe().

  b) Addition to the sigframe base address, of offsets computed in (1);
     these fall with the access_ok() range by construction due to the
     way the access_ok() range is computed from those offsets in the
     first place.

     (b) is done by apply_user_offset(), with no further checks.


Signal return
-------------
 1. The base signal frame is access_ok()'d.

    (sys_rt_sigreturn())

 2. Contents of the base frame are read out and processed using the
    same base address and within the range that was access_ok()'d.

    (restore_sigframe())

 3. parse_user_sigframe() walks over its contents, picking out __user
    pointers to the records found.  The parse is bounded to within the
    access_ok()'d size by the limit variable in parse_user_sigframe().

    Only __user * pointers whose referent's full bounds fit within the
    limits are picked out.
   
 4. limit is updated only for extra_context, and only one extra_context
    is allowed (policed by the have_extra_context bool).

    The extension area that the extra_context block points to is
    required to be contiguous with the end of the (already access_ok()'d)
    sigframe.  The additional space is access_ok()'d and limit updated
    accordingly.

    Parsing then proceeds, still bounded by limit.

 5. The signal frame contents are read out and processed using pointers
    derived by parse_user_sigframe().


Cheers
---Dave
  
Kees Cook Dec. 7, 2017, 6:50 p.m. UTC | #7
On Thu, Dec 7, 2017 at 2:49 AM, Will Deacon <will.deacon@arm.com> wrote:
> Hi Kees,
>
> On Wed, Dec 06, 2017 at 11:56:50AM -0800, Kees Cook wrote:
>> On Tue, Oct 31, 2017 at 8:51 AM, Dave Martin <Dave.Martin@arm.com> wrote:
>> > Miscellaneous:
>> >
>> >  * Change inconsistent copy_to_user() calls to __copy_to_user() in
>> >    preserve_sve_context().
>> >
>> >    There are already __put_user_error() calls here.
>> >
>> >    The whole extended signal frame is already checked for
>> >    access_ok(VERIFY_WRITE) in get_sigframe().
>>
>> Verifying all these __copy_to/from_user() calls is rather non-trivial.
>> For example, I had to understand that the access_ok() check actually
>> spans memory that both user->sigframe and user->next_frame point into.
>
> I don't think that's particularly difficult -- you just have to read the
> four lines preceding the access_ok.

Sure, but for someone working backward from finding the __copy_*, it's
less obvious. :)

>> And it isn't clear to me that all users of apply_user_offset() are
>> within this range too, along with other manually calculated offsets in
>> setup_sigframe().
>
> The offsets passed into apply_user_offset are calculated by
> setup_sigframe_layout as the stack is allocated, so they're correct by
> construction. We could add a size check in apply_user_offset if you like?
>
>> And it's not clear if parse_user_sigframe() is safe either. Are
>> user->fpsimd and user->sve checked somewhere? It seems like it's
>> safely contained by in sf->uc.uc_mcontext.__reserved, but it's hard to
>> read, though I do see access_ok() checks against __reserved at the end
>> of the while loop.
>
> This one is certainly more difficult to follow, mainly because it's spread
> about a bit and we have to check the extra context separately. However, the
> main part of the frame is checked in sys_rt_sigreturn before calling
> restore_sigframe, and the extra context is checked in parse_user_sigframe
> if we find it.
>
> Dave, any thoughts on making this easier to understand?

My question is mainly: why not just use copy_*() everywhere instead?
Having these things so spread out makes it fragile, and there's very
little performance benefit from using __copy_*() over copy_*().
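
(Roughly speaking -- a paraphrase, not the literal
include/linux/uaccess.h code -- the only thing the checked variant adds
over the __ version is the access_ok() range check:)

/*
 * Conceptual sketch only.  copy_to_user() behaves roughly like this;
 * the __ variant skips just the access_ok() check, so the saving is a
 * couple of comparisons per call.
 */
static inline unsigned long
copy_to_user_sketch(void __user *to, const void *from, unsigned long n)
{
	if (!access_ok(VERIFY_WRITE, to, n))
		return n;			/* nothing copied */

	return __copy_to_user(to, from, n);	/* bytes not copied, or 0 */
}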

As far as I can see, everything looks correctly checked here, but it
took a while to convince myself of it. Having Dave's description in
the other reply certainly helped, though, thanks for that! :)

-Kees
  
Will Deacon Dec. 11, 2017, 2:07 p.m. UTC | #8
On Thu, Dec 07, 2017 at 10:50:38AM -0800, Kees Cook wrote:
> My question is mainly: why not just use copy_*() everywhere instead?
> Having these things so spread out makes it fragile, and there's very
> little performance benefit from using __copy_*() over copy_*().

I think that's more of a general question. Why not just remove the __
versions from the kernel entirely if they're not worth the perf?

Will
  
Kees Cook Dec. 11, 2017, 7:23 p.m. UTC | #9
On Mon, Dec 11, 2017 at 6:07 AM, Will Deacon <will.deacon@arm.com> wrote:
> On Thu, Dec 07, 2017 at 10:50:38AM -0800, Kees Cook wrote:
>> My question is mainly: why not just use copy_*() everywhere instead?
>> Having these things so spread out makes it fragile, and there's very
>> little performance benefit from using __copy_*() over copy_*().
>
> I think that's more of a general question. Why not just remove the __
> versions from the kernel entirely if they're not worth the perf?

That has been something Linus has strongly suggested in the past, so
I've kind of been looking for easy places to drop the __copy_*
versions. :)

-Kees
  
Will Deacon Dec. 12, 2017, 10:40 a.m. UTC | #10
On Mon, Dec 11, 2017 at 11:23:09AM -0800, Kees Cook wrote:
> On Mon, Dec 11, 2017 at 6:07 AM, Will Deacon <will.deacon@arm.com> wrote:
> > On Thu, Dec 07, 2017 at 10:50:38AM -0800, Kees Cook wrote:
> >> My question is mainly: why not just use copy_*() everywhere instead?
> >> Having these things so spread out makes it fragile, and there's very
> >> little performance benefit from using __copy_*() over copy_*().
> >
> > I think that's more of a general question. Why not just remove the __
> > versions from the kernel entirely if they're not worth the perf?
> 
> That has been something Linus has strongly suggested in the past, so
> I've kind of been looking for easy places to drop the __copy_*
> versions. :)

Tell you what then: I'll Ack the arm64 patch if it's part of a series
removing the thing entirely :p

I guess we'd still want to keep the validation of the whole sigframe though,
so we don't end up pushing half a signal stack before running into an
access_ok failure?

Will
  
Dave Martin Dec. 12, 2017, 11:11 a.m. UTC | #11
On Tue, Dec 12, 2017 at 10:40:30AM +0000, Will Deacon wrote:
> On Mon, Dec 11, 2017 at 11:23:09AM -0800, Kees Cook wrote:
> > On Mon, Dec 11, 2017 at 6:07 AM, Will Deacon <will.deacon@arm.com> wrote:
> > > On Thu, Dec 07, 2017 at 10:50:38AM -0800, Kees Cook wrote:
> > >> My question is mainly: why not just use copy_*() everywhere instead?
> > >> Having these things so spread out makes it fragile, and there's very
> > >> little performance benefit from using __copy_*() over copy_*().
> > >
> > > I think that's more of a general question. Why not just remove the __
> > > versions from the kernel entirely if they're not worth the perf?
> > 
> > That has been something Linus has strongly suggested in the past, so
> > I've kind of been looking for easy places to drop the __copy_*
> > versions. :)
> 
> Tell you what then: I'll Ack the arm64 patch if it's part of a series
> removing the thing entirely :p
> 
> I guess we'd still want to keep the validation of the whole sigframe though,
> so we don't end up pushing half a signal stack before running into an
> access_ok failure?

That's an interesting question.  In many cases access_ok() might become
redundant, but for syscalls that you don't want to have side-effects
on user memory on failure it's still relevant.

In the signal case we'd still need an encompassing access_ok() to prevent
stack guard overruns, because the signal frame can be large and isn't
written or read contiguously or in a well-defined order.

Cheers
---Dave
  
Kees Cook Dec. 12, 2017, 7:36 p.m. UTC | #12
On Tue, Dec 12, 2017 at 3:11 AM, Dave Martin <Dave.Martin@arm.com> wrote:
> On Tue, Dec 12, 2017 at 10:40:30AM +0000, Will Deacon wrote:
>> On Mon, Dec 11, 2017 at 11:23:09AM -0800, Kees Cook wrote:
>> > On Mon, Dec 11, 2017 at 6:07 AM, Will Deacon <will.deacon@arm.com> wrote:
>> > > On Thu, Dec 07, 2017 at 10:50:38AM -0800, Kees Cook wrote:
>> > >> My question is mainly: why not just use copy_*() everywhere instead?
>> > >> Having these things so spread out makes it fragile, and there's very
>> > >> little performance benefit from using __copy_*() over copy_*().
>> > >
>> > > I think that's more of a general question. Why not just remove the __
>> > > versions from the kernel entirely if they're not worth the perf?
>> >
>> > That has been something Linus has strongly suggested in the past, so
>> > I've kind of been looking for easy places to drop the __copy_*
>> > versions. :)
>>
>> Tell you what then: I'll Ack the arm64 patch if it's part of a series
>> removing the thing entirely :p
>>
>> I guess we'd still want to keep the validation of the whole sigframe though,
>> so we don't end up pushing half a signal stack before running into an
>> access_ok failure?
>
> That's an interesting question.  In many cases access_ok() might become
> redundant, but for syscalls that you don't want to have side-effects
> on user memory on failure it's still relevant.
>
> In the signal case we'd still need an encompassing access_ok() to prevent
> stack guard overruns, because the signal frame can be large and isn't
> written or read contiguously or in a well-defined order.

Yeah, I think bailing early is fine. I think the existing access_ok()
checks are fine; I wouldn't want to drop those.

-Kees
  

Patch

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 5655fe1..9bbd74c 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -63,6 +63,7 @@  extern void fpsimd_load_state(struct fpsimd_state *state);
 extern void fpsimd_thread_switch(struct task_struct *next);
 extern void fpsimd_flush_thread(void);
 
+extern void fpsimd_signal_preserve_current_state(void);
 extern void fpsimd_preserve_current_state(void);
 extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct fpsimd_state *state);
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index c91be8e..e0b5ef5 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -299,6 +299,32 @@  static void fpsimd_to_sve(struct task_struct *task)
 		       sizeof(fst->vregs[i]));
 }
 
+/*
+ * Transfer the SVE state in task->thread.sve_state to
+ * task->thread.fpsimd_state.
+ *
+ * Task can be a non-runnable task, or current.  In the latter case,
+ * softirqs (and preemption) must be disabled.
+ * task->thread.sve_state must point to at least sve_state_size(task)
+ * bytes of allocated kernel memory.
+ * task->thread.sve_state must be up to date before calling this function.
+ */
+static void sve_to_fpsimd(struct task_struct *task)
+{
+	unsigned int vq;
+	void const *sst = task->thread.sve_state;
+	struct fpsimd_state *fst = &task->thread.fpsimd_state;
+	unsigned int i;
+
+	if (!system_supports_sve())
+		return;
+
+	vq = sve_vq_from_vl(task->thread.sve_vl);
+	for (i = 0; i < 32; ++i)
+		memcpy(&fst->vregs[i], ZREG(sst, vq, i),
+		       sizeof(fst->vregs[i]));
+}
+
 #ifdef CONFIG_ARM64_SVE
 
 /*
@@ -500,9 +526,6 @@  void fpsimd_flush_thread(void)
 /*
  * Save the userland FPSIMD state of 'current' to memory, but only if the state
  * currently held in the registers does in fact belong to 'current'
- *
- * Currently, SVE tasks can't exist, so just WARN in that case.
- * Subsequent patches will add full SVE support here.
  */
 void fpsimd_preserve_current_state(void)
 {
@@ -510,16 +533,23 @@  void fpsimd_preserve_current_state(void)
 		return;
 
 	local_bh_disable();
-
-	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
-		fpsimd_save_state(&current->thread.fpsimd_state);
-
-	WARN_ON_ONCE(test_and_clear_thread_flag(TIF_SVE));
-
+	task_fpsimd_save();
 	local_bh_enable();
 }
 
 /*
+ * Like fpsimd_preserve_current_state(), but ensure that
+ * current->thread.fpsimd_state is updated so that it can be copied to
+ * the signal frame.
+ */
+void fpsimd_signal_preserve_current_state(void)
+{
+	fpsimd_preserve_current_state();
+	if (system_supports_sve() && test_thread_flag(TIF_SVE))
+		sve_to_fpsimd(current);
+}
+
+/*
  * Load the userland FPSIMD state of 'current' from memory, but only if the
  * FPSIMD state already held in the registers is /not/ the most recent FPSIMD
  * state of 'current'
@@ -554,7 +584,12 @@  void fpsimd_update_current_state(struct fpsimd_state *state)
 
 	local_bh_disable();
 
-	fpsimd_load_state(state);
+	if (system_supports_sve() && test_thread_flag(TIF_SVE)) {
+		current->thread.fpsimd_state = *state;
+		fpsimd_to_sve(current);
+	}
+	task_fpsimd_load();
+
 	if (test_and_clear_thread_flag(TIF_FOREIGN_FPSTATE)) {
 		struct fpsimd_state *st = &current->thread.fpsimd_state;
 
diff --git a/arch/arm64/kernel/signal.c b/arch/arm64/kernel/signal.c
index 4716729..64abf88 100644
--- a/arch/arm64/kernel/signal.c
+++ b/arch/arm64/kernel/signal.c
@@ -63,6 +63,7 @@  struct rt_sigframe_user_layout {
 
 	unsigned long fpsimd_offset;
 	unsigned long esr_offset;
+	unsigned long sve_offset;
 	unsigned long extra_offset;
 	unsigned long end_offset;
 };
@@ -179,9 +180,6 @@  static int preserve_fpsimd_context(struct fpsimd_context __user *ctx)
 	struct fpsimd_state *fpsimd = &current->thread.fpsimd_state;
 	int err;
 
-	/* dump the hardware registers to the fpsimd_state structure */
-	fpsimd_preserve_current_state();
-
 	/* copy the FP and status/control registers */
 	err = __copy_to_user(ctx->vregs, fpsimd->vregs, sizeof(fpsimd->vregs));
 	__put_user_error(fpsimd->fpsr, &ctx->fpsr, err);
@@ -214,6 +212,8 @@  static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
 	__get_user_error(fpsimd.fpsr, &ctx->fpsr, err);
 	__get_user_error(fpsimd.fpcr, &ctx->fpcr, err);
 
+	clear_thread_flag(TIF_SVE);
+
 	/* load the hardware registers from the fpsimd_state structure */
 	if (!err)
 		fpsimd_update_current_state(&fpsimd);
@@ -221,10 +221,118 @@  static int restore_fpsimd_context(struct fpsimd_context __user *ctx)
 	return err ? -EFAULT : 0;
 }
 
+
 struct user_ctxs {
 	struct fpsimd_context __user *fpsimd;
+	struct sve_context __user *sve;
 };
 
+#ifdef CONFIG_ARM64_SVE
+
+static int preserve_sve_context(struct sve_context __user *ctx)
+{
+	int err = 0;
+	u16 reserved[ARRAY_SIZE(ctx->__reserved)];
+	unsigned int vl = current->thread.sve_vl;
+	unsigned int vq = 0;
+
+	if (test_thread_flag(TIF_SVE))
+		vq = sve_vq_from_vl(vl);
+
+	memset(reserved, 0, sizeof(reserved));
+
+	__put_user_error(SVE_MAGIC, &ctx->head.magic, err);
+	__put_user_error(round_up(SVE_SIG_CONTEXT_SIZE(vq), 16),
+			 &ctx->head.size, err);
+	__put_user_error(vl, &ctx->vl, err);
+	BUILD_BUG_ON(sizeof(ctx->__reserved) != sizeof(reserved));
+	err |= __copy_to_user(&ctx->__reserved, reserved, sizeof(reserved));
+
+	if (vq) {
+		/*
+		 * This assumes that the SVE state has already been saved to
+		 * the task struct by calling preserve_fpsimd_context().
+		 */
+		err |= __copy_to_user((char __user *)ctx + SVE_SIG_REGS_OFFSET,
+				      current->thread.sve_state,
+				      SVE_SIG_REGS_SIZE(vq));
+	}
+
+	return err ? -EFAULT : 0;
+}
+
+static int restore_sve_fpsimd_context(struct user_ctxs *user)
+{
+	int err;
+	unsigned int vq;
+	struct fpsimd_state fpsimd;
+	struct sve_context sve;
+
+	if (__copy_from_user(&sve, user->sve, sizeof(sve)))
+		return -EFAULT;
+
+	if (sve.vl != current->thread.sve_vl)
+		return -EINVAL;
+
+	if (sve.head.size <= sizeof(*user->sve)) {
+		clear_thread_flag(TIF_SVE);
+		goto fpsimd_only;
+	}
+
+	vq = sve_vq_from_vl(sve.vl);
+
+	if (sve.head.size < SVE_SIG_CONTEXT_SIZE(vq))
+		return -EINVAL;
+
+	/*
+	 * Careful: we are about __copy_from_user() directly into
+	 * thread.sve_state with preemption enabled, so protection is
+	 * needed to prevent a racing context switch from writing stale
+	 * registers back over the new data.
+	 */
+
+	fpsimd_flush_task_state(current);
+	barrier();
+	/* From now, fpsimd_thread_switch() won't clear TIF_FOREIGN_FPSTATE */
+
+	set_thread_flag(TIF_FOREIGN_FPSTATE);
+	barrier();
+	/* From now, fpsimd_thread_switch() won't touch thread.sve_state */
+
+	sve_alloc(current);
+	err = __copy_from_user(current->thread.sve_state,
+			       (char __user const *)user->sve +
+					SVE_SIG_REGS_OFFSET,
+			       SVE_SIG_REGS_SIZE(vq));
+	if (err)
+		return -EFAULT;
+
+	set_thread_flag(TIF_SVE);
+
+fpsimd_only:
+	/* copy the FP and status/control registers */
+	/* restore_sigframe() already checked that user->fpsimd != NULL. */
+	err = __copy_from_user(fpsimd.vregs, user->fpsimd->vregs,
+			       sizeof(fpsimd.vregs));
+	__get_user_error(fpsimd.fpsr, &user->fpsimd->fpsr, err);
+	__get_user_error(fpsimd.fpcr, &user->fpsimd->fpcr, err);
+
+	/* load the hardware registers from the fpsimd_state structure */
+	if (!err)
+		fpsimd_update_current_state(&fpsimd);
+
+	return err ? -EFAULT : 0;
+}
+
+#else /* ! CONFIG_ARM64_SVE */
+
+/* Turn any non-optimised out attempts to use these into a link error: */
+extern int preserve_sve_context(void __user *ctx);
+extern int restore_sve_fpsimd_context(struct user_ctxs *user);
+
+#endif /* ! CONFIG_ARM64_SVE */
+
+
 static int parse_user_sigframe(struct user_ctxs *user,
 			       struct rt_sigframe __user *sf)
 {
@@ -237,6 +345,7 @@  static int parse_user_sigframe(struct user_ctxs *user,
 	char const __user *const sfp = (char const __user *)sf;
 
 	user->fpsimd = NULL;
+	user->sve = NULL;
 
 	if (!IS_ALIGNED((unsigned long)base, 16))
 		goto invalid;
@@ -287,6 +396,19 @@  static int parse_user_sigframe(struct user_ctxs *user,
 			/* ignore */
 			break;
 
+		case SVE_MAGIC:
+			if (!system_supports_sve())
+				goto invalid;
+
+			if (user->sve)
+				goto invalid;
+
+			if (size < sizeof(*user->sve))
+				goto invalid;
+
+			user->sve = (struct sve_context __user *)head;
+			break;
+
 		case EXTRA_MAGIC:
 			if (have_extra_context)
 				goto invalid;
@@ -363,9 +485,6 @@  static int parse_user_sigframe(struct user_ctxs *user,
 	}
 
 done:
-	if (!user->fpsimd)
-		goto invalid;
-
 	return 0;
 
 invalid:
@@ -399,8 +518,19 @@  static int restore_sigframe(struct pt_regs *regs,
 	if (err == 0)
 		err = parse_user_sigframe(&user, sf);
 
-	if (err == 0)
-		err = restore_fpsimd_context(user.fpsimd);
+	if (err == 0) {
+		if (!user.fpsimd)
+			return -EINVAL;
+
+		if (user.sve) {
+			if (!system_supports_sve())
+				return -EINVAL;
+
+			err = restore_sve_fpsimd_context(&user);
+		} else {
+			err = restore_fpsimd_context(user.fpsimd);
+		}
+	}
 
 	return err;
 }
@@ -459,6 +589,18 @@  static int setup_sigframe_layout(struct rt_sigframe_user_layout *user)
 			return err;
 	}
 
+	if (system_supports_sve()) {
+		unsigned int vq = 0;
+
+		if (test_thread_flag(TIF_SVE))
+			vq = sve_vq_from_vl(current->thread.sve_vl);
+
+		err = sigframe_alloc(user, &user->sve_offset,
+				     SVE_SIG_CONTEXT_SIZE(vq));
+		if (err)
+			return err;
+	}
+
 	return sigframe_alloc_end(user);
 }
 
@@ -500,6 +642,13 @@  static int setup_sigframe(struct rt_sigframe_user_layout *user,
 		__put_user_error(current->thread.fault_code, &esr_ctx->esr, err);
 	}
 
+	/* Scalable Vector Extension state, if present */
+	if (system_supports_sve() && err == 0 && user->sve_offset) {
+		struct sve_context __user *sve_ctx =
+			apply_user_offset(user, user->sve_offset);
+		err |= preserve_sve_context(sve_ctx);
+	}
+
 	if (err == 0 && user->extra_offset) {
 		char __user *sfp = (char __user *)user->sigframe;
 		char __user *userp =
@@ -599,6 +748,8 @@  static int setup_rt_frame(int usig, struct ksignal *ksig, sigset_t *set,
 	struct rt_sigframe __user *frame;
 	int err = 0;
 
+	fpsimd_signal_preserve_current_state();
+
 	if (get_sigframe(&user, ksig, regs))
 		return 1;
 
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index e09bf5d..22711ee 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -239,7 +239,7 @@  static int compat_preserve_vfp_context(struct compat_vfp_sigframe __user *frame)
 	 * Note that this also saves V16-31, which aren't visible
 	 * in AArch32.
 	 */
-	fpsimd_preserve_current_state();
+	fpsimd_signal_preserve_current_state();
 
 	/* Place structure header on the stack */
 	__put_user_error(magic, &frame->magic, err);