[v2] powerpc: Add missing registers to clobbers list for syscalls [BZ #27623]

Message ID 20210329133557.185178-1-msc@linux.ibm.com
State Committed
Series [v2] powerpc: Add missing registers to clobbers list for syscalls [BZ #27623]

Commit Message

Matheus Castanho March 29, 2021, 1:35 p.m. UTC
  Some registers that can be clobbered by the kernel during a syscall are
missing from the clobber list in sysdeps/unix/sysv/linux/powerpc/sysdep.h.

For syscalls using sc:
    - XER is zeroed by the kernel on exit

For syscalls using scv:
    - XER is zeroed by the kernel on exit
    - Unlike the sc case, most CR fields can be clobbered, according to
      the ELF ABI and the Linux kernel's syscall ABI for powerpc
      (linux/Documentation/powerpc/syscall64-abi.rst)

The same should apply to vsyscalls, which effectively execute a function
call but do not currently list these registers as clobbers either.

These are likely not causing issues today, but they should be added to the
clobber list just in case things change on the kernel side in the future.

Reported-by: Nicholas Piggin <npiggin@gmail.com>
---

Changes from v1:
  - Add XER, CR1, CR5-7 to vsyscall clobbers list

---
 sysdeps/unix/sysv/linux/powerpc/sysdep.h | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

--
2.30.2
  

Comments

Nicholas Piggin April 1, 2021, 7:40 a.m. UTC | #1
Excerpts from Matheus Castanho's message of March 29, 2021 11:35 pm:

As far as I can tell, this should be matching the kernel now.

Thanks,
Nick

Raphael M Zinsly April 9, 2021, 1:07 p.m. UTC | #2
The patch LGTM!

On 29/03/2021 10:35, Matheus Castanho via Libc-alpha wrote:

Best Regards,
  
Matheus Castanho April 16, 2021, 11:44 a.m. UTC | #3
Pushed as 5d61fc2021922b4f572be218dad5b299e2939346

--
Matheus Castanho
  

Patch

diff --git a/sysdeps/unix/sysv/linux/powerpc/sysdep.h b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
index 6b99464e61..2f31f9177b 100644
--- a/sysdeps/unix/sysv/linux/powerpc/sysdep.h
+++ b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
@@ -56,7 +56,9 @@ 
        "0:"								\
        : "+r" (r0), "+r" (r3), "+r" (r4), "+r" (r5),  "+r" (r6),        \
          "+r" (r7), "+r" (r8)						\
-       : : "r9", "r10", "r11", "r12", "cr0", "ctr", "lr", "memory");	\
+       : : "r9", "r10", "r11", "r12",					\
+           "cr0", "cr1", "cr5", "cr6", "cr7",				\
+           "xer", "lr", "ctr", "memory");				\
     __asm__ __volatile__ ("" : "=r" (rval) : "r" (r3));		        \
     (long int) r0 & (1 << 28) ? -rval : rval;				\
   })
@@ -86,7 +88,8 @@ 
 	 "=&r" (r6), "=&r" (r7), "=&r" (r8)	\
        : ASM_INPUT_##nr			\
        : "r9", "r10", "r11", "r12",		\
-	 "lr", "ctr", "memory");		\
+	 "cr0", "cr1", "cr5", "cr6", "cr7",	\
+	 "xer", "lr", "ctr", "memory"); 	\
     r3;					\
   })

@@ -101,7 +104,7 @@ 
 	 "=&r" (r6), "=&r" (r7), "=&r" (r8)	\
        : ASM_INPUT_##nr			\
        : "r9", "r10", "r11", "r12",		\
-	 "cr0", "ctr", "memory");		\
+	 "xer", "cr0", "ctr", "memory");	\
     r0 & (1 << 28) ? -r3 : r3;			\
   })