[3/4] powerpc: Runtime selection between sc and scv for syscalls

Message ID 20201118144703.75569-4-msc@linux.ibm.com
State Superseded
Series powerpc: Add support for system call vectored

Commit Message

Matheus Castanho Nov. 18, 2020, 2:47 p.m. UTC
  Linux kernel v5.9 added support for system calls using the scv
instruction for POWER9 and later.  The new codepath provides better
performance (see below) compared to using sc.  For the
foreseeable future, both the sc and scv mechanisms will co-exist, so this
patch enables glibc to do a runtime check and always use scv when it is
available.

Before issuing the system call to the kernel, we check hwcap2 in the TCB
for PPC_FEATURE2_SCV to see if scv is supported by the kernel.  If not,
we fall back to sc and keep the old behavior.

The kernel implements a different error return convention for scv, so
when returning from a system call we need to handle the return value
differently depending on the instruction we used to enter the kernel.

For syscalls implemented in ASM, entry and exit are implemented by
different macros (PSEUDO and PSEUDO_RET, respectively), which may be
used in sequence (e.g. for templated syscalls) or with other instructions
in between (e.g. clone).  To avoid accessing the TCB a second time on
PSEUDO_RET to check which instruction we used, the value read from
hwcap2 is cached in a non-volatile register.

This is not needed when using the INTERNAL_SYSCALL macro, since entry and
exit are bundled into the same inline asm directive.

Since system calls may be called before the TCB has been set up (e.g.
inside the dynamic loader), we also check the value of the thread
pointer before effectively accessing the TCB.  For such situations, in
which the availability of scv cannot be determined, sc is always used.

Support for scv in syscalls implemented in their own ASM file (clone and
vfork) will be added later. For now simply use sc as before.

Average performance over 1M calls for each syscall "type":
  - stat: C wrapper calling INTERNAL_SYSCALL
  - getpid: templated ASM syscall
  - syscall: call to gettid using syscall function

  Standard:
     stat : 1.573445 us / ~3619 cycles
   getpid : 0.164986 us / ~379 cycles
  syscall : 0.162743 us / ~374 cycles

  With scv:
     stat : 1.537049 us / ~3535 cycles <~ -84 cycles  / -2.32%
   getpid : 0.109923 us / ~253 cycles  <~ -126 cycles / -33.25%
  syscall : 0.116410 us / ~268 cycles  <~ -106 cycles / -28.34%

Tested on powerpc, powerpc64, powerpc64le (with and without scv)
---
 sysdeps/powerpc/powerpc32/sysdep.h            | 19 ++--
 sysdeps/powerpc/powerpc64/sysdep.h            | 90 ++++++++++++++++++-
 .../unix/sysv/linux/powerpc/powerpc64/clone.S |  9 +-
 .../unix/sysv/linux/powerpc/powerpc64/vfork.S |  6 +-
 sysdeps/unix/sysv/linux/powerpc/syscall.S     | 11 ++-
 sysdeps/unix/sysv/linux/powerpc/sysdep.h      | 78 +++++++++++-----
 6 files changed, 174 insertions(+), 39 deletions(-)
  

Comments

Florian Weimer Nov. 18, 2020, 3:16 p.m. UTC | #1
* Matheus Castanho via Libc-alpha:

> +/* Check PPC_FEATURE2_SCV bit from hwcap2 in the TCB and update CR0
> + * accordingly.  First, we check if the thread pointer != 0, so we don't try to
> + * access the TCB before it has been initialized, e.g. inside the dynamic
> + * loader.  If it is already initialized, check if scv is available.  On both
> + * negative cases, go to JUMPFALSE (label given by the macro's caller).  We
> + * save the value we read from the TCB in a non-volatile register so we can
> + * reuse it later when exiting from the syscall in PSEUDO_RET.  */

This comment style is not GNU (sorry).

I think you can avoid the conditional check and replace it with #if
IS_IN (rtld).  Then ld.so will use the old interface unconditionally,
but that should be okay.
  
Paul A. Clarke Nov. 18, 2020, 7 p.m. UTC | #2
On Wed, Nov 18, 2020 at 11:47:02AM -0300, Matheus Castanho via Libc-alpha wrote:
> Linux kernel v5.9 added support for system calls using the scv
> instruction for POWER9 and later.  The new codepath provides better
> performance (see below) if compared to using sc.  For the
> foreseeable future, both sc and scv mechanisms will co-exist, so this
> patch enables glibc to do a runtime check and always use scv when it is
> available.

nit: "always" is perhaps too strong here, as there are exceptions, as noted
in your message further below.

> Before issuing the system call to the kernel, we check hwcap2 in the TCB
> for PPC_FEATURE2_SCV to see if scv is supported by the kernel.  If not,
> we fallback to sc and keep the old behavior.
> 
> The kernel implements a different error return convention for scv, so
> when returning from a system call we need to handle the return value
> differently depending on the instruction we used to enter the kernel.
> 
> For syscalls implemented in ASM, entry and exit are implemented by
> different macros (PSEUDO and PSEUDO_RET, resp.), which may be used in
> sequence (e.g. for templated syscalls) or with other instructions in
> between (e.g. clone).  To avoid accessing the TCB a second time on
> PSEUDO_RET to check which instruction we used, the value read from
> hwcap2 is cached on a non-volatile register.
> 
> This is not needed when using INTERNAL_SYSCALL macro, since entry and
> exit are bundled into the same inline asm directive.
> 
> Since system calls may be called before the TCB has been setup (e.g.
> inside the dynamic loader), we also check the value of the thread
> pointer before effectively accessing the TCB.  For such situations in
> which the availability of scv cannot be determined, sc is always used.
> 
> Support for scv in syscalls implemented in their own ASM file (clone and
> vfork) will be added later. For now simply use sc as before.
> 
> Average performance over 1M calls for each syscall "type":
>   - stat: C wrapper calling INTERNAL_SYSCALL
>   - getpid: templated ASM syscall
>   - syscall: call to gettid using syscall function
> 
>   Standard:
>      stat : 1.573445 us / ~3619 cycles
>    getpid : 0.164986 us / ~379 cycles
>   syscall : 0.162743 us / ~374 cycles
> 
>   With scv:
>      stat : 1.537049 us / ~3535 cycles <~ -84 cycles  / -2.32%
>    getpid : 0.109923 us / ~253 cycles  <~ -126 cycles / -33.25%
>   syscall : 0.116410 us / ~268 cycles  <~ -106 cycles / -28.34%
> 
> Tested on powerpc, powerpc64, powerpc64le (with and without scv)
> ---
>  sysdeps/powerpc/powerpc32/sysdep.h            | 19 ++--
>  sysdeps/powerpc/powerpc64/sysdep.h            | 90 ++++++++++++++++++-
>  .../unix/sysv/linux/powerpc/powerpc64/clone.S |  9 +-
>  .../unix/sysv/linux/powerpc/powerpc64/vfork.S |  6 +-
>  sysdeps/unix/sysv/linux/powerpc/syscall.S     | 11 ++-
>  sysdeps/unix/sysv/linux/powerpc/sysdep.h      | 78 +++++++++++-----
>  6 files changed, 174 insertions(+), 39 deletions(-)
> 
> diff --git a/sysdeps/powerpc/powerpc32/sysdep.h b/sysdeps/powerpc/powerpc32/sysdep.h
> index 829eec266a..bff18bdc8b 100644
> --- a/sysdeps/powerpc/powerpc32/sysdep.h
> +++ b/sysdeps/powerpc/powerpc32/sysdep.h
> @@ -90,9 +90,12 @@ GOT_LABEL:			;					      \
>    cfi_endproc;								      \
>    ASM_SIZE_DIRECTIVE(name)
> 
> -#define DO_CALL(syscall)						      \
> -    li 0,syscall;							      \
> -    sc
> +#define DO_CALL(syscall) \
> +	li 0,syscall; \
> +	DO_CALL_SC

nit: there are some innocuous whitespace changes which could be avoided to minimize
the diff (moving the '\' closer and changing from spaces to a tab).

> +
> +#define DO_CALL_SC \
> +	sc
> 
>  #undef JUMPTARGET
>  #ifdef PIC
> @@ -106,14 +109,20 @@ GOT_LABEL:			;					      \
>  # define HIDDEN_JUMPTARGET(name) __GI_##name##@local
>  #endif
> 
> +#define TAIL_CALL_SYSCALL_ERROR \
> +    b __syscall_error@local
> +
>  #define PSEUDO(name, syscall_name, args)				      \
>    .section ".text";							      \
>    ENTRY (name)								      \
>      DO_CALL (SYS_ify (syscall_name));
> 
> +#define RET_SC \
> +    bnslr+;
> +
>  #define PSEUDO_RET							      \
> -    bnslr+;								      \
> -    b __syscall_error@local
> +    RET_SC;								      \
> +    TAIL_CALL_SYSCALL_ERROR
>  #define ret PSEUDO_RET
> 
>  #undef	PSEUDO_END
> diff --git a/sysdeps/powerpc/powerpc64/sysdep.h b/sysdeps/powerpc/powerpc64/sysdep.h
> index d557098898..2d7dde64da 100644
> --- a/sysdeps/powerpc/powerpc64/sysdep.h
> +++ b/sysdeps/powerpc/powerpc64/sysdep.h
> @@ -17,6 +17,7 @@
>     <https://www.gnu.org/licenses/>.  */
> 
>  #include <sysdeps/powerpc/sysdep.h>
> +#include <tls.h>
> 
>  #ifdef __ASSEMBLER__
> 
> @@ -263,10 +264,72 @@ LT_LABELSUFFIX(name,_name_end): ; \
>    TRACEBACK_MASK(name,mask);	\
>    END_2(name)
> 
> +/* We will allocate a new frame to save LR and the non-volatile register used to
> +   read the TCB when checking for scv support on syscall code.  We actually just
> +   need the minimum frame size plus room for 1 reg (64 bits).  But the ABI

nit: Since everything is in bytes below, suggest changing "64 bits" to "8 bytes".

> +   mandates stack frames should be aligned at 16 Bytes, so we end up allocating
> +   a bit more space then what will actually be used.  */
> +#define SCV_FRAME_SIZE (FRAME_MIN_SIZE+16)

8 for the register save area + 8 more to maintain 16-byte alignment.  OK.

> +#define SCV_FRAME_NVOLREG_SAVE FRAME_MIN_SIZE
> +
> +/* Allocate frame and save register */
> +#define NVOLREG_SAVE \
> +    stdu r1,-SCV_FRAME_SIZE(r1); \
> +    std r31,SCV_FRAME_NVOLREG_SAVE(r1); \
> +    cfi_adjust_cfa_offset(SCV_FRAME_SIZE);
> +
> +/* Restore register and destroy frame */
> +#define NVOLREG_RESTORE	\
> +    ld r31,SCV_FRAME_NVOLREG_SAVE(r1); \
> +    addi r1,r1,SCV_FRAME_SIZE; \
> +    cfi_adjust_cfa_offset(-SCV_FRAME_SIZE);
> +
> +/* Check PPC_FEATURE2_SCV bit from hwcap2 in the TCB and update CR0
> + * accordingly.  First, we check if the thread pointer != 0, so we don't try to
> + * access the TCB before it has been initialized, e.g. inside the dynamic
> + * loader.  If it is already initialized, check if scv is available.  On both
> + * negative cases, go to JUMPFALSE (label given by the macro's caller).  We
> + * save the value we read from the TCB in a non-volatile register so we can
> + * reuse it later when exiting from the syscall in PSEUDO_RET.  */

Florian already mentioned removing the leading '*' for proper formatting.

> +    .macro CHECK_SCV_SUPPORT REG JUMPFALSE
> +
> +    /* Check if thread pointer has already been setup */
> +    cmpdi r13,0
> +    beq \JUMPFALSE
> +
> +    /* Read PPC_FEATURE2_SCV from TCB and store it in REG */
> +    ld \REG,TCB_HWCAP(PT_THREAD_POINTER)
> +    andis. \REG,\REG,PPC_FEATURE2_SCV>>16
> +
> +    beq \JUMPFALSE
> +    .endm
> +
> +/* Before doing the syscall, check if we can use scv.  scv is supported by P9
> + * and later with Linux v5.9 and later.  If so, use it.  Otherwise, fallback to
> + * sc.  We use a non-volatile register to save hwcap2 from the TCB, so we need
> + * to save its content beforehand. */

Here, too. Also need one more space after '.'.

>  #define DO_CALL(syscall) \
> -    li 0,syscall; \
> +    li r0,syscall; \
> +    NVOLREG_SAVE; \
> +    CHECK_SCV_SUPPORT r31 0f; \
> +    DO_CALL_SCV; \
> +    b 1f; \
> +0:  DO_CALL_SC; \
> +1:
> +
> +/* DO_CALL_SC and DO_CALL_SCV expect the syscall number to be loaded on r0.  */

nit: s/loaded on/in/

rest looks OK.

With the comments fixed, LGTM. Fixing the nits is up to you.

PC
  
Matheus Castanho Nov. 19, 2020, 8:29 p.m. UTC | #3
On 11/18/20 12:16 PM, Florian Weimer wrote:
> * Matheus Castanho via Libc-alpha:
> 
>> +/* Check PPC_FEATURE2_SCV bit from hwcap2 in the TCB and update CR0
>> + * accordingly.  First, we check if the thread pointer != 0, so we don't try to
>> + * access the TCB before it has been initialized, e.g. inside the dynamic
>> + * loader.  If it is already initialized, check if scv is available.  On both
>> + * negative cases, go to JUMPFALSE (label given by the macro's caller).  We
>> + * save the value we read from the TCB in a non-volatile register so we can
>> + * reuse it later when exiting from the syscall in PSEUDO_RET.  */
> 
> This comment style is not GNU (sorry).

Oops, fixed.

> 
> I think you can avoid the conditional check and replace it with #if
> IS_IN (rtld).  Then ld.so will use the old interface unconditionally,
> but that should be okay.
> 

That should work for shared libc, but in the static case we may also hit the same problem:
trying to access the TLS to read hwcap2 before it has been initialized, but this time in csu/libc-tls.c

Is there a way to also check if we are in the static startup code at compile time? If not, I'm afraid
I'll have to keep the check for the thread pointer.

Thanks,
Matheus Castanho
  
Matheus Castanho Nov. 19, 2020, 8:34 p.m. UTC | #4
On 11/18/20 4:00 PM, Paul A. Clarke wrote:
> On Wed, Nov 18, 2020 at 11:47:02AM -0300, Matheus Castanho via Libc-alpha wrote:
>> Linux kernel v5.9 added support for system calls using the scv
>> instruction for POWER9 and later.  The new codepath provides better
>> performance (see below) if compared to using sc.  For the
>> foreseeable future, both sc and scv mechanisms will co-exist, so this
>> patch enables glibc to do a runtime check and always use scv when it is
>> available.
> 
> nit: "always" is perhaps too strong here, as there are exceptions, as noted
> in your message further below.

Right. I'll change that on v2.

> 
>> Before issuing the system call to the kernel, we check hwcap2 in the TCB
>> for PPC_FEATURE2_SCV to see if scv is supported by the kernel.  If not,
>> we fallback to sc and keep the old behavior.
>>
>> The kernel implements a different error return convention for scv, so
>> when returning from a system call we need to handle the return value
>> differently depending on the instruction we used to enter the kernel.
>>
>> For syscalls implemented in ASM, entry and exit are implemented by
>> different macros (PSEUDO and PSEUDO_RET, resp.), which may be used in
>> sequence (e.g. for templated syscalls) or with other instructions in
>> between (e.g. clone).  To avoid accessing the TCB a second time on
>> PSEUDO_RET to check which instruction we used, the value read from
>> hwcap2 is cached on a non-volatile register.
>>
>> This is not needed when using INTERNAL_SYSCALL macro, since entry and
>> exit are bundled into the same inline asm directive.
>>
>> Since system calls may be called before the TCB has been setup (e.g.
>> inside the dynamic loader), we also check the value of the thread
>> pointer before effectively accessing the TCB.  For such situations in
>> which the availability of scv cannot be determined, sc is always used.
>>
>> Support for scv in syscalls implemented in their own ASM file (clone and
>> vfork) will be added later. For now simply use sc as before.
>>
>> Average performance over 1M calls for each syscall "type":
>>   - stat: C wrapper calling INTERNAL_SYSCALL
>>   - getpid: templated ASM syscall
>>   - syscall: call to gettid using syscall function
>>
>>   Standard:
>>      stat : 1.573445 us / ~3619 cycles
>>    getpid : 0.164986 us / ~379 cycles
>>   syscall : 0.162743 us / ~374 cycles
>>
>>   With scv:
>>      stat : 1.537049 us / ~3535 cycles <~ -84 cycles  / -2.32%
>>    getpid : 0.109923 us / ~253 cycles  <~ -126 cycles / -33.25%
>>   syscall : 0.116410 us / ~268 cycles  <~ -106 cycles / -28.34%
>>
>> Tested on powerpc, powerpc64, powerpc64le (with and without scv)
>> ---
>>  sysdeps/powerpc/powerpc32/sysdep.h            | 19 ++--
>>  sysdeps/powerpc/powerpc64/sysdep.h            | 90 ++++++++++++++++++-
>>  .../unix/sysv/linux/powerpc/powerpc64/clone.S |  9 +-
>>  .../unix/sysv/linux/powerpc/powerpc64/vfork.S |  6 +-
>>  sysdeps/unix/sysv/linux/powerpc/syscall.S     | 11 ++-
>>  sysdeps/unix/sysv/linux/powerpc/sysdep.h      | 78 +++++++++++-----
>>  6 files changed, 174 insertions(+), 39 deletions(-)
>>
>> diff --git a/sysdeps/powerpc/powerpc32/sysdep.h b/sysdeps/powerpc/powerpc32/sysdep.h
>> index 829eec266a..bff18bdc8b 100644
>> --- a/sysdeps/powerpc/powerpc32/sysdep.h
>> +++ b/sysdeps/powerpc/powerpc32/sysdep.h
>> @@ -90,9 +90,12 @@ GOT_LABEL:			;					      \
>>    cfi_endproc;								      \
>>    ASM_SIZE_DIRECTIVE(name)
>>
>> -#define DO_CALL(syscall)						      \
>> -    li 0,syscall;							      \
>> -    sc
>> +#define DO_CALL(syscall) \
>> +	li 0,syscall; \
>> +	DO_CALL_SC
> 
> nit: there are some innocuous whitespace changes which could be avoided to minimize
> the diff (moving the '\' closer and changing from spaces to a tab).
> 

Fixed.

>> +
>> +#define DO_CALL_SC \
>> +	sc
>>
>>  #undef JUMPTARGET
>>  #ifdef PIC
>> @@ -106,14 +109,20 @@ GOT_LABEL:			;					      \
>>  # define HIDDEN_JUMPTARGET(name) __GI_##name##@local
>>  #endif
>>
>> +#define TAIL_CALL_SYSCALL_ERROR \
>> +    b __syscall_error@local
>> +
>>  #define PSEUDO(name, syscall_name, args)				      \
>>    .section ".text";							      \
>>    ENTRY (name)								      \
>>      DO_CALL (SYS_ify (syscall_name));
>>
>> +#define RET_SC \
>> +    bnslr+;
>> +
>>  #define PSEUDO_RET							      \
>> -    bnslr+;								      \
>> -    b __syscall_error@local
>> +    RET_SC;								      \
>> +    TAIL_CALL_SYSCALL_ERROR
>>  #define ret PSEUDO_RET
>>
>>  #undef	PSEUDO_END
>> diff --git a/sysdeps/powerpc/powerpc64/sysdep.h b/sysdeps/powerpc/powerpc64/sysdep.h
>> index d557098898..2d7dde64da 100644
>> --- a/sysdeps/powerpc/powerpc64/sysdep.h
>> +++ b/sysdeps/powerpc/powerpc64/sysdep.h
>> @@ -17,6 +17,7 @@
>>     <https://www.gnu.org/licenses/>.  */
>>
>>  #include <sysdeps/powerpc/sysdep.h>
>> +#include <tls.h>
>>
>>  #ifdef __ASSEMBLER__
>>
>> @@ -263,10 +264,72 @@ LT_LABELSUFFIX(name,_name_end): ; \
>>    TRACEBACK_MASK(name,mask);	\
>>    END_2(name)
>>
>> +/* We will allocate a new frame to save LR and the non-volatile register used to
>> +   read the TCB when checking for scv support on syscall code.  We actually just
>> +   need the minimum frame size plus room for 1 reg (64 bits).  But the ABI
> 
> nit: Since everything is in bytes below, suggest changing "64 bits" to "8 bytes".

Fixed.

> 
>> +   mandates stack frames should be aligned at 16 Bytes, so we end up allocating
>> +   a bit more space then what will actually be used.  */
>> +#define SCV_FRAME_SIZE (FRAME_MIN_SIZE+16)
> 
> 8 for the register save area + 8 more to maintain 16-byte alignment.  OK.
> 
>> +#define SCV_FRAME_NVOLREG_SAVE FRAME_MIN_SIZE
>> +
>> +/* Allocate frame and save register */
>> +#define NVOLREG_SAVE \
>> +    stdu r1,-SCV_FRAME_SIZE(r1); \
>> +    std r31,SCV_FRAME_NVOLREG_SAVE(r1); \
>> +    cfi_adjust_cfa_offset(SCV_FRAME_SIZE);
>> +
>> +/* Restore register and destroy frame */
>> +#define NVOLREG_RESTORE	\
>> +    ld r31,SCV_FRAME_NVOLREG_SAVE(r1); \
>> +    addi r1,r1,SCV_FRAME_SIZE; \
>> +    cfi_adjust_cfa_offset(-SCV_FRAME_SIZE);
>> +
>> +/* Check PPC_FEATURE2_SCV bit from hwcap2 in the TCB and update CR0
>> + * accordingly.  First, we check if the thread pointer != 0, so we don't try to
>> + * access the TCB before it has been initialized, e.g. inside the dynamic
>> + * loader.  If it is already initialized, check if scv is available.  On both
>> + * negative cases, go to JUMPFALSE (label given by the macro's caller).  We
>> + * save the value we read from the TCB in a non-volatile register so we can
>> + * reuse it later when exiting from the syscall in PSEUDO_RET.  */
> 
> Florian already mentioned removing the leading '*' for proper formatting.

Fixed.

> 
>> +    .macro CHECK_SCV_SUPPORT REG JUMPFALSE
>> +
>> +    /* Check if thread pointer has already been setup */
>> +    cmpdi r13,0
>> +    beq \JUMPFALSE
>> +
>> +    /* Read PPC_FEATURE2_SCV from TCB and store it in REG */
>> +    ld \REG,TCB_HWCAP(PT_THREAD_POINTER)
>> +    andis. \REG,\REG,PPC_FEATURE2_SCV>>16
>> +
>> +    beq \JUMPFALSE
>> +    .endm
>> +
>> +/* Before doing the syscall, check if we can use scv.  scv is supported by P9
>> + * and later with Linux v5.9 and later.  If so, use it.  Otherwise, fallback to
>> + * sc.  We use a non-volatile register to save hwcap2 from the TCB, so we need
>> + * to save its content beforehand. */
> 
> Here, too. Also need one more space after '.'.

Fixed and fixed.

> 
>>  #define DO_CALL(syscall) \
>> -    li 0,syscall; \
>> +    li r0,syscall; \
>> +    NVOLREG_SAVE; \
>> +    CHECK_SCV_SUPPORT r31 0f; \
>> +    DO_CALL_SCV; \
>> +    b 1f; \
>> +0:  DO_CALL_SC; \
>> +1:
>> +
>> +/* DO_CALL_SC and DO_CALL_SCV expect the syscall number to be loaded on r0.  */
> 
> nit: s/loaded on/in/

Fixed too.

> 
> rest looks OK.
> 
> With the comments fixed, LGTM. Fixing the nits is up to you.
> 
> PC
> 

Queued changes for v2.

Thanks,
Matheus Castanho
  
Florian Weimer Nov. 19, 2020, 8:35 p.m. UTC | #5
* Matheus Castanho:

> That should work for shared libc, but in the static case we may also
> hit the same problem: trying to access the TLS to read hwcap2 before
> it has been initialized, but this time in csu/libc-tls.c

Ahh.

> Is there a way to also check if we are in the static startup code at
> compile time? If not, I'm afraid I'll have to keep the check for the
> thread pointer.

Is the thread pointer in a regular register?  Then you could install a
fake TCB early on that has a zero bit in the right place.

The other option would be to always use the old interface for !SHARED.
Just saying. 8-)
  
Matheus Castanho Nov. 23, 2020, 6 p.m. UTC | #6
Hi Florian,

On 11/19/20 5:35 PM, Florian Weimer wrote:
> * Matheus Castanho:
> 
>> That should work for shared libc, but in the static case we may also
>> hit the same problem: trying to access the TLS to read hwcap2 before
>> it has been initialized, but this time in csu/libc-tls.c
> 
> Ahh.
> 
>> Is there a way to also check if we are in the static startup code at
>> compile time? If not, I'm afraid I'll have to keep the check for the
>> thread pointer.
> 
> Is the thread pointer in a regular register?  Then you could install a
> fake TCB early on that has a zero bit in the right place.
> 
> The other option would be to always use the old interface for !SHARED.
> Just saying. 8-)
> 

I believe adding the fake TCB is a bit out of the scope of this patch, and I'd
also prefer to keep the same behavior for both static and shared libcs, so we
avoid surprises in the future.

Do you see this as a blocker for merging this patch?

Thanks,
Matheus Castanho
  
Matheus Castanho Dec. 1, 2020, 12:50 p.m. UTC | #7
Gentle ping.

On 11/23/20 3:00 PM, Matheus Castanho via Libc-alpha wrote:
> Hi Florian,
> 
> On 11/19/20 5:35 PM, Florian Weimer wrote:
>> * Matheus Castanho:
>>
>>> That should work for shared libc, but in the static case we may also
>>> hit the same problem: trying to access the TLS to read hwcap2 before
>>> it has been initialized, but this time in csu/libc-tls.c
>>
>> Ahh.
>>
>>> Is there a way to also check if we are in the static startup code at
>>> compile time? If not, I'm afraid I'll have to keep the check for the
>>> thread pointer.
>>
>> Is the thread pointer in a regular register?  Then you could install a
>> fake TCB early on that has a zero bit in the right place.
>>
>> The other option would be to always use the old interface for !SHARED.
>> Just saying. 8-)
>>
> 
> I believe adding the fake TCB is a bit out of the scope of this patch, and I'd
> also prefer to keep the same behavior for both static and shared libcs, so we
> avoid surprises in the future.
> 
> Do you see this as a blocker for merging this patch?

Florian, any thoughts?

Thanks,
Matheus Castanho
  
Florian Weimer Dec. 1, 2020, 1:11 p.m. UTC | #8
* Matheus Castanho via Libc-alpha:

> Hi Florian,
>
> On 11/19/20 5:35 PM, Florian Weimer wrote:
>> * Matheus Castanho:
>> 
>>> That should work for shared libc, but in the static case we may also
>>> hit the same problem: trying to access the TLS to read hwcap2 before
>>> it has been initialized, but this time in csu/libc-tls.c
>> 
>> Ahh.
>> 
>>> Is there a way to also check if we are in the static startup code at
>>> compile time? If not, I'm afraid I'll have to keep the check for the
>>> thread pointer.
>> 
>> Is the thread pointer in a regular register?  Then you could install a
>> fake TCB early on that has a zero bit in the right place.
>> 
>> The other option would be to always use the old interface for !SHARED.
>> Just saying. 8-)
>> 
>
> I believe adding the fake TCB is a bit out of the scope of this patch, and I'd
> also prefer to keep the same behavior for both static and shared libcs, so we
> avoid surprises in the future.
>
> Do you see this as a blocker for merging this patch?

I think the run-time check in the shared builds is unnecessary.  I don't
have an opinion on the static case, but I think ld.so should use the
legacy interface unconditionally, and shared libc.so should use a
dynamic check while assuming that the TCB is initialized.  If we can
avoid making things harder for the branch predictor, we should do so.

In the end, it's your port though, and I don't have a strong opinion
here.

Thanks,
Florian
  
Adhemerval Zanella Netto Dec. 1, 2020, 1:35 p.m. UTC | #9
On 01/12/2020 10:11, Florian Weimer via Libc-alpha wrote:
> * Matheus Castanho via Libc-alpha:
> 
>> Hi Florian,
>>
>> On 11/19/20 5:35 PM, Florian Weimer wrote:
>>> * Matheus Castanho:
>>>
>>>> That should work for shared libc, but in the static case we may also
>>>> hit the same problem: trying to access the TLS to read hwcap2 before
>>>> it has been initialized, but this time in csu/libc-tls.c
>>>
>>> Ahh.
>>>
>>>> Is there a way to also check if we are in the static startup code at
>>>> compile time? If not, I'm afraid I'll have to keep the check for the
>>>> thread pointer.
>>>
>>> Is the thread pointer in a regular register?  Then you could install a
>>> fake TCB early on that has a zero bit in the right place.
>>>
>>> The other option would be to always use the old interface for !SHARED.
>>> Just saying. 8-)
>>>
>>
>> I believe adding the fake TCB is a bit out of the scope of this patch, and I'd
>> also prefer to keep the same behavior for both static and shared libcs, so we
>> avoid surprises in the future.
>>
>> Do you see this as a blocker for merging this patch?
> 
> I think the run-time check in the shared builds is unnecessary.  I don't
> have an opinion on the static case, but I think ld.so should use the
> legacy interface unconditionally, and shared libc.so should use a
> dynamic check while assuming that the TCB is initialized.  If we can
> avoid making things harder for the branch predict, we should do so.

I agree, for the static case it might be more complicated to disentangle
the loader code, so the check might be required.

> 
> In the end, it's your port though, and I don't have a strong opinion
> here.
> 
> Thanks,
> Florian
>
  
Matheus Castanho Dec. 3, 2020, 5:19 p.m. UTC | #10
On 12/1/20 10:35 AM, Adhemerval Zanella via Libc-alpha wrote:
> 
> 
> On 01/12/2020 10:11, Florian Weimer via Libc-alpha wrote:
>> * Matheus Castanho via Libc-alpha:
>>
>>> Hi Florian,
>>>
>>> On 11/19/20 5:35 PM, Florian Weimer wrote:
>>>> * Matheus Castanho:
>>>>
>>>>> That should work for shared libc, but in the static case we may also
>>>>> hit the same problem: trying to access the TLS to read hwcap2 before
>>>>> it has been initialized, but this time in csu/libc-tls.c
>>>>
>>>> Ahh.
>>>>
>>>>> Is there a way to also check if we are in the static startup code at
>>>>> compile time? If not, I'm afraid I'll have to keep the check for the
>>>>> thread pointer.
>>>>
>>>> Is the thread pointer in a regular register?  Then you could install a
>>>> fake TCB early on that has a zero bit in the right place.
>>>>
>>>> The other option would be to always use the old interface for !SHARED.
>>>> Just saying. 8-)
>>>>
>>>
>>> I believe adding the fake TCB is a bit out of the scope of this patch, and I'd
>>> also prefer to keep the same behavior for both static and shared libcs, so we
>>> avoid surprises in the future.
>>>
>>> Do you see this as a blocker for merging this patch?
>>
>> I think the run-time check in the shared builds is unnecessary.  I don't
>> have an opinion on the static case, but I think ld.so should use the
>> legacy interface unconditionally, and shared libc.so should use a
>> dynamic check while assuming that the TCB is initialized.  If we can
>> avoid making things harder for the branch predict, we should do so.
> I agree, for static case it might be more complicate to disentangle the
> loader code so the check might be required.
> 

[snip]

Ok, thanks for the feedback. I believe I was able to address this in v2 [0].

[0] https://sourceware.org/pipermail/libc-alpha/2020-December/120353.html

Thanks,
Matheus Castanho
  

Patch

diff --git a/sysdeps/powerpc/powerpc32/sysdep.h b/sysdeps/powerpc/powerpc32/sysdep.h
index 829eec266a..bff18bdc8b 100644
--- a/sysdeps/powerpc/powerpc32/sysdep.h
+++ b/sysdeps/powerpc/powerpc32/sysdep.h
@@ -90,9 +90,12 @@  GOT_LABEL:			;					      \
   cfi_endproc;								      \
   ASM_SIZE_DIRECTIVE(name)
 
-#define DO_CALL(syscall)						      \
-    li 0,syscall;							      \
-    sc
+#define DO_CALL(syscall) \
+	li 0,syscall; \
+	DO_CALL_SC
+
+#define DO_CALL_SC \
+	sc
 
 #undef JUMPTARGET
 #ifdef PIC
@@ -106,14 +109,20 @@  GOT_LABEL:			;					      \
 # define HIDDEN_JUMPTARGET(name) __GI_##name##@local
 #endif
 
+#define TAIL_CALL_SYSCALL_ERROR \
+    b __syscall_error@local
+
 #define PSEUDO(name, syscall_name, args)				      \
   .section ".text";							      \
   ENTRY (name)								      \
     DO_CALL (SYS_ify (syscall_name));
 
+#define RET_SC \
+    bnslr+;
+
 #define PSEUDO_RET							      \
-    bnslr+;								      \
-    b __syscall_error@local
+    RET_SC;								      \
+    TAIL_CALL_SYSCALL_ERROR
 #define ret PSEUDO_RET
 
 #undef	PSEUDO_END
diff --git a/sysdeps/powerpc/powerpc64/sysdep.h b/sysdeps/powerpc/powerpc64/sysdep.h
index d557098898..2d7dde64da 100644
--- a/sysdeps/powerpc/powerpc64/sysdep.h
+++ b/sysdeps/powerpc/powerpc64/sysdep.h
@@ -17,6 +17,7 @@ 
    <https://www.gnu.org/licenses/>.  */
 
 #include <sysdeps/powerpc/sysdep.h>
+#include <tls.h>
 
 #ifdef __ASSEMBLER__
 
@@ -263,10 +264,72 @@  LT_LABELSUFFIX(name,_name_end): ; \
   TRACEBACK_MASK(name,mask);	\
   END_2(name)
 
+/* We will allocate a new frame to save LR and the non-volatile register used to
+   read the TCB when checking for scv support on syscall code.  We actually just
+   need the minimum frame size plus room for 1 reg (64 bits).  But the ABI
+   mandates stack frames should be aligned at 16 Bytes, so we end up allocating
+   a bit more space then what will actually be used.  */
+#define SCV_FRAME_SIZE (FRAME_MIN_SIZE+16)
+#define SCV_FRAME_NVOLREG_SAVE FRAME_MIN_SIZE
+
+/* Allocate frame and save register */
+#define NVOLREG_SAVE \
+    stdu r1,-SCV_FRAME_SIZE(r1); \
+    std r31,SCV_FRAME_NVOLREG_SAVE(r1); \
+    cfi_adjust_cfa_offset(SCV_FRAME_SIZE);
+
+/* Restore register and destroy frame */
+#define NVOLREG_RESTORE	\
+    ld r31,SCV_FRAME_NVOLREG_SAVE(r1); \
+    addi r1,r1,SCV_FRAME_SIZE; \
+    cfi_adjust_cfa_offset(-SCV_FRAME_SIZE);
+
+/* Check PPC_FEATURE2_SCV bit from hwcap2 in the TCB and update CR0
+ * accordingly.  First, we check if the thread pointer != 0, so we don't try to
+ * access the TCB before it has been initialized, e.g. inside the dynamic
+ * loader.  If it is already initialized, check if scv is available.  On both
+ * negative cases, go to JUMPFALSE (label given by the macro's caller).  We
+ * save the value we read from the TCB in a non-volatile register so we can
+ * reuse it later when exiting from the syscall in PSEUDO_RET.  */
+    .macro CHECK_SCV_SUPPORT REG JUMPFALSE
+
+    /* Check if thread pointer has already been setup */
+    cmpdi r13,0
+    beq \JUMPFALSE
+
+    /* Read PPC_FEATURE2_SCV from TCB and store it in REG */
+    ld \REG,TCB_HWCAP(PT_THREAD_POINTER)
+    andis. \REG,\REG,PPC_FEATURE2_SCV>>16
+
+    beq \JUMPFALSE
+    .endm
+
+/* Before issuing the syscall, check whether we can use scv.  scv is supported
+   on POWER9 and later, when running Linux v5.9 or later.  If so, use it;
+   otherwise, fall back to sc.  We use a non-volatile register to save hwcap2
+   from the TCB, so its previous content needs to be saved beforehand.  */
 #define DO_CALL(syscall) \
-    li 0,syscall; \
+    li r0,syscall; \
+    NVOLREG_SAVE; \
+    CHECK_SCV_SUPPORT r31 0f; \
+    DO_CALL_SCV; \
+    b 1f; \
+0:  DO_CALL_SC; \
+1:
+
+/* DO_CALL_SC and DO_CALL_SCV expect the syscall number to be loaded in r0.  */
+#define DO_CALL_SC \
     sc
 
+#define DO_CALL_SCV \
+    mflr r9; \
+    std r9,FRAME_LR_SAVE(r1); \
+    cfi_offset(lr,FRAME_LR_SAVE); \
+    scv 0; \
+    ld r9,FRAME_LR_SAVE(r1); \
+    mtlr r9; \
+    cfi_restore(lr);
+
 /* ppc64 is always PIC */
 #undef JUMPTARGET
 #define JUMPTARGET(name) FUNC_LABEL(name)
@@ -304,9 +367,26 @@  LT_LABELSUFFIX(name,_name_end): ; \
     .endif
 #endif
 
+/* This should only be used after a DO_CALL.  In that case, r31 contains the
+   PPC_FEATURE2_SCV bit read from hwcap2 by CHECK_SCV_SUPPORT.  If it is set,
+   we know the kernel was entered using scv, so handle the return code
+   accordingly.  */
 #define PSEUDO_RET \
-    bnslr+; \
-    TAIL_CALL_SYSCALL_ERROR
+    cmpdi cr5,r31,0; \
+    NVOLREG_RESTORE; \
+    beq cr5,0f; \
+    RET_SCV; \
+    b 1f; \
+0:  RET_SC; \
+1:  TAIL_CALL_SYSCALL_ERROR
+
+#define RET_SCV \
+    cmpdi r3,0; \
+    bgelr+; \
+    neg r3,r3;
+
+#define RET_SC \
+    bnslr+;
 
 #define ret PSEUDO_RET
 
@@ -319,7 +399,9 @@  LT_LABELSUFFIX(name,_name_end): ; \
   ENTRY (name);						\
   DO_CALL (SYS_ify (syscall_name))
 
+/* This should only be called after a DO_CALL.  */
 #define PSEUDO_RET_NOERRNO \
+    NVOLREG_RESTORE; \
     blr
 
 #define ret_NOERRNO PSEUDO_RET_NOERRNO
@@ -333,7 +415,9 @@  LT_LABELSUFFIX(name,_name_end): ; \
   ENTRY (name);						\
   DO_CALL (SYS_ify (syscall_name))
 
+/* This should only be called after a DO_CALL.  */
 #define PSEUDO_RET_ERRVAL \
+    NVOLREG_RESTORE; \
     blr
 
 #define ret_ERRVAL PSEUDO_RET_ERRVAL
diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S b/sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S
index b30641c805..fc496fa671 100644
--- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S
+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S
@@ -68,7 +68,8 @@  ENTRY (__clone)
 	cfi_endproc
 
 	/* Do the call.  */
-	DO_CALL(SYS_ify(clone))
+	li 	r0,SYS_ify(clone)
+	DO_CALL_SC
 
 	/* Check for child process.  */
 	cmpdi	cr1,r3,0
@@ -82,7 +83,8 @@  ENTRY (__clone)
 	bctrl
 	ld	r2,FRAME_TOC_SAVE(r1)
 
-	DO_CALL(SYS_ify(exit))
+	li	r0,(SYS_ify(exit))
+	DO_CALL_SC
 	/* We won't ever get here but provide a nop so that the linker
 	   will insert a toc adjusting stub if necessary.  */
 	nop
@@ -104,7 +106,8 @@  L(parent):
 	cfi_restore(r30)
 	cfi_restore(r31)
 
-	PSEUDO_RET
+	RET_SC
+	TAIL_CALL_SYSCALL_ERROR
 
 END (__clone)
 
diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S b/sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S
index 17199fb14a..a71f69e929 100644
--- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S
+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/vfork.S
@@ -28,9 +28,11 @@ 
 ENTRY (__vfork)
 	CALL_MCOUNT 0
 
-	DO_CALL (SYS_ify (vfork))
+	li r0,SYS_ify (vfork)
+	DO_CALL_SC
 
-	PSEUDO_RET
+	RET_SC
+	TAIL_CALL_SYSCALL_ERROR
 
 PSEUDO_END (__vfork)
 libc_hidden_def (__vfork)
diff --git a/sysdeps/unix/sysv/linux/powerpc/syscall.S b/sysdeps/unix/sysv/linux/powerpc/syscall.S
index 48dade4642..23ce2f69c9 100644
--- a/sysdeps/unix/sysv/linux/powerpc/syscall.S
+++ b/sysdeps/unix/sysv/linux/powerpc/syscall.S
@@ -25,6 +25,13 @@  ENTRY (syscall)
 	mr   r6,r7
 	mr   r7,r8
 	mr   r8,r9
-	sc
-	PSEUDO_RET
+#if defined(__PPC64__) || defined(__powerpc64__)
+	CHECK_SCV_SUPPORT r9 0f
+	DO_CALL_SCV
+	RET_SCV
+	b 1f
+#endif
+0:	DO_CALL_SC
+	RET_SC
+1:	TAIL_CALL_SYSCALL_ERROR
 PSEUDO_END (syscall)
diff --git a/sysdeps/unix/sysv/linux/powerpc/sysdep.h b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
index b2bca598b9..19f4321c6b 100644
--- a/sysdeps/unix/sysv/linux/powerpc/sysdep.h
+++ b/sysdeps/unix/sysv/linux/powerpc/sysdep.h
@@ -64,39 +64,69 @@ 
 #define INTERNAL_VSYSCALL_CALL(funcptr, nr, args...)			\
   INTERNAL_VSYSCALL_CALL_TYPE(funcptr, long int, nr, args)
 
+#define DECLARE_REGS				\
+  register long int r0  __asm__ ("r0");		\
+  register long int r3  __asm__ ("r3");		\
+  register long int r4  __asm__ ("r4");		\
+  register long int r5  __asm__ ("r5");		\
+  register long int r6  __asm__ ("r6");		\
+  register long int r7  __asm__ ("r7");		\
+  register long int r8  __asm__ ("r8");
+
+#define SYSCALL_SCV(nr)				\
+  ({						\
+    __asm__ __volatile__			\
+      ("scv 0\n\t"				\
+       "0:"					\
+       : "=&r" (r0),				\
+	 "=&r" (r3), "=&r" (r4), "=&r" (r5),	\
+	 "=&r" (r6), "=&r" (r7), "=&r" (r8)	\
+       : ASM_INPUT_##nr			\
+       : "r9", "r10", "r11", "r12",		\
+	 "lr", "ctr", "memory");		\
+    r3;					\
+  })
 
-#undef INTERNAL_SYSCALL
-#define INTERNAL_SYSCALL_NCS(name, nr, args...) \
-  ({									\
-    register long int r0  __asm__ ("r0");				\
-    register long int r3  __asm__ ("r3");				\
-    register long int r4  __asm__ ("r4");				\
-    register long int r5  __asm__ ("r5");				\
-    register long int r6  __asm__ ("r6");				\
-    register long int r7  __asm__ ("r7");				\
-    register long int r8  __asm__ ("r8");				\
-    LOADARGS_##nr (name, ##args);					\
-    __asm__ __volatile__						\
-      ("sc\n\t"								\
-       "mfcr  %0\n\t"							\
-       "0:"								\
-       : "=&r" (r0),							\
-         "=&r" (r3), "=&r" (r4), "=&r" (r5),				\
-         "=&r" (r6), "=&r" (r7), "=&r" (r8)				\
-       : ASM_INPUT_##nr							\
-       : "r9", "r10", "r11", "r12",					\
-         "cr0", "ctr", "memory");					\
-    r0 & (1 << 28) ? -r3 : r3;						\
+#define SYSCALL_SC(nr)				\
+  ({						\
+    __asm__ __volatile__			\
+      ("sc\n\t"				\
+       "mfcr %0\n\t"				\
+       "0:"					\
+       : "=&r" (r0),				\
+	 "=&r" (r3), "=&r" (r4), "=&r" (r5),	\
+	 "=&r" (r6), "=&r" (r7), "=&r" (r8)	\
+       : ASM_INPUT_##nr			\
+       : "r9", "r10", "r11", "r12",		\
+	 "cr0", "ctr", "memory");		\
+    r0 & (1 << 28) ? -r3 : r3;			\
   })
-#define INTERNAL_SYSCALL(name, nr, args...)				\
-  INTERNAL_SYSCALL_NCS (__NR_##name, nr, args)
 
 #if defined(__PPC64__) || defined(__powerpc64__)
 # define SYSCALL_ARG_SIZE 8
+
+# define INTERNAL_SYSCALL_NCS(name, nr, args...)			\
+  ({									\
+    DECLARE_REGS;							\
+    LOADARGS_##nr (name, ##args);					\
+    __thread_register != 0 && THREAD_GET_HWCAP() & PPC_FEATURE2_SCV ?	\
+      SYSCALL_SCV(nr) : SYSCALL_SC(nr);					\
+  })
 #else
 # define SYSCALL_ARG_SIZE 4
+
+# define INTERNAL_SYSCALL_NCS(name, nr, args...)	\
+  ({							\
+    DECLARE_REGS;					\
+    LOADARGS_##nr (name, ##args);			\
+    SYSCALL_SC(nr);					\
+  })
 #endif
 
+#undef INTERNAL_SYSCALL
+#define INTERNAL_SYSCALL(name, nr, args...)				\
+  INTERNAL_SYSCALL_NCS (__NR_##name, nr, args)
+
 #define LOADARGS_0(name, dummy) \
 	r0 = name
 #define LOADARGS_1(name, __arg1) \