[v2,11/19] nptl: Use tidlock when accessing TID on pthread_setaffinity

Message ID 20210823195047.543237-12-adhemerval.zanella@linaro.org
State Superseded
Series Fix various NPTL synchronization issues

Checks

Context: dj/TryBot-apply_patch
Check: success
Description: Patch applied to master at the time it was sent

Commit Message

Adhemerval Zanella Netto Aug. 23, 2021, 7:50 p.m. UTC
  Checked on x86_64-linux-gnu.
---
 nptl/pthread_setaffinity.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)
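
For context, a minimal caller-side sketch (not from the patch) of the window the tidlock closes: the target thread may exit while pthread_setaffinity_np reads its TID, so without the lock the sched_setaffinity syscall could be issued against a stale or reused TID.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* The target thread terminates almost immediately, so the
   pthread_setaffinity_np call below races with its exit.  */
static void *
exit_quickly (void *arg)
{
  return NULL;
}

int
main (void)
{
  pthread_t th;
  if (pthread_create (&th, NULL, exit_quickly, NULL) != 0)
    return 1;

  cpu_set_t set;
  CPU_ZERO (&set);
  CPU_SET (0, &set);

  /* Races with the thread's exit; with the patch the TID access is
     serialized by the tidlock and an already-exited thread is reported
     instead of the syscall targeting a reused TID.  */
  int r = pthread_setaffinity_np (th, sizeof (set), &set);
  printf ("pthread_setaffinity_np: %d\n", r);

  pthread_join (th, NULL);
  return 0;
}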
  

Comments

Florian Weimer Aug. 26, 2021, 2:25 p.m. UTC | #1
* Adhemerval Zanella:

> Checked on x86_64-linux-gnu.
> ---
>  nptl/pthread_setaffinity.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/nptl/pthread_setaffinity.c b/nptl/pthread_setaffinity.c
> index 3bfdc63e19..beb61836a6 100644
> --- a/nptl/pthread_setaffinity.c
> +++ b/nptl/pthread_setaffinity.c
> @@ -27,15 +27,23 @@ int
>  __pthread_setaffinity_new (pthread_t th, size_t cpusetsize,
>  			   const cpu_set_t *cpuset)
>  {
> -  const struct pthread *pd = (const struct pthread *) th;
> -  int res;
> +  struct pthread *pd = (struct pthread *) th;
>  
> -  res = INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
> -			       cpuset);
> +  /* Block all signals, since the lock is recursive and used on pthread_cancel
> +     (which should be async-signal-safe).  */
> +  sigset_t oldmask;
> +  __libc_signal_block_all (&oldmask);
> +  lll_lock (pd->tidlock, LLL_PRIVATE);
>  
> -  return (INTERNAL_SYSCALL_ERROR_P (res)
> -	  ? INTERNAL_SYSCALL_ERRNO (res)
> -	  : 0);
> +  int res = pd->tid == 0
> +	    ? ESRCH
> +	    : INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
> +				     cpuset);
> +
> +  lll_unlock (pd->tidlock, LLL_PRIVATE);
> +  __libc_signal_restore_set (&oldmask);
> +
> +  return res;

Same issue regarding ESRCH and pthread_cancel.  Here we can just return
0 in case the thread is gone, I think.

Thanks,
Florian
  
Adhemerval Zanella Netto Aug. 26, 2021, 5:31 p.m. UTC | #2
On 26/08/2021 11:25, Florian Weimer wrote:
> * Adhemerval Zanella:
> 
>> Checked on x86_64-linux-gnu.
>> ---
>>  nptl/pthread_setaffinity.c | 22 +++++++++++++++-------
>>  1 file changed, 15 insertions(+), 7 deletions(-)
>>
>> diff --git a/nptl/pthread_setaffinity.c b/nptl/pthread_setaffinity.c
>> index 3bfdc63e19..beb61836a6 100644
>> --- a/nptl/pthread_setaffinity.c
>> +++ b/nptl/pthread_setaffinity.c
>> @@ -27,15 +27,23 @@ int
>>  __pthread_setaffinity_new (pthread_t th, size_t cpusetsize,
>>  			   const cpu_set_t *cpuset)
>>  {
>> -  const struct pthread *pd = (const struct pthread *) th;
>> -  int res;
>> +  struct pthread *pd = (struct pthread *) th;
>>  
>> -  res = INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
>> -			       cpuset);
>> +  /* Block all signals, since the lock is recursive and used on pthread_cancel
>> +     (which should be async-signal-safe).  */
>> +  sigset_t oldmask;
>> +  __libc_signal_block_all (&oldmask);
>> +  lll_lock (pd->tidlock, LLL_PRIVATE);
>>  
>> -  return (INTERNAL_SYSCALL_ERROR_P (res)
>> -	  ? INTERNAL_SYSCALL_ERRNO (res)
>> -	  : 0);
>> +  int res = pd->tid == 0
>> +	    ? ESRCH
>> +	    : INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
>> +				     cpuset);
>> +
>> +  lll_unlock (pd->tidlock, LLL_PRIVATE);
>> +  __libc_signal_restore_set (&oldmask);
>> +
>> +  return res;
> 
> Same issue regarding ESRCH and pthread_cancel. 

Ack.

> Here we can just return 0 in case the thread is gone, I think.

I still think ESRCH is the correct return code here.
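
For reference, the alternative suggested in the review would amount to changing the same hunk along these lines (a sketch only, reusing the names from the posted patch; not what the series does):

  int res = pd->tid == 0
            ? 0   /* Target thread already exited; nothing to do.  */
            : INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
                                     cpuset);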
  

Patch

diff --git a/nptl/pthread_setaffinity.c b/nptl/pthread_setaffinity.c
index 3bfdc63e19..beb61836a6 100644
--- a/nptl/pthread_setaffinity.c
+++ b/nptl/pthread_setaffinity.c
@@ -27,15 +27,23 @@ int
 __pthread_setaffinity_new (pthread_t th, size_t cpusetsize,
 			   const cpu_set_t *cpuset)
 {
-  const struct pthread *pd = (const struct pthread *) th;
-  int res;
+  struct pthread *pd = (struct pthread *) th;
 
-  res = INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
-			       cpuset);
+  /* Block all signals, since the lock is recursive and used on pthread_cancel
+     (which should be async-signal-safe).  */
+  sigset_t oldmask;
+  __libc_signal_block_all (&oldmask);
+  lll_lock (pd->tidlock, LLL_PRIVATE);
 
-  return (INTERNAL_SYSCALL_ERROR_P (res)
-	  ? INTERNAL_SYSCALL_ERRNO (res)
-	  : 0);
+  int res = pd->tid == 0
+	    ? ESRCH
+	    : INTERNAL_SYSCALL_CALL (sched_setaffinity, pd->tid, cpusetsize,
+				     cpuset);
+
+  lll_unlock (pd->tidlock, LLL_PRIVATE);
+  __libc_signal_restore_set (&oldmask);
+
+  return res;
 }
 versioned_symbol (libc, __pthread_setaffinity_new,
 		  pthread_setaffinity_np, GLIBC_2_34);
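
As a caller-side usage note (not part of the patch): pthread_setaffinity_np returns the error number directly instead of setting errno, so code exercising the behavior discussed above might look like the following hypothetical helper, where th may refer to a thread that has already exited.

#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: pin TH to CPU 0 and report the caller-visible
   result.  TH may name a thread that has already exited.  */
void
set_affinity_cpu0 (pthread_t th)
{
  cpu_set_t set;
  CPU_ZERO (&set);
  CPU_SET (0, &set);

  int err = pthread_setaffinity_np (th, sizeof (set), &set);
  if (err == 0)
    puts ("affinity set (or, with the suggested variant, thread already gone)");
  else if (err == ESRCH)
    puts ("thread already exited (patch as posted)");
  else
    printf ("pthread_setaffinity_np: %s\n", strerror (err));
}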