[RFC,v3,03/23] sysdeps/wait: Use waitid if available

Message ID 8736iuujdu.fsf@xmission.com
State Not applicable

Commit Message

Eric W. Biederman July 25, 2019, 5:14 p.m. UTC
  Arnd Bergmann <arnd@arndb.de> writes:

> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de> wrote:
>> > >
>> > > Sounds good to me, the debate over what rusage to use should not hold
>> > > up the review of the rest of that syscall.
>> >
>> > I'm unclear what the final decision is here. What is the solution are
>> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
>> >
>> > As well as that what is the solution to current implementations? If we
>> > add wait4() then there isn't an issue (and I can drop this patch) but
>> > if we add P_PROCESS_PGID then we will need a way to handle kernels
>> > with waitid() but no P_PROCESS_PGID. Although my new plan is to only
>> > use the waitid syscall if we don't have waitpid or wait4 so it seems
>> > like this will only affect RV32 for the time being.
>>
>> I would really like some indication which solution will be taken,
>> since it impacts choices that will need to be made in musl very soon.
>> My favorite outcome would be bringing back wait4 for rv32 (and
>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the short
>> term, just using wait4 would be the simplest and cleanest for us (same
>> as all other archs, no extra case to deal with), but in the long term
>> there may be value in having rusage that can represent more than 68
>> cpu-years spent by a process (seems plausible with large numbers of
>> cores).
>
> Based on the feedback from Linus and Eric, the most likely outcome
> at the moment seems to be an extension of waitid() to allow
> P_PGID with id=0 like BSD does, and not bring back wait4() or
> add P_PROCESS_PGID.
>
> So far, I don't think anyone has proposed an actual kernel patch.
> I was hoping that Eric would do it, but I could also send it if he's
> otherwise busy.

So here is what I am looking at.  It still needs to be tested
and the description needs to be improved so that it properly credits
everyone.  However I think I have the major strokes correct.

From: "Eric W. Biederman" <ebiederm@xmission.com>
Date: Tue, 23 Jul 2019 07:44:46 -0500
Subject: [PATCH] waitid: Add support for waiting for the current process group

It was recently discovered that the Linux version of waitid is not a
superset of the other wait functions because it does not include
support for waiting for the current process group.  This has two
downsides: an extra system call is needed to get the current process
group, and a signal that changes the current process group could
arrive between the system call that retrieved the process group and
the call to waitid.

Allow userspace to avoid both of those issues by defining
idtype == P_PGID and id == 0 to mean wait for the caller's process
group at the time of the call.

Arguments can be made for using a different choice of idtype and id
for this case but the BSDs already use this P_PGID and 0 to indicate
waiting for the current process's process group.  So be nice to user
space programmers and don't introduce an unnecessary incompatibility.

Some people have noted that the POSIX description is that waitpid
will wait for the current process group, and that in the presence of
pthreads that process group can change.  To get clarity on this issue
I looked at XNU, FreeBSD, and Luminos.  All of those flavors of unix
waited for the current process group at the time of the call and, as
written, could not adapt to the process group changing after the
call.

At one point Linux did adapt to the current process group changing but
that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has been
over 11 years since Linux had that behavior, no programs that fail
with the change in behavior have been reported, and I could not find
any other unix that does this.  So I think it is safe to clarify the
definition of the current process group to mean the process group at
the time of the wait call.

Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
---
 kernel/exit.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
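
For the libc side of the thread, here is a minimal sketch of what these
semantics would enable on a waitid-only target such as rv32.  This is
illustrative only: the helper name and the HAVE_WAITID_PGID_ZERO guard
are hypothetical, and the P_PGID/id == 0 form only works on kernels
carrying the patch above.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical helper: wait for any child in the caller's own process
   group, as waitpid(0, ...) does, using only waitid(). */
static int wait_own_pgrp(siginfo_t *info)
{
#ifdef HAVE_WAITID_PGID_ZERO
	/* With the patch: the kernel resolves the caller's process group
	   at the time of the call, so no extra syscall and no race. */
	return waitid(P_PGID, 0, info, WEXITED);
#else
	/* Without it: fetch the process group first.  A signal handler
	   that changes the process group between these two calls leaves
	   us waiting on the wrong group. */
	pid_t pgid = getpgid(0);
	return waitid(P_PGID, pgid, info, WEXITED);
#endif
}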
  

Comments

Christian Brauner July 25, 2019, 5:30 p.m. UTC | #1
On July 25, 2019 7:14:05 PM GMT+02:00, ebiederm@xmission.com wrote:
>Arnd Bergmann <arnd@arndb.de> writes:
>
>> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
>>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
>>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de>
>wrote:
>>> > >
>>> > > Sounds good to me, the debate over what rusage to use should not
>hold
>>> > > up the review of the rest of that syscall.
>>> >
>>> > I'm unclear what the final decision is here. What is the solution
>are
>>> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
>>> >
>>> > As well as that what is the solution to current implementations?
>If we
>>> > add wait4() then there isn't an issue (and I can drop this patch)
>but
>>> > if we add P_PROCESS_PGID then we will need a way to handle kernels
>>> > with waitid() but no P_PROCESS_PGID. Although my new plan is to
>only
>>> > use the waitid syscall if we don't have waitpid or wait4 so it
>seems
>>> > like this will only affect RV32 for the time being.
>>>
>>> I would really like some indication which solution will be taken,
>>> since it impacts choices that will need to be made in musl very
>soon.
>>> My favorite outcome would be bringing back wait4 for rv32 (and
>>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the
>short
>>> term, just using wait4 would be the simplest and cleanest for us
>(same
>>> as all other archs, no extra case to deal with), but in the long
>term
>>> there may be value in having rusage that can represent more than 68
>>> cpu-years spent by a process (seems plausible with large numbers of
>>> cores).
>>
>> Based on the feedback from Linus and Eric, the most likely outcome
>> at the moment seems to be an extension of waitid() to allow
>> P_PGID with id=0 like BSD does, and not bring back wait4() or
>> add P_PROCESS_PGID.
>>
>> So far, I don't think anyone has proposed an actual kernel patch.
>> I was hoping that Eric would do it, but I could also send it if he's
>> otherwise busy.
>
>So here is what I am looking at.  It still needs to be tested
>and the description needs to be improved so that it properly credits
>everyone.  However I think I have the major strokes correct.
>
>From: "Eric W. Biederman" <ebiederm@xmission.com>
>Date: Tue, 23 Jul 2019 07:44:46 -0500
>Subject: [PATCH] waitid: Add support for waiting for the current
>process group
>
>It was recently discovered that the linux version of waitid is not a
>superset of the other wait functions because it does not include
>support for waiting for the current process group.  This has two
>downsides.  An extra system call is needed to get the current process
>group, and a signal could come in between the system call that
>retrieved the process group and the call to waitid that changes the
>current process group.
>
>Allow userspace to avoid both of those issues by defining
>idtype == P_PGID and id == 0 to mean wait for the caller's process
>group at the time of the call.
>
>Arguments can be made for using a different choice of idtype and id
>for this case but the BSDs already use this P_PGID and 0 to indicate
>waiting for the current process's process group.  So be nice to user
>space programmers and don't introduce an unnecessary incompatibility.
>
>Some people have noted that the posix description is that
>waitpid will wait for the current process group, and that in
>the presence of pthreads that process group can change.  To get
>clarity on this issue I looked at XNU, FreeBSD, and Luminos.  All of
>those flavors of unix waited for the current process group at the
>time of call and as written could not adapt to the process group
>changing after the call.
>
>At one point Linux did adapt to the current process group changing but
>that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has been
>over 11 years since Linux has that behavior, no programs that fail
>with the change in behavior have been reported, and I could not
>find any other unix that does this.  So I think it is safe to clarify
>the definition of current process group, to current process group
>at the time of the wait function.
>
>Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
>---
> kernel/exit.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
>diff --git a/kernel/exit.c b/kernel/exit.c
>index a75b6a7f458a..3d86930f035e 100644
>--- a/kernel/exit.c
>+++ b/kernel/exit.c
>@@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t
>upid, struct waitid_info *infop,
> 		break;
> 	case P_PGID:
> 		type = PIDTYPE_PGID;
>-		if (upid <= 0)
>+		if (upid < 0)
> 			return -EINVAL;
>+		if (upid == 0)
>+			pid = get_pid(task_pgrp(current));
> 		break;
> 	default:
> 		return -EINVAL;
> 	}
> 
>-	if (type < PIDTYPE_MAX)
>+	if ((type < PIDTYPE_MAX) && !pid)
> 		pid = find_get_pid(upid);
> 
> 	wo.wo_type	= type;

Eric, mind if I send this out alongside the P_PIDFD patchset and put in a test for it?

Christian
  
Alistair Francis July 26, 2019, 11:35 p.m. UTC | #2
On Thu, Jul 25, 2019 at 10:14 AM Eric W. Biederman
<ebiederm@xmission.com> wrote:
>
> Arnd Bergmann <arnd@arndb.de> writes:
>
> > On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
> >> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
> >> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de> wrote:
> >> > >
> >> > > Sounds good to me, the debate over what rusage to use should not hold
> >> > > up the review of the rest of that syscall.
> >> >
> >> > I'm unclear what the final decision is here. What is the solution are
> >> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
> >> >
> >> > As well as that what is the solution to current implementations? If we
> >> > add wait4() then there isn't an issue (and I can drop this patch) but
> >> > if we add P_PROCESS_PGID then we will need a way to handle kernels
> >> > with waitid() but no P_PROCESS_PGID. Although my new plan is to only
> >> > use the waitid syscall if we don't have waitpid or wait4 so it seems
> >> > like this will only affect RV32 for the time being.
> >>
> >> I would really like some indication which solution will be taken,
> >> since it impacts choices that will need to be made in musl very soon.
> >> My favorite outcome would be bringing back wait4 for rv32 (and
> >> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the short
> >> term, just using wait4 would be the simplest and cleanest for us (same
> >> as all other archs, no extra case to deal with), but in the long term
> >> there may be value in having rusage that can represent more than 68
> >> cpu-years spent by a process (seems plausible with large numbers of
> >> cores).
> >
> > Based on the feedback from Linus and Eric, the most likely outcome
> > at the moment seems to be an extension of waitid() to allow
> > P_PGID with id=0 like BSD does, and not bring back wait4() or
> > add P_PROCESS_PGID.
> >
> > So far, I don't think anyone has proposed an actual kernel patch.
> > I was hoping that Eric would do it, but I could also send it if he's
> > otherwise busy.
>
> So here is what I am looking at.  It still needs to be tested
> and the description needs to be improved so that it properly credits
> everyone.  However I think I have the major strokes correct.
>
> From: "Eric W. Biederman" <ebiederm@xmission.com>
> Date: Tue, 23 Jul 2019 07:44:46 -0500
> Subject: [PATCH] waitid: Add support for waiting for the current process group
>
> It was recently discovered that the linux version of waitid is not a
> superset of the other wait functions because it does not include
> support for waiting for the current process group.  This has two
> downsides.  An extra system call is needed to get the current process
> group, and a signal could come in between the system call that
> retrieved the process group and the call to waitid that changes the
> current process group.
>
> Allow userspace to avoid both of those issues by defining
> idtype == P_PGID and id == 0 to mean wait for the caller's process
> group at the time of the call.
>
> Arguments can be made for using a different choice of idtype and id
> for this case but the BSDs already use this P_PGID and 0 to indicate
> waiting for the current process's process group.  So be nice to user
> space programmers and don't introduce an unnecessary incompatibility.
>
> Some people have noted that the posix description is that
> waitpid will wait for the current process group, and that in
> the presence of pthreads that process group can change.  To get
> clarity on this issue I looked at XNU, FreeBSD, and Luminos.  All of
> those flavors of unix waited for the current process group at the
> time of call and as written could not adapt to the process group
> changing after the call.
>
> At one point Linux did adapt to the current process group changing but
> that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has been
> over 11 years since Linux has that behavior, no programs that fail
> with the change in behavior have been reported, and I could not
> find any other unix that does this.  So I think it is safe to clarify
> the definition of current process group, to current process group
> at the time of the wait function.
>
> Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

Will this land in 5.3? Also, will it be backported to stable kernels?

Do you mind keeping me in CC when you send the patch?

Alistair

> ---
>  kernel/exit.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/exit.c b/kernel/exit.c
> index a75b6a7f458a..3d86930f035e 100644
> --- a/kernel/exit.c
> +++ b/kernel/exit.c
> @@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t upid, struct waitid_info *infop,
>                 break;
>         case P_PGID:
>                 type = PIDTYPE_PGID;
> -               if (upid <= 0)
> +               if (upid < 0)
>                         return -EINVAL;
> +               if (upid == 0)
> +                       pid = get_pid(task_pgrp(current));
>                 break;
>         default:
>                 return -EINVAL;
>         }
>
> -       if (type < PIDTYPE_MAX)
> +       if ((type < PIDTYPE_MAX) && !pid)
>                 pid = find_get_pid(upid);
>
>         wo.wo_type      = type;
> --
> 2.21.0.dirty
>
  
Alistair Francis Aug. 13, 2019, 10:22 p.m. UTC | #3
On Thu, Jul 25, 2019 at 10:30 AM Christian Brauner <christian@brauner.io> wrote:
>
> On July 25, 2019 7:14:05 PM GMT+02:00, ebiederm@xmission.com wrote:
> >Arnd Bergmann <arnd@arndb.de> writes:
> >
> >> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
> >>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
> >>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de>
> >wrote:
> >>> > >
> >>> > > Sounds good to me, the debate over what rusage to use should not
> >hold
> >>> > > up the review of the rest of that syscall.
> >>> >
> >>> > I'm unclear what the final decision is here. What is the solution
> >are
> >>> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
> >>> >
> >>> > As well as that what is the solution to current implementations?
> >If we
> >>> > add wait4() then there isn't an issue (and I can drop this patch)
> >but
> >>> > if we add P_PROCESS_PGID then we will need a way to handle kernels
> >>> > with waitid() but no P_PROCESS_PGID. Although my new plan is to
> >only
> >>> > use the waitid syscall if we don't have waitpid or wait4 so it
> >seems
> >>> > like this will only affect RV32 for the time being.
> >>>
> >>> I would really like some indication which solution will be taken,
> >>> since it impacts choices that will need to be made in musl very
> >soon.
> >>> My favorite outcome would be bringing back wait4 for rv32 (and
> >>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the
> >short
> >>> term, just using wait4 would be the simplest and cleanest for us
> >(same
> >>> as all other archs, no extra case to deal with), but in the long
> >term
> >>> there may be value in having rusage that can represent more than 68
> >>> cpu-years spent by a process (seems plausible with large numbers of
> >>> cores).
> >>
> >> Based on the feedback from Linus and Eric, the most likely outcome
> >> at the moment seems to be an extension of waitid() to allow
> >> P_PGID with id=0 like BSD does, and not bring back wait4() or
> >> add P_PROCESS_PGID.
> >>
> >> So far, I don't think anyone has proposed an actual kernel patch.
> >> I was hoping that Eric would do it, but I could also send it if he's
> >> otherwise busy.
> >
> >So here is what I am looking at.  It still needs to be tested
> >and the description needs to be improved so that it properly credits
> >everyone.  However I think I have the major strokes correct.
> >
> >From: "Eric W. Biederman" <ebiederm@xmission.com>
> >Date: Tue, 23 Jul 2019 07:44:46 -0500
> >Subject: [PATCH] waitid: Add support for waiting for the current
> >process group
> >
> >It was recently discovered that the linux version of waitid is not a
> >superset of the other wait functions because it does not include
> >support for waiting for the current process group.  This has two
> >downsides.  An extra system call is needed to get the current process
> >group, and a signal could come in between the system call that
> >retrieved the process group and the call to waitid that changes the
> >current process group.
> >
> >Allow userspace to avoid both of those issues by defining
> >idtype == P_PGID and id == 0 to mean wait for the caller's process
> >group at the time of the call.
> >
> >Arguments can be made for using a different choice of idtype and id
> >for this case but the BSDs already use this P_PGID and 0 to indicate
> >waiting for the current process's process group.  So be nice to user
> >space programmers and don't introduce an unnecessary incompatibility.
> >
> >Some people have noted that the posix description is that
> >waitpid will wait for the current process group, and that in
> >the presence of pthreads that process group can change.  To get
> >clarity on this issue I looked at XNU, FreeBSD, and Luminos.  All of
> >those flavors of unix waited for the current process group at the
> >time of call and as written could not adapt to the process group
> >changing after the call.
> >
> >At one point Linux did adapt to the current process group changing but
> >that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has been
> >over 11 years since Linux has that behavior, no programs that fail
> >with the change in behavior have been reported, and I could not
> >find any other unix that does this.  So I think it is safe to clarify
> >the definition of current process group, to current process group
> >at the time of the wait function.
> >
> >Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
> >---
> > kernel/exit.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> >diff --git a/kernel/exit.c b/kernel/exit.c
> >index a75b6a7f458a..3d86930f035e 100644
> >--- a/kernel/exit.c
> >+++ b/kernel/exit.c
> >@@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t
> >upid, struct waitid_info *infop,
> >               break;
> >       case P_PGID:
> >               type = PIDTYPE_PGID;
> >-              if (upid <= 0)
> >+              if (upid < 0)
> >                       return -EINVAL;
> >+              if (upid == 0)
> >+                      pid = get_pid(task_pgrp(current));
> >               break;
> >       default:
> >               return -EINVAL;
> >       }
> >
> >-      if (type < PIDTYPE_MAX)
> >+      if ((type < PIDTYPE_MAX) && !pid)
> >               pid = find_get_pid(upid);
> >
> >       wo.wo_type      = type;
>
> Eric, mind if I send this out alongside the P_PIDFD patchset and put in a test for it?

Was this ever sent?

Alistair

>
> Christian
  
Rich Felker Aug. 13, 2019, 11:11 p.m. UTC | #4
On Tue, Aug 13, 2019 at 03:22:02PM -0700, Alistair Francis wrote:
> On Thu, Jul 25, 2019 at 10:30 AM Christian Brauner <christian@brauner.io> wrote:
> >
> > On July 25, 2019 7:14:05 PM GMT+02:00, ebiederm@xmission.com wrote:
> > >Arnd Bergmann <arnd@arndb.de> writes:
> > >
> > >> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org> wrote:
> > >>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis wrote:
> > >>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de>
> > >wrote:
> > >>> > >
> > >>> > > Sounds good to me, the debate over what rusage to use should not
> > >hold
> > >>> > > up the review of the rest of that syscall.
> > >>> >
> > >>> > I'm unclear what the final decision is here. What is the solution
> > >are
> > >>> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
> > >>> >
> > >>> > As well as that what is the solution to current implementations?
> > >If we
> > >>> > add wait4() then there isn't an issue (and I can drop this patch)
> > >but
> > >>> > if we add P_PROCESS_PGID then we will need a way to handle kernels
> > >>> > with waitid() but no P_PROCESS_PGID. Although my new plan is to
> > >only
> > >>> > use the waitid syscall if we don't have waitpid or wait4 so it
> > >seems
> > >>> > like this will only affect RV32 for the time being.
> > >>>
> > >>> I would really like some indication which solution will be taken,
> > >>> since it impacts choices that will need to be made in musl very
> > >soon.
> > >>> My favorite outcome would be bringing back wait4 for rv32 (and
> > >>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the
> > >short
> > >>> term, just using wait4 would be the simplest and cleanest for us
> > >(same
> > >>> as all other archs, no extra case to deal with), but in the long
> > >term
> > >>> there may be value in having rusage that can represent more than 68
> > >>> cpu-years spent by a process (seems plausible with large numbers of
> > >>> cores).
> > >>
> > >> Based on the feedback from Linus and Eric, the most likely outcome
> > >> at the moment seems to be an extension of waitid() to allow
> > >> P_PGID with id=0 like BSD does, and not bring back wait4() or
> > >> add P_PROCESS_PGID.
> > >>
> > >> So far, I don't think anyone has proposed an actual kernel patch.
> > >> I was hoping that Eric would do it, but I could also send it if he's
> > >> otherwise busy.
> > >
> > >So here is what I am looking at.  It still needs to be tested
> > >and the description needs to be improved so that it properly credits
> > >everyone.  However I think I have the major strokes correct.
> > >
> > >From: "Eric W. Biederman" <ebiederm@xmission.com>
> > >Date: Tue, 23 Jul 2019 07:44:46 -0500
> > >Subject: [PATCH] waitid: Add support for waiting for the current
> > >process group
> > >
> > >It was recently discovered that the linux version of waitid is not a
> > >superset of the other wait functions because it does not include
> > >support for waiting for the current process group.  This has two
> > >downsides.  An extra system call is needed to get the current process
> > >group, and a signal could come in between the system call that
> > >retrieved the process group and the call to waitid that changes the
> > >current process group.
> > >
> > >Allow userspace to avoid both of those issues by defining
> > >idtype == P_PGID and id == 0 to mean wait for the caller's process
> > >group at the time of the call.
> > >
> > >Arguments can be made for using a different choice of idtype and id
> > >for this case but the BSDs already use this P_PGID and 0 to indicate
> > >waiting for the current process's process group.  So be nice to user
> > >space programmers and don't introduce an unnecessary incompatibility.
> > >
> > >Some people have noted that the posix description is that
> > >waitpid will wait for the current process group, and that in
> > >the presence of pthreads that process group can change.  To get
> > >clarity on this issue I looked at XNU, FreeBSD, and Luminos.  All of
> > >those flavors of unix waited for the current process group at the
> > >time of call and as written could not adapt to the process group
> > >changing after the call.
> > >
> > >At one point Linux did adapt to the current process group changing but
> > >that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has been
> > >over 11 years since Linux has that behavior, no programs that fail
> > >with the change in behavior have been reported, and I could not
> > >find any other unix that does this.  So I think it is safe to clarify
> > >the definition of current process group, to current process group
> > >at the time of the wait function.
> > >
> > >Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
> > >---
> > > kernel/exit.c | 6 ++++--
> > > 1 file changed, 4 insertions(+), 2 deletions(-)
> > >
> > >diff --git a/kernel/exit.c b/kernel/exit.c
> > >index a75b6a7f458a..3d86930f035e 100644
> > >--- a/kernel/exit.c
> > >+++ b/kernel/exit.c
> > >@@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t
> > >upid, struct waitid_info *infop,
> > >               break;
> > >       case P_PGID:
> > >               type = PIDTYPE_PGID;
> > >-              if (upid <= 0)
> > >+              if (upid < 0)
> > >                       return -EINVAL;
> > >+              if (upid == 0)
> > >+                      pid = get_pid(task_pgrp(current));
> > >               break;
> > >       default:
> > >               return -EINVAL;
> > >       }
> > >
> > >-      if (type < PIDTYPE_MAX)
> > >+      if ((type < PIDTYPE_MAX) && !pid)
> > >               pid = find_get_pid(upid);
> > >
> > >       wo.wo_type      = type;
> >
> > Eric, mind if I send this out alongside the P_PIDFD patchset and put in a test for it?
> 
> Was this ever sent?

It's not upstream. Would be really nice to get this fixed in 5.3 so
that RV32 will not be missing functionality...

Rich
  
Christian Brauner Aug. 14, 2019, 5:07 a.m. UTC | #5
On August 14, 2019 1:11:53 AM GMT+02:00, Rich Felker <dalias@libc.org> wrote:
>On Tue, Aug 13, 2019 at 03:22:02PM -0700, Alistair Francis wrote:
>> On Thu, Jul 25, 2019 at 10:30 AM Christian Brauner
><christian@brauner.io> wrote:
>> >
>> > On July 25, 2019 7:14:05 PM GMT+02:00, ebiederm@xmission.com wrote:
>> > >Arnd Bergmann <arnd@arndb.de> writes:
>> > >
>> > >> On Thu, Jul 25, 2019 at 6:40 AM Rich Felker <dalias@libc.org>
>wrote:
>> > >>> On Wed, Jul 24, 2019 at 05:04:53PM -0700, Alistair Francis
>wrote:
>> > >>> > On Tue, Jul 23, 2019 at 1:45 AM Arnd Bergmann <arnd@arndb.de>
>> > >wrote:
>> > >>> > >
>> > >>> > > Sounds good to me, the debate over what rusage to use
>should not
>> > >hold
>> > >>> > > up the review of the rest of that syscall.
>> > >>> >
>> > >>> > I'm unclear what the final decision is here. What is the
>solution
>> > >are
>> > >>> > we going to have wait4() or add P_PROCESS_PGID to waitid()?
>> > >>> >
>> > >>> > As well as that what is the solution to current
>implementations?
>> > >If we
>> > >>> > add wait4() then there isn't an issue (and I can drop this
>patch)
>> > >but
>> > >>> > if we add P_PROCESS_PGID then we will need a way to handle
>kernels
>> > >>> > with waitid() but no P_PROCESS_PGID. Although my new plan is
>to
>> > >only
>> > >>> > use the waitid syscall if we don't have waitpid or wait4 so
>it
>> > >seems
>> > >>> > like this will only affect RV32 for the time being.
>> > >>>
>> > >>> I would really like some indication which solution will be
>taken,
>> > >>> since it impacts choices that will need to be made in musl very
>> > >soon.
>> > >>> My favorite outcome would be bringing back wait4 for rv32 (and
>> > >>> no-time32 archs in general) *and* adding P_PROCESS_PGID. In the
>> > >short
>> > >>> term, just using wait4 would be the simplest and cleanest for
>us
>> > >(same
>> > >>> as all other archs, no extra case to deal with), but in the
>long
>> > >term
>> > >>> there may be value in having rusage that can represent more
>than 68
>> > >>> cpu-years spent by a process (seems plausible with large
>numbers of
>> > >>> cores).
>> > >>
>> > >> Based on the feedback from Linus and Eric, the most likely
>outcome
>> > >> at the moment seems to be an extension of waitid() to allow
>> > >> P_PGID with id=0 like BSD does, and not bring back wait4() or
>> > >> add P_PROCESS_PGID.
>> > >>
>> > >> So far, I don't think anyone has proposed an actual kernel
>patch.
>> > >> I was hoping that Eric would do it, but I could also send it if
>he's
>> > >> otherwise busy.
>> > >
>> > >So here is what I am looking at.  It still needs to be tested
>> > >and the description needs to be improved so that it properly
>credits
>> > >everyone.  However I think I have the major strokes correct.
>> > >
>> > >From: "Eric W. Biederman" <ebiederm@xmission.com>
>> > >Date: Tue, 23 Jul 2019 07:44:46 -0500
>> > >Subject: [PATCH] waitid: Add support for waiting for the current
>> > >process group
>> > >
>> > >It was recently discovered that the linux version of waitid is not
>a
>> > >superset of the other wait functions because it does not include
>> > >support for waiting for the current process group.  This has two
>> > >downsides.  An extra system call is needed to get the current
>process
>> > >group, and a signal could come in between the system call that
>> > >retrieved the process group and the call to waitid that changes
>the
>> > >current process group.
>> > >
>> > >Allow userspace to avoid both of those issues by defining
>> > >idtype == P_PGID and id == 0 to mean wait for the caller's process
>> > >group at the time of the call.
>> > >
>> > >Arguments can be made for using a different choice of idtype and
>id
>> > >for this case but the BSDs already use this P_PGID and 0 to
>indicate
>> > >waiting for the current process's process group.  So be nice to
>user
>> > >space programmers and don't introduce an unnecessary
>incompatibility.
>> > >
>> > >Some people have noted that the posix description is that
>> > >waitpid will wait for the current process group, and that in
>> > >the presence of pthreads that process group can change.  To get
>> > >clarity on this issue I looked at XNU, FreeBSD, and Luminos.  All
>of
>> > >those flavors of unix waited for the current process group at the
>> > >time of call and as written could not adapt to the process group
>> > >changing after the call.
>> > >
>> > >At one point Linux did adapt to the current process group changing
>but
>> > >that stopped in 161550d74c07 ("pid: sys_wait... fixes").  It has
>been
>> > >over 11 years since Linux has that behavior, no programs that fail
>> > >with the change in behavior have been reported, and I could not
>> > >find any other unix that does this.  So I think it is safe to
>clarify
>> > >the definition of current process group, to current process group
>> > >at the time of the wait function.
>> > >
>> > >Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
>> > >---
>> > > kernel/exit.c | 6 ++++--
>> > > 1 file changed, 4 insertions(+), 2 deletions(-)
>> > >
>> > >diff --git a/kernel/exit.c b/kernel/exit.c
>> > >index a75b6a7f458a..3d86930f035e 100644
>> > >--- a/kernel/exit.c
>> > >+++ b/kernel/exit.c
>> > >@@ -1577,14 +1577,16 @@ static long kernel_waitid(int which, pid_t
>> > >upid, struct waitid_info *infop,
>> > >               break;
>> > >       case P_PGID:
>> > >               type = PIDTYPE_PGID;
>> > >-              if (upid <= 0)
>> > >+              if (upid < 0)
>> > >                       return -EINVAL;
>> > >+              if (upid == 0)
>> > >+                      pid = get_pid(task_pgrp(current));
>> > >               break;
>> > >       default:
>> > >               return -EINVAL;
>> > >       }
>> > >
>> > >-      if (type < PIDTYPE_MAX)
>> > >+      if ((type < PIDTYPE_MAX) && !pid)
>> > >               pid = find_get_pid(upid);
>> > >
>> > >       wo.wo_type      = type;
>> >
>> > Eric, mind if I send this out alongside the P_PIDFD patchset and
>put in a test for it?
>> 
>> Was this ever sent?
>
>It's not upstream. Would be really nice to get this fixed in 5.3 so
>that RV32 will not be missing functionality...
>
>Rich

I'll pick it up as is and send out for review later if I hear no objections.

Christian
  
Christian Brauner Aug. 14, 2019, 11:38 a.m. UTC | #6
From: Christian Brauner <christian.brauner@ubuntu.com>

Hey everyone,

This patch adds support for waiting on the current process group by
specifying waitid(P_PGID, 0, ...) as discussed in [1]. The details why
we need to do this are in the commit message of [PATCH 1/1] so I won't
repeat them here.

I've picked this up since the thread has gone stale and parts of
userspace are actually blocked by this.

Note that the patch has been marked with v1 because I've changed the
patch to be more closely aligned with the P_PIDFD changes to waitid() I
have sitting in my for-next branch (cf. [2]).
This makes the merge conflict a little simpler and picks up on the
coding style discussions that guided the P_PIDFD patchset.

There was some desire to get this feature in with 5.3 (cf. [3]).
But given that this is a new feature for waitid() and for the sake of
avoiding any merge conflicts I would prefer to land this in the 5.4
merge window together with the P_PIDFD changes.

Thanks!
Christian

/* References */
[1]: https://www.sourceware.org/ml/libc-alpha/2019-07/msg00587.html
[2]: https://lore.kernel.org/lkml/20190727222229.6516-1-christian@brauner.io/
[3]: https://www.sourceware.org/ml/libc-alpha/2019-08/msg00304.html

Eric W. Biederman (1):
  waitid: Add support for waiting for the current process group

 kernel/exit.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
  
Christian Brauner Aug. 14, 2019, 1:07 p.m. UTC | #7
Hey everyone,

This patch adds support for waiting on the current process group by
specifying waitid(P_PGID, 0, ...) as discussed in [1]. The details why
we need to do this are in the commit message of [PATCH 1/1] so I won't
repeat them here.

I've picked this up since the thread has gone stale and parts of
userspace are actually blocked by this.

Note that the patch has been changed to be more closely aligned with the
P_PIDFD changes to waitid() I have sitting in my for-next branch (cf. [2]).
This makes the merge conflict a little simpler and picks up on the
coding style discussions that guided the P_PIDFD patchset.

There was some desire to get this feature in with 5.3 (cf. [3]).
But given that this is a new feature for waitid() and for the sake of
avoiding any merge conflicts I would prefer to land this in the 5.4
merge window together with the P_PIDFD changes.

Thanks!
Christian

/* References */
[1]: https://www.sourceware.org/ml/libc-alpha/2019-07/msg00587.html
[2]: https://lore.kernel.org/lkml/20190727222229.6516-1-christian@brauner.io/
[3]: https://www.sourceware.org/ml/libc-alpha/2019-08/msg00304.html

Eric W. Biederman (1):
  waitid: Add support for waiting for the current process group

 kernel/exit.c | 23 +++++++++++++++++++----
 1 file changed, 19 insertions(+), 4 deletions(-)
  
Christian Brauner Aug. 14, 2019, 3:43 p.m. UTC | #8
Hey everyone,

This patch adds support for waiting on the current process group by
specifying waitid(P_PGID, 0, ...) as discussed in [1]. The details why
we need to do this are in the commit message of [PATCH 1/1] so I won't
repeat them here.

I've picked this up since the thread has gone stale and parts of
userspace are actually blocked by this.

Note that the patch has been changed to be more closely aligned with the
P_PIDFD changes to waitid() I have sitting in my for-next branch (cf. [2]).
This makes the merge conflict a little simpler and picks up on the
coding style discussions that guided the P_PIDFD patchset.

There was some desire to get this feature in with 5.3 (cf. [3]).
But given that this is a new feature for waitid() and for the sake of
avoiding any merge conflicts I would prefer to land this in the 5.4
merge window together with the P_PIDFD changes.

Thanks!
Christian

/* v0 */
Link: https://www.sourceware.org/ml/libc-alpha/2019-07/msg00587.html

/* v1 */
Link: https://lore.kernel.org/lkml/20190814113822.9505-1-christian.brauner@ubuntu.com/

/* v2 */
Link: https://lore.kernel.org/lkml/20190814130732.23572-1-christian.brauner@ubuntu.com/

/* References */
[1]: https://www.sourceware.org/ml/libc-alpha/2019-07/msg00587.html
[2]: https://lore.kernel.org/lkml/20190727222229.6516-1-christian@brauner.io/
[3]: https://www.sourceware.org/ml/libc-alpha/2019-08/msg00304.html

Eric W. Biederman (1):
  waitid: Add support for waiting for the current process group

 kernel/exit.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)
  
Rich Felker Aug. 14, 2019, 3:58 p.m. UTC | #9
On Wed, Aug 14, 2019 at 05:43:59PM +0200, Christian Brauner wrote:
> Hey everyone,
> 
> This patch adds support for waiting on the current process group by
> specifying waitid(P_PGID, 0, ...) as discussed in [1]. The details why
> we need to do this are in the commit message of [PATCH 1/1] so I won't
> repeat them here.
> 
> I've picked this up since the thread has gone stale and parts of
> userspace are actually blocked by this.
> 
> Note that the patch has been changed to be more closely aligned with the
> P_PIDFD changes to waitid() I have sitting in my for-next branch (cf. [2]).
> This makes the merge conflict a little simpler and picks up on the
> coding style discussions that guided the P_PIDFD patchset.
> 
> There was some desire to get this feature in with 5.3 (cf. [3]).
> But given that this is a new feature for waitid() and for the sake of
> avoiding any merge conflicts I would prefer to land this in the 5.4
> merge window together with the P_PIDFD changes.

That makes 5.4 (or later, depending on other stuff) the hard minimum
for RV32 ABI. Is that acceptable? I was under the impression (perhaps
mistaken) that 5.3 was going to be next LTS series which is why I'd
like to have the necessary syscalls for a complete working RV32
userspace in it. If I'm wrong about that please ignore me. :-)

Rich
  
Christian Brauner Aug. 14, 2019, 4:13 p.m. UTC | #10
On Wed, Aug 14, 2019 at 11:58:22AM -0400, Rich Felker wrote:
> On Wed, Aug 14, 2019 at 05:43:59PM +0200, Christian Brauner wrote:
> > Hey everyone,
> > 
> > This patch adds support for waiting on the current process group by
> > specifying waitid(P_PGID, 0, ...) as discussed in [1]. The details why
> > we need to do this are in the commit message of [PATCH 1/1] so I won't
> > repeat them here.
> > 
> > I've picked this up since the thread has gone stale and parts of
> > userspace are actually blocked by this.
> > 
> > Note that the patch has been changed to be more closely aligned with the
> > P_PIDFD changes to waitid() I have sitting in my for-next branch (cf. [2]).
> > This makes the merge conflict a little simpler and picks up on the
> > coding style discussions that guided the P_PIDFD patchset.
> > 
> > There was some desire to get this feature in with 5.3 (cf. [3]).
> > But given that this is a new feature for waitid() and for the sake of
> > avoiding any merge conflicts I would prefer to land this in the 5.4
> > merge window together with the P_PIDFD changes.
> 
> That makes 5.4 (or later, depending on other stuff) the hard minimum
> for RV32 ABI. Is that acceptable? I was under the impression (perhaps
> mistaken) that 5.3 was going to be next LTS series which is why I'd
> like to have the necessary syscalls for a complete working RV32
> userspace in it. If I'm wrong about that please ignore me. :-)

5.3 is not going to be an LTS and we don't do new features after the
merge window is closed anyway. :)

Christian
  

Patch

diff --git a/kernel/exit.c b/kernel/exit.c
index a75b6a7f458a..3d86930f035e 100644
--- a/kernel/exit.c
+++ b/kernel/exit.c
@@ -1577,14 +1577,16 @@  static long kernel_waitid(int which, pid_t upid, struct waitid_info *infop,
 		break;
 	case P_PGID:
 		type = PIDTYPE_PGID;
-		if (upid <= 0)
+		if (upid < 0)
 			return -EINVAL;
+		if (upid == 0)
+			pid = get_pid(task_pgrp(current));
 		break;
 	default:
 		return -EINVAL;
 	}
 
-	if (type < PIDTYPE_MAX)
+	if ((type < PIDTYPE_MAX) && !pid)
 		pid = find_get_pid(upid);
 
 	wo.wo_type	= type;
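
For completeness, a rough sketch of the kind of test mentioned earlier
in the thread: fork a child in the caller's process group and reap it
with the new waitid(P_PGID, 0, ...) form.  It assumes a kernel carrying
this patch; on older kernels the waitid() call below fails with EINVAL.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	siginfo_t info;
	pid_t child = fork();

	if (child < 0) {
		perror("fork");
		return 1;
	}
	if (child == 0)
		_exit(42);	/* child stays in the parent's process group */

	/* id == 0: wait for the caller's process group at the time of the call */
	if (waitid(P_PGID, 0, &info, WEXITED) != 0) {
		perror("waitid(P_PGID, 0, ...)");
		return 1;
	}
	printf("reaped pid %d, exit status %d\n", (int)info.si_pid, info.si_status);
	return (info.si_pid == child && info.si_status == 42) ? 0 : 1;
}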