[RFC,v3,3/3] sys/types.h: Make snseconds_t user visible

Message ID 20211208144757.37641-3-alx.manpages@gmail.com
State Not applicable
Series sys/types.h: Define new types: [s]nseconds_t

Commit Message

Alejandro Colomar Dec. 8, 2021, 2:48 p.m. UTC
  Use a type that can be relied upon by users,
for example for creating pointers.

It is backwards compatible, as it is defined to be long whenever possible,
and it makes the underlying type opaque,
since users never needed to know what it is.

First of all, this simplifies the implementation,
allowing a different underlying type in the kernel and in user space.

The user only needs to know that it is a signed type
that can hold [0, 999'999'999].
To print it,
casting to long or to intmax_t (or even to int when it's 32 bits wide) should be safe.
Using long was too specific a contract with programmers.

Using snseconds_t in user code improves readability,
since long is both meaningless and unnecessarily explicit.

Link: linux-man <https://lore.kernel.org/linux-man/ec1dcc655184f6cdaae40ff8b7970b750434e4ef.1638123425.git.nabijaczleweli@nabijaczleweli.xyz/T/>
Link: glibc <https://sourceware.org/pipermail/libc-alpha/2021-December/133702.html>
Cc: наб <nabijaczleweli@nabijaczleweli.xyz>
Cc: Jakub Wilk <jwilk@jwilk.net>
Cc: Zack Weinberg <zackw@panix.com>
Cc: Stefan Puiu <stefan.puiu@gmail.com>
Cc: Michael Kerrisk (man-pages) <mtk.manpages@gmail.com>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Joseph Myers <joseph@codesourcery.com>
Cc: Rich Felker <dalias@libc.org>
Cc: Andreas Schwab <schwab@linux-m68k.org>
Cc: Paul Eggert <eggert@cs.ucla.edu>
Signed-off-by: Alejandro Colomar <alx.manpages@gmail.com>
---
 posix/sys/types.h | 5 +++++
 1 file changed, 5 insertions(+)
  

Comments

Zack Weinberg Dec. 8, 2021, 2:55 p.m. UTC | #1
On Wed, Dec 8, 2021, at 9:48 AM, Alejandro Colomar wrote:
> 
> +#ifndef __snseconds_t_defined
> +typedef __snseconds_t snseconds_t;
> +# define __snseconds_t_defined
> +#endif

This is the old __need/__defined convention for not repeating typedefs.  Please use a <bits/types/snseconds_t.h> header instead.  (I really gotta find the time to dust off my patch series that finishes the conversion.)

zw
  
Alejandro Colomar Dec. 8, 2021, 3:15 p.m. UTC | #2
On 12/8/21 15:55, Zack Weinberg wrote:
>> +#ifndef __snseconds_t_defined
>> +typedef __snseconds_t snseconds_t;
>> +# define __snseconds_t_defined
>> +#endif
> 
> This is the old __need/__defined convention for not repeating typedefs.  Please use a <bits/types/snseconds_t.h> header instead.  (I really gotta find the time to dust off my patch series that finishes the conversion.)
> 

ack
  
Paul Eggert Dec. 8, 2021, 6:12 p.m. UTC | #3
On 12/8/21 06:48, Alejandro Colomar wrote:
> Use a type that can be relied upon by users,
> for example for creating pointers.

I'm still not seeing the need for a user-visible type "snseconds_t".

In response to the earlier proposal "[RFC v2 1/2] sys/types.h: Define 
new type: snseconds_t", Joseph suggested omitting snseconds_t[1], I 
supported this suggestion[2], and I don't recall seeing any statement 
explaining why this type needs to be user-visible.

This is separate from the issue of whether we need a __snseconds_t type at all 
(which is the disagreement that Rich Felker is expressing). Although the 
type __snseconds_t may well be useful for glibc's own purposes, my 
experience is that user code doesn't need a snseconds_t type without the 
leading underscores. Arguably a user-visible snseconds_t type would 
cause more confusion than it'd cure, and so would be a net negative.

[1] https://sourceware.org/pipermail/libc-alpha/2021-December/133782.html
[2] https://sourceware.org/pipermail/libc-alpha/2021-December/133784.html
  
Rich Felker Dec. 8, 2021, 6:25 p.m. UTC | #4
On Wed, Dec 08, 2021 at 10:12:44AM -0800, Paul Eggert wrote:
> On 12/8/21 06:48, Alejandro Colomar wrote:
> >Use a type that can be relied upon by users,
> >for example for creating pointers.
> 
> I'm still not seeing the need for a user-visible type "snseconds_t".
> 
> In response to the earlier proposal "[RFC v2 1/2] sys/types.h:
> Define new type: snseconds_t", Joseph suggested omitting
> snseconds_t[1], I supported this suggestion[2], and I don't recall
> seeing any statement explaining why this type needs to be
> user-visible.
> 
> This is a separate issue of whether we need a __snseconds_t type at
> all (which is the disagreement that Rich Felker is expressing).
> Although the type __snseconds_t may well be useful for glibc's own
> purposes, my experience is that user code doesn't need a snseconds_t
> type without the leading underscores. Arguably a user-visible
> snseconds_t type would cause more confusion than it'd cure, and so
> would be a net negative.

In discussing this with others in musl community, one thing that came
up is that it's extremely unlikely WG14 will be happy with this
proposed change, even if the Austin Group were, and since C11 the
definition of timespec is in the domain of WG14. I think that makes
the change a non-starter, or at least a source of a huge gratuitous
conflict that's not guaranteed to turn out in a way anybody likes.

I have no strong position on whether __snseconds_t (as a
glibc-internal thing) would be helpful to improve the x32 mess, but
I'd really like to see x32 fixed to conform to the standard.

For what it's worth: when musl first added x32, we had a lot of nasty
x32-local hacks to work around the fact that the kernel was not
prepared to see junk in the padding bits of tv_nsec. After doing all
the time64 work for 32-bit archs, though, it ended up that there were
clean places to do all the translation needed. It might turn out the
same for glibc. Also, provided you have a post-time64 kernel, my
understanding is that x32 already ignores the padding bits, so when
glibc gets to the point where the minimum supported kernel version
passes that, no action at all would be needed to fix x32 except
changing the public type.

Rich
  
Zack Weinberg Dec. 8, 2021, 8:10 p.m. UTC | #5
On Wed, Dec 8, 2021, at 1:25 PM, Rich Felker wrote:
> In discussing this with others in musl community, one thing that came
> up is that it's extremely unlikely WG14 will be happy with this
> proposed change, even if the Austin Group were,

What reasons do you have for making this assessment?

> For what it's worty: when musl first added x32, we had a lot of nasty
> x32-local hacks to work around the fact that the kernel was not
> prepared to see junk in the padding bits of tv_nsec. After doing all
> the time64 work for 32-bit archs, though, it ended up that there were
> clean places to do all the translation needed. It might turn out the
> same for glibc.

It is entirely possible that glibc's use of __syscall_slong_t for tv_nsec on x32 (and several other ABIs) is working around a problem that no longer exists, or that could be pushed out of scope by bumping the minimum supported kernel version.  I do not have a functional testbed for any of the affected ABIs.  I wonder if there's anyone reading this who has such a testbed, who could experiment with a change along the lines of

--- struct_timespec.h.old       2021-12-08 15:04:49.000000001 -0500
+++ struct_timespec.h   2021-12-08 15:08:59.000000001 -0500
@@ -15,18 +15,14 @@
 #else
   __time_t tv_sec;              /* Seconds.  */
 #endif
-#if __WORDSIZE == 64 \
-  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
-  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
-  __syscall_slong_t tv_nsec;    /* Nanoseconds.  */
-#else
-# if __BYTE_ORDER == __BIG_ENDIAN
+#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
+  && __BYTE_ORDER == __BIG_ENDIAN
   int: 32;           /* Padding.  */
+#endif
   long int tv_nsec;  /* Nanoseconds.  */
-# else
-  long int tv_nsec;  /* Nanoseconds.  */
+#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
+  && __BYTE_ORDER == __LITTLE_ENDIAN
   int: 32;           /* Padding.  */
-# endif
 #endif
 };

and report back on the minimum kernel version for which this works correctly.

(I would still want the type changed in the standards, but only as futureproofing.)

zw
  
Rich Felker Dec. 8, 2021, 8:34 p.m. UTC | #6
On Wed, Dec 08, 2021 at 03:10:22PM -0500, Zack Weinberg wrote:
> On Wed, Dec 8, 2021, at 1:25 PM, Rich Felker wrote:
> > In discussing this with others in musl community, one thing that came
> > up is that it's extremely unlikely WG14 will be happy with this
> > proposed change, even if the Austin Group were,
> 
> What reasons do you have for making this assessment?

WG14 is generally very slow-moving, conservative about making changes,
and not agreeable to "please change this because we botched something
in our implementation" or even far more reasonable desires for changes
for the sake of maintaining ABI stability. They're also likely not
agreeable to your view that this morally "should be" a typedef because
of being a user-kernel boundary, because from their perspective (also
mine) the fact that there's a user-kernel boundary here is an
implementation detail not anything fundamental.

> > For what it's worty: when musl first added x32, we had a lot of nasty
> > x32-local hacks to work around the fact that the kernel was not
> > prepared to see junk in the padding bits of tv_nsec. After doing all
> > the time64 work for 32-bit archs, though, it ended up that there were
> > clean places to do all the translation needed. It might turn out the
> > same for glibc.
> 
> It is entirely possible that glibc's use of __syscall_slong_t for
> tv_nsec on x32 (and several other ABIs) is working around a problem
> that no longer exists,

It's not so much "working around a problem" as much as matching wrong
choices made on the kernel side at the time x32 was introduced. Linus
was apparently not aware that the API type was long (in the sense of
the C long type in the userspace ABI) and, as I understand it,
insisted the folks doing x32 use the existing 64-bit struct with
64-bit "kernel long" in it. So that's what happened. It would have
worked, even at the time, to just do the matching padding on the
userspace side if there'd been a tiny x32 shim in the kernel, for
syscalls where userspace passes-in a timespec, to zero the upper bits,
but that wasn't done until much later when it was integrated with the
kernel time64 support for 32-bit archs.

> or that could be pushed out of scope by
> bumping the minimum supported kernel version. I do not have a
> functional testbed for any of the affected ABIs. I wonder if there's
> anyone reading this who has such a testbed, who could experiment
> with a change along the lines of
> 
> --- struct_timespec.h.old       2021-12-08 15:04:49.000000001 -0500
> +++ struct_timespec.h   2021-12-08 15:08:59.000000001 -0500
> @@ -15,18 +15,14 @@
>  #else
>    __time_t tv_sec;              /* Seconds.  */
>  #endif
> -#if __WORDSIZE == 64 \
> -  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
> -  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
> -  __syscall_slong_t tv_nsec;    /* Nanoseconds.  */
> -#else
> -# if __BYTE_ORDER == __BIG_ENDIAN
> +#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
> +  && __BYTE_ORDER == __BIG_ENDIAN
>    int: 32;           /* Padding.  */
> +#endif
>    long int tv_nsec;  /* Nanoseconds.  */
> -# else
> -  long int tv_nsec;  /* Nanoseconds.  */
> +#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
> +  && __BYTE_ORDER == __LITTLE_ENDIAN
>    int: 32;           /* Padding.  */
> -# endif
>  #endif
>  };
> 
> and report back on the minimum kernel version for which this works correctly.

I'll see if anyone I know has a glibc x32 test environment to do this.

> (I would still want the type changed in the standards, but only as futureproofing.)

https://xkcd.com/927/

Rich
  
Adhemerval Zanella Dec. 8, 2021, 9:12 p.m. UTC | #7
On 08/12/2021 17:10, Zack Weinberg via Libc-alpha wrote:
> On Wed, Dec 8, 2021, at 1:25 PM, Rich Felker wrote:
>> In discussing this with others in musl community, one thing that came
>> up is that it's extremely unlikely WG14 will be happy with this
>> proposed change, even if the Austin Group were,
> 
> What reasons do you have for making this assessment?
> 
>> For what it's worty: when musl first added x32, we had a lot of nasty
>> x32-local hacks to work around the fact that the kernel was not
>> prepared to see junk in the padding bits of tv_nsec. After doing all
>> the time64 work for 32-bit archs, though, it ended up that there were
>> clean places to do all the translation needed. It might turn out the
>> same for glibc.
> 
> It is entirely possible that glibc's use of __syscall_slong_t for tv_nsec on x32 (and several other ABIs) is working around a problem that no longer exists, or that could be pushed out of scope by bumping the minimum supported kernel version.  I do not have a functional testbed for any of the affected ABIs.  I wonder if there's anyone reading this who has such a testbed, who could experiment with a change along the lines of
> 
> --- struct_timespec.h.old       2021-12-08 15:04:49.000000001 -0500
> +++ struct_timespec.h   2021-12-08 15:08:59.000000001 -0500
> @@ -15,18 +15,14 @@
>  #else
>    __time_t tv_sec;              /* Seconds.  */
>  #endif
> -#if __WORDSIZE == 64 \
> -  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
> -  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
> -  __syscall_slong_t tv_nsec;    /* Nanoseconds.  */
> -#else
> -# if __BYTE_ORDER == __BIG_ENDIAN
> +#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
> +  && __BYTE_ORDER == __BIG_ENDIAN
>    int: 32;           /* Padding.  */
> +#endif
>    long int tv_nsec;  /* Nanoseconds.  */
> -# else
> -  long int tv_nsec;  /* Nanoseconds.  */
> +#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
> +  && __BYTE_ORDER == __LITTLE_ENDIAN
>    int: 32;           /* Padding.  */
> -# endif
>  #endif
>  };

We can even be more clever and use something similar to what musl does:

diff --git a/time/bits/types/struct_timespec.h b/time/bits/types/struct_timespec.h
index 489e81136d..b962d3f95f 100644
--- a/time/bits/types/struct_timespec.h
+++ b/time/bits/types/struct_timespec.h
@@ -15,19 +15,9 @@ struct timespec
 #else
   __time_t tv_sec;             /* Seconds.  */
 #endif
-#if __WORDSIZE == 64 \
-  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
-  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
-  __syscall_slong_t tv_nsec;   /* Nanoseconds.  */
-#else
-# if __BYTE_ORDER == __BIG_ENDIAN
-  int: 32;           /* Padding.  */
-  long int tv_nsec;  /* Nanoseconds.  */
-# else
-  long int tv_nsec;  /* Nanoseconds.  */
-  int: 32;           /* Padding.  */
-# endif
-#endif
+  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __BIG_ENDIAN);
+  long int tv_nsec;
+  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __LITTLE_ENDIAN);
 };
 
 #endif

> 
> and report back on the minimum kernel version for which this works correctly.

The above does not show any regression on a recent 5.13, but I am not sure
what the minimum kernel version is on which it works correctly on x32.

I don't mind bumping the minimum kernel version for x32; however, it would
be good to check with the x86 maintainers.

> 
> (I would still want the type changed in the standards, but only as futureproofing.)
> 
> zw
>
  
Alejandro Colomar Dec. 8, 2021, 9:53 p.m. UTC | #8
Hi Adhemerval, Paul,

On 12/8/21 22:12, Adhemerval Zanella wrote:
> We can even be more clever and use something similar to what musl does:
> 
> diff --git a/time/bits/types/struct_timespec.h b/time/bits/types/struct_timespec.h
> index 489e81136d..b962d3f95f 100644
> --- a/time/bits/types/struct_timespec.h
> +++ b/time/bits/types/struct_timespec.h
> @@ -15,19 +15,9 @@ struct timespec
>   #else
>     __time_t tv_sec;             /* Seconds.  */
>   #endif
> -#if __WORDSIZE == 64 \
> -  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
> -  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
> -  __syscall_slong_t tv_nsec;   /* Nanoseconds.  */
> -#else
> -# if __BYTE_ORDER == __BIG_ENDIAN
> -  int: 32;           /* Padding.  */
> -  long int tv_nsec;  /* Nanoseconds.  */
> -# else
> -  long int tv_nsec;  /* Nanoseconds.  */
> -  int: 32;           /* Padding.  */
> -# endif
> -#endif
> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __BIG_ENDIAN);
> +  long int tv_nsec;
> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __LITTLE_ENDIAN);
>   };
>   
>   #endif
> 
>>
>> and report back on the minimum kernel version for which this works correctly.
> 
> The above does not show any regression on a recent 5.13, but I am not sure which
> is the minimal version it does work correctly on x32.
> 
> I don't mind bump the minimal kernel version for x32, however it would be good
> to check with x86 maintainers.

If that works, this seems to me a clean workaround for the x32 bug (IMO 
both a spec bug and a kernel bug; in others' opinions only a kernel bug, 
in others' opinions only a spec bug, but we agree there's a bug).  We 
simply forget the bug ever existed, and hope no one triggers it again in 
a future implementation.  In the manual page we can add a small bug note 
about past implementations, noting that it doesn't apply anymore; that 
is much better than a prominent note that some implementations are 
currently buggy.

> 
>>
>> (I would still want the type changed in the standards, but only as futureproofing.)
>>

Regarding this, I completely agree with Zack that this is a spec bug, 
and since the standards simply wrote existing practice in stone, we can 
only conclude that existing practice wasn't ideal.  Having a contract so 
specific about the type of tv_nsec is unnecessary, and in general C has 
moved in the other direction: use int or typedefs almost everywhere; 
interfaces that use long are rare (and short even rarer), and usually 
justified.  This case is just an accident of history.

I also see the value of keeping some accidents just for the sake of not 
accidentally making bigger accidents, and C has traditionally done this, 
so I could live with 'long' for tv_nsec, even if it is wrong.

In this case, in principle, snseconds_t would be backwards compatible, 
and it would allow future implementations (say 100 years from now?) to 
move to int if needed (maybe for ABI stability, maybe for other future 
reasons that we cannot foresee), or would allow keeping it defined as 
long forever if API stability is paramount.

But if you disagree with that, the above changes seem reasonable and 
enough to me.

Paul:

On 12/8/21 19:12, Paul Eggert wrote:
 >
 > I'm still not seeing the need for a user-visible type "snseconds_t".
 >

I hope Zack and I made our reasons visible to you in our latest emails 
(including this one).

 > In response to the earlier proposal "[RFC v2 1/2] sys/types.h: Define
 > new type: snseconds_t", Joseph suggested omitting snseconds_t[1], I
 > supported this suggestion[2], and I don't recall seeing any statement
 > explaining why this type needs to be user-visible.

Joseph suggested omitting it, and also suggested that, if it were made 
visible, it be done in a separate patch for a separate discussion, and 
that's exactly what I did.

I enjoy these discussions, even when I know beforehand that my proposed 
changes will be rejected, because I (and also the other party) usually 
learn something, and hopefully we fix other (related or unrelated) 
things that help all of us.  In this case we learnt some better code 
from musl, and cleaned up some messy code (or are on the path to doing so).

So please don't feel ignored in your rejection; I took it into account :)

 >
 > This is a separate issue of whether we need a __snseconds_t type at all
 > (which is the disagreement that Rich Felker is expressing). Although the
 > type __snseconds_t may well be useful for glibc's own purposes, my
 > experience is that user code doesn't need a snseconds_t type without the
 > leading underscores. Arguably a user-visible snseconds_t type would
 > cause more confusion than it'd cure, and so would be a net negative.

Yes, and that's why in v3 they are in different patches, although since 
they have some relation, they are still in the same patch set.

Thanks,
Alex
  
Adhemerval Zanella Dec. 9, 2021, 7:20 p.m. UTC | #9
On 08/12/2021 18:53, Alejandro Colomar (man-pages) wrote:
> Hi Adhemerval, Paul,
> 
> On 12/8/21 22:12, Adhemerval Zanella wrote:
>> We can even be more clever and use something similar to what musl does:
>>
>> diff --git a/time/bits/types/struct_timespec.h b/time/bits/types/struct_timespec.h
>> index 489e81136d..b962d3f95f 100644
>> --- a/time/bits/types/struct_timespec.h
>> +++ b/time/bits/types/struct_timespec.h
>> @@ -15,19 +15,9 @@ struct timespec
>>   #else
>>     __time_t tv_sec;             /* Seconds.  */
>>   #endif
>> -#if __WORDSIZE == 64 \
>> -  || (defined __SYSCALL_WORDSIZE && __SYSCALL_WORDSIZE == 64) \
>> -  || (__TIMESIZE == 32 && !defined __USE_TIME_BITS64)
>> -  __syscall_slong_t tv_nsec;   /* Nanoseconds.  */
>> -#else
>> -# if __BYTE_ORDER == __BIG_ENDIAN
>> -  int: 32;           /* Padding.  */
>> -  long int tv_nsec;  /* Nanoseconds.  */
>> -# else
>> -  long int tv_nsec;  /* Nanoseconds.  */
>> -  int: 32;           /* Padding.  */
>> -# endif
>> -#endif
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __BIG_ENDIAN);
>> +  long int tv_nsec;
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __LITTLE_ENDIAN);
>>   };
>>     #endif
>>
>>>
>>> and report back on the minimum kernel version for which this works correctly.
>>
>> The above does not show any regression on a recent 5.13, but I am not sure which
>> is the minimal version it does work correctly on x32.
>>
>> I don't mind bump the minimal kernel version for x32, however it would be good
>> to check with x86 maintainers.
> 
> If that works, this seems to me a clean workaround to the X32 bug (IMO both spec and kernel bug, in others' opinions kernel bug, in others' opinions spec bug, but we agree there's a bug).  We simply forget the bug ever existed, and hope no one triggers it again in a future implementation.  In the manual page we can add a small bug note for past implementations, noting that it doesn't apply anymore; instead of a prominent note that some implementations are currently buggy, which would be much worse.

It is clearly a bug [1], which was initially closed as not-to-be-fixed, but
I think we should fix it anyway.  I have sent a proposed fix [2], which
should cover old kernels that do not sanitize the high bits.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=16437
[2] https://sourceware.org/pipermail/libc-alpha/2021-December/133919.html

> 
>>
>>>
>>> (I would still want the type changed in the standards, but only as futureproofing.)
>>>
> 
> Regarding this, I completely agree with Zack, that this is a bug spec, and since the standards simply wrote in stone existing practice, we can only conclude that existing practice wasn't ideal.  Having a contract so specific about the type of tv_nsec is unnecessary and in general C has moved in the other direction: use int or typedefs almost everywhere; rare are the interfaces that use long (and even more rarely or never short), and usually justified.  This case is just an accident of history.
> 
> I also see the value of keeping some accidents just for the sake of not accidentally making bigger accidents, and C has traditionally done this, so I could live with 'long' for tv_nsec, even if it is wrong.
> 
> In this case, in principle, snseconds_t would be backwards compatible, and it would allow future implementations (say 100 years from now?) to move to int if needed (maybe for ABI stability, maybe for other future reasons that we cannot foresee), or would allow to keep it defined as long forever if API stability is god.
> 
> But if you disagree with that, the above changes seem reasonable and enough to me.

Sorry, but I still see no reason to push for snseconds_t, nor __snseconds_t.
The *main* motivation is clearly a *bad* decision to use a non-defined type 
on x32 for the sake of avoiding fixing it in glibc (which, I grant you, back 
then would have required a lot of code changes).  Now that we are pushing 
for more code unification, the possible fix is not that complex.

Also, the type is already futureproof, unless you have a future ABI that
defines long with fewer than 32 bits (which I think would hit other issues
before this one).  Unlike time_t, tv_nsec is really bounded by a hard
constraint (it needs to hold nanoseconds, rather than an unbounded time in
the future).

> 
> Paul:
> 
> On 12/8/21 19:12, Paul Eggert wrote:
>>
>> I'm still not seeing the need for a user-visible type "snseconds_t".
>>
> 
> I hope Zack and I made our reasons visible to you in our latest (and this) emails.
> 
>> In response to the earlier proposal "[RFC v2 1/2] sys/types.h: Define
>> new type: snseconds_t", Joseph suggested omitting snseconds_t[1], I
>> supported this suggestion[2], and I don't recall seeing any statement
>> explaining why this type needs to be user-visible.
> 
> Joseph suggested omitting it, and also suggested that if making it visible, do it in a separate patch for a separate discussion, and that's exactly what I did.
> 
> I enjoy these discussions, even when I know beforehand that my proposed changes will be rejected, because I (and also the other part) usually learn something, and hopefully fix other (related or unrelated) things that help all of us.  In this case we learnt some better code from musl, and cleaned up some mess of code (or are in the path to do so).
> 
> So please don't feel ignored in your rejection; I took it into account :)
> 
>>
>> This is a separate issue of whether we need a __snseconds_t type at all
>> (which is the disagreement that Rich Felker is expressing). Although the
>> type __snseconds_t may well be useful for glibc's own purposes, my
>> experience is that user code doesn't need a snseconds_t type without the
>> leading underscores. Arguably a user-visible snseconds_t type would
>> cause more confusion than it'd cure, and so would be a net negative.
> 
> Yes, and that's why in v3 they are in different patches, although since they have some relation, they are still in the same patch set.
> 
> Thanks,
> Alex
> 
>
  
Paul Eggert Dec. 9, 2021, 7:42 p.m. UTC | #10
On 12/8/21 13:12, Adhemerval Zanella wrote:
> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __BIG_ENDIAN);
> +  long int tv_nsec;
> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __LITTLE_ENDIAN);

Nice idea, and we can simplify things more: we don't need that last line 
as the compiler will insert that padding for us on all glibc platforms 
that need it. Also, the code should use 'long int' consistently. 
Further, there's no reason for struct timespec to mention __time64_t or 
__time_t; it can just use time_t consistently.

Something like the attached, say.
  
Adhemerval Zanella Dec. 9, 2021, 7:52 p.m. UTC | #11
On 09/12/2021 16:42, Paul Eggert wrote:
> On 12/8/21 13:12, Adhemerval Zanella wrote:
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __BIG_ENDIAN);
>> +  long int tv_nsec;
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == __LITTLE_ENDIAN);
> 
> Nice idea, and we can simplify things more: we don't need that last line as the compiler will insert that padding for us on all glibc platforms that need it.

I think it doesn't hurt to be explicit here.

> Also, the code should use 'long int' consistently. Further, there's no reason for struct timespec to mention __time64_t or __time_t; it can just use time_t consistently.
> 
> Something like the attached, say.

Florian raised zero-width bitfields might not be ABI safe [1],
so I am using Zack suggestion:

struct timespec
{
  time_t tv_sec;                /* Seconds.  */
#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
  && __BYTE_ORDER == __BIG_ENDIAN
  int: 32;                      /* Padding.  */
#endif
  long int tv_nsec;             /* Nanoseconds.  */
#if __WORDSIZE == 32 && (__TIMESIZE == 64 || defined __USE_TIME_BITS64) \
  && __BYTE_ORDER == __LITTLE_ENDIAN
  int: 32;                      /* Padding.  */
#endif
};

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=102024
  
Joseph Myers Dec. 9, 2021, 8:13 p.m. UTC | #12
On Thu, 9 Dec 2021, Paul Eggert wrote:

> Nice idea, and we can simplify things more: we don't need that last line as
> the compiler will insert that padding for us on all glibc platforms that need
> it. Also, the code should use 'long int' consistently. Further, there's no

On 32-bit x86, with _TIME_BITS=64, the structure is meant to have trailing 
padding, but won't unless it's explicit (the alignment of long long in 
structures is 4 bytes).
  
Rich Felker Dec. 9, 2021, 8:17 p.m. UTC | #13
On Thu, Dec 09, 2021 at 08:13:53PM +0000, Joseph Myers wrote:
> On Thu, 9 Dec 2021, Paul Eggert wrote:
> 
> > Nice idea, and we can simplify things more: we don't need that last line as
> > the compiler will insert that padding for us on all glibc platforms that need
> > it. Also, the code should use 'long int' consistently. Further, there's no
> 
> On 32-bit x86, with _TIME_BITS=64, the structure is meant to have trailing 
> padding, but won't unless it's explicit (the alignment of long long in 
> structures is 4 bytes).

To be clear: omitting the explicit padding here would be an ABI break
*and* a buffer overflow, since the kernel will write 16 bytes but the
struct would only be 12 bytes. Thus the explicit padding is mandatory.

At least x86 (32-bit) and m68k are affected. I think there are a few
other obscure archs without 8-byte alignment (maybe or1k or
microblaze?) that might be affected too, but I'm not sure whether any
of them are supported by glibc.

Rich
  
Alejandro Colomar Dec. 9, 2021, 8:23 p.m. UTC | #14
Hi Paul,

On 12/9/21 20:42, Paul Eggert wrote:
> On 12/8/21 13:12, Adhemerval Zanella wrote:
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == 
>> __BIG_ENDIAN);
>> +  long int tv_nsec;
>> +  int : 8 * (sizeof (time_t) - sizeof (long)) * (__BYTE_ORDER == 
>> __LITTLE_ENDIAN);
> 
> Nice idea, and we can simplify things more: we don't need that last line 
> as the compiler will insert that padding for us on all glibc platforms 
> that need it. Also, the code should use 'long int' consistently. 
> Further, there's no reason for struct timespec to mention __time64_t or 
> __time_t; it can just use time_t consistently.

There are a few headers that POSIX says shall define timespec but it 
doesn't mention time_t in them, and AFAIR (I may remember wrongly), a 
header can't define a type if the standard doesn't say it's defined there.

The list is:

<aio.h>, <mqueue.h>, and <signal.h>.

See timespec and time_t on system_data_types(7).


Thanks,
Alex
  
Joseph Myers Dec. 9, 2021, 8:29 p.m. UTC | #15
On Thu, 9 Dec 2021, Alejandro Colomar (man-pages) via Libc-alpha wrote:

> There are a few headers that POSIX says shall define timespec but it doesn't
> mention time_t in them, and AFAIR (I may remember wrongly), a header can't
> define a type if the standard doesn't say it's defined there.

In POSIX, unlike in ISO C, *_t is reserved in all headers.

In general we've moved towards defining POSIX *_t types for all POSIX 
headers the standard specifies to include some declaration involving those 
types, whether or not the standard requires that *_t name to be defined in 
that header.  (Newer POSIX versions mostly tend to require such *_t types 
to be defined in headers that use them; older POSIX versions have fewer 
such requirements, but still have the *_t reservation.)
  
Alejandro Colomar Dec. 9, 2021, 8:34 p.m. UTC | #16
Hi Joseph,

On 12/9/21 21:29, Joseph Myers wrote:
> On Thu, 9 Dec 2021, Alejandro Colomar (man-pages) via Libc-alpha wrote:
> 
>> There are a few headers that POSIX says shall define timespec but it doesn't
>> mention time_t in them, and AFAIR (I may remember wrongly), a header can't
>> define a type if the standard doesn't say it's defined there.
> 
> In POSIX, unlike in ISO C, *_t is reserved in all headers.
> 
> In general we've moved towards defining POSIX *_t types for all POSIX
> headers the standard specifies to include some declaration involving those
> types, whether or not the standard requires that *_t name to be defined in
> that header.  (Newer POSIX versions mostly tend to require such *_t types
> to be defined in headers that use them; older POSIX versions have fewer
> such requirements, but still have the *_t reservation.)
> 

Ahh, thanks!

On 12/9/21 21:23, Alejandro Colomar (man-pages) wrote:
 > <aio.h>, <mqueue.h>, and <signal.h>.

However... time_t is in ISO C, and <signal.h> is in ISO C too.  That may 
be a problem?

Cheers,
Alex
  
Joseph Myers Dec. 9, 2021, 8:40 p.m. UTC | #17
On Thu, 9 Dec 2021, Alejandro Colomar (man-pages) via Libc-alpha wrote:

> On 12/9/21 21:23, Alejandro Colomar (man-pages) wrote:
> > <aio.h>, <mqueue.h>, and <signal.h>.
> 
> However... time_t is in ISO C, and <signal.h> is in ISO C too.  That may be a
> problem?

ISO C <signal.h> doesn't include struct timespec, so there is no problem 
there.  We only expect to define *_t types that are actually used by some 
other declaration present in the header for the chosen standard 
(__USE_POSIX199309 is the condition for struct timespec in glibc 
<signal.h>).
  

Patch

diff --git a/posix/sys/types.h b/posix/sys/types.h
index 477a45f4af..dae71f92b7 100644
--- a/posix/sys/types.h
+++ b/posix/sys/types.h
@@ -140,6 +140,11 @@  typedef __suseconds_t suseconds_t;
 # endif
 #endif
 
+#ifndef __snseconds_t_defined
+typedef __snseconds_t snseconds_t;
+# define __snseconds_t_defined
+#endif
+
 #define	__need_size_t
 #include <stddef.h>