
[BZ,#19679,x86_64] Set DL_RUNIME_UNALIGNED_VEC_SIZE to 8

Message ID CAMe9rOr9ewbZEj_AY_pLv8GUWju=ZxVNbOgQ1FxO_6Z5vL8EXQ@mail.gmail.com
State New, archived

Commit Message

H.J. Lu Feb. 19, 2016, 11:49 p.m. UTC
On Fri, Feb 19, 2016 at 2:54 PM, Carlos O'Donell <carlos@redhat.com> wrote:
> On 02/19/2016 05:09 PM, H.J. Lu wrote:
>> Tested on x86-64.  Any comments?
>
> Why isn't this DL_RUN*T*IME_UNALIGNED_VEC_SIZE?
>
> Typos in macro interfaces are the whole reason
> we added -Wundef.  This looks very typo-prone;
> I always want to type "DL_RUNTIME_...".
>
>> H.J.
>> ---
>>       [BZ #19679]
>>       * sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
>>       Set to 8.
>> ---
>>  sysdeps/x86_64/dl-trampoline.S | 7 +++++--
>>  1 file changed, 5 insertions(+), 2 deletions(-)
>>
>> diff --git a/sysdeps/x86_64/dl-trampoline.S b/sysdeps/x86_64/dl-trampoline.S
>> index 9fb6b13..0a480b6 100644
>> --- a/sysdeps/x86_64/dl-trampoline.S
>> +++ b/sysdeps/x86_64/dl-trampoline.S
>> @@ -34,8 +34,11 @@
>>  #endif
>>
>>  #ifndef DL_RUNIME_UNALIGNED_VEC_SIZE
>> -/* The maximum size of unaligned vector load and store.  */
>> -# define DL_RUNIME_UNALIGNED_VEC_SIZE 16
>> +/* The maximum size of unaligned vector load and store in the dynamic
>
> The maximum size *in bytes* ...
>
>> +   linker.  Since SSE optimized memory/string functions with aligned
>> +   SSE register load and store are used in the dynamic linker, we must
>> +   set this to 8.  */
>> +# define DL_RUNIME_UNALIGNED_VEC_SIZE 8
>
> OK.
>
>>  #endif
>>
>>  /* True if _dl_runtime_resolve should align stack to VEC_SIZE bytes.  */
>>
>
> OK if you:
> * Fix the name to be DL_RUNTIME_*
> * Adjust the comment to use a unit of size.

This is what I checked in.

Thanks.

Comments

Florian Weimer Feb. 20, 2016, 8:38 a.m. UTC | #1
On 02/20/2016 12:49 AM, H.J. Lu wrote:
> +#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
> +/* The maximum size in bytes of unaligned vector load and store in the
> +   dynamic linker.  Since SSE optimized memory/string functions with
> +   aligned SSE register load and store are used in the dynamic linker,
> +   we must set this to 8 so that _dl_runtime_resolve_sse will align the
> +   stack before calling _dl_fixup.  */
> +# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8

The comment doesn't really explain the situation.  If all programs
actually followed the psABI, we wouldn't need this.

I think it's easier at this point to change the psABI to say that
__tls_get_addr can be called with an 8-byte-aligned stack.

And DL_RUNTIME_UNALIGNED_VEC_SIZE etc. can be removed.

Florian
Andreas Schwab Feb. 20, 2016, 8:44 a.m. UTC | #2
"H.J. Lu" <hjl.tools@gmail.com> writes:

> -#ifndef DL_RUNIME_UNALIGNED_VEC_SIZE
> -/* The maximum size of unaligned vector load and store.  */
> -# define DL_RUNIME_UNALIGNED_VEC_SIZE 16
> +#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
> +/* The maximum size in bytes of unaligned vector load and store in the
> +   dynamic linker.  Since SSE optimized memory/string functions with
> +   aligned SSE register load and store are used in the dynamic linker,
> +   we must set this to 8 so that _dl_runtime_resolve_sse will align the
> +   stack before calling _dl_fixup.  */
> +# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8
>  #endif

Why isn't DL_RUNTIME_UNALIGNED_VEC_SIZE unconditionally defined?

Andreas.
Markus Trippelsdorf Feb. 20, 2016, 9:16 a.m. UTC | #3
On 2016.02.20 at 09:38 +0100, Florian Weimer wrote:
> On 02/20/2016 12:49 AM, H.J. Lu wrote:
> > +#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
> > +/* The maximum size in bytes of unaligned vector load and store in the
> > +   dynamic linker.  Since SSE optimized memory/string functions with
> > +   aligned SSE register load and store are used in the dynamic linker,
> > +   we must set this to 8 so that _dl_runtime_resolve_sse will align the
> > +   stack before calling _dl_fixup.  */
> > +# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8
> 
> The comment doesn't really explain the situation.  If all programs
> actually followed the psABI, we wouldn't need this.
> 
> I think it's easier at this point to change the psABI to say that
> __tls_get_addr can be called with an 8-byte-aligned stack.

I think this would be an overreaction. 

It is actually quite hard to hit the issue in practice.  You would have
to build glibc with gcc-6 and -O3 to get the SSE instructions that
require proper alignment in the dynamic linker.
And then you have to use it with an old and buggy libstdc++ from
gcc-4.9.3.

One could argue that it isn't really the job of the dynamic linker to
fix unaligned stacks.
Florian Weimer Feb. 20, 2016, 12:05 p.m. UTC | #4
On 02/20/2016 10:16 AM, Markus Trippelsdorf wrote:
> On 2016.02.20 at 09:38 +0100, Florian Weimer wrote:
>> On 02/20/2016 12:49 AM, H.J. Lu wrote:
>>> +#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
>>> +/* The maximum size in bytes of unaligned vector load and store in the
>>> +   dynamic linker.  Since SSE optimized memory/string functions with
>>> +   aligned SSE register load and store are used in the dynamic linker,
>>> +   we must set this to 8 so that _dl_runtime_resolve_sse will align the
>>> +   stack before calling _dl_fixup.  */
>>> +# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8
>>
>> The comment doesn't really explain the situation.  If all programs
>> actually fallowed the psABI, we wouldn't need this.
>>
>> I think it's easier at this point to change the psABI to say that
>> __tls_getaddr can be called with an 8-byte-aligned stack.
> 
> I think this would be an overreaction. 

I don't know.  If glibc keeps this compatibility code indefinitely,
it's easy for other toolchains to make the same mistake.

> It is actually quite hard to hit the issue in practice.  You would have
> to build glibc with gcc-6 and -O3 to get the SSE instructions that
> require proper alignment in the dynamic linker.

We will gradually get more SSE instructions in glibc, including the
dynamic linker.

> And then you have to use it with an old and buggy libstdc++ from
> gcc-4.9.3.

I assume that could happen with TLS support in binaries from older
compilers.

> One could argue that it isn't really the job of the dynamic linker to
> fix unaligned stacks.

Correct, so why are we doing it?  It would be pretty drastic to tell
users to throw away their old binaries because they were built with a
buggy toolchain.

Florian
H.J. Lu Feb. 22, 2016, 3:52 p.m. UTC | #5
On Sat, Feb 20, 2016 at 4:05 AM, Florian Weimer <fweimer@redhat.com> wrote:
> On 02/20/2016 10:16 AM, Markus Trippelsdorf wrote:
>> On 2016.02.20 at 09:38 +0100, Florian Weimer wrote:
>>> On 02/20/2016 12:49 AM, H.J. Lu wrote:
>>>> +#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
>>>> +/* The maximum size in bytes of unaligned vector load and store in the
>>>> +   dynamic linker.  Since SSE optimized memory/string functions with
>>>> +   aligned SSE register load and store are used in the dynamic linker,
>>>> +   we must set this to 8 so that _dl_runtime_resolve_sse will align the
>>>> +   stack before calling _dl_fixup.  */
>>>> +# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8
>>>
>>> The comment doesn't really explain the situation.  If all programs
>>> actually followed the psABI, we wouldn't need this.
>>>
>>> I think it's easier at this point to change the psABI to say that
>>> __tls_get_addr can be called with an 8-byte-aligned stack.
>>
>> I think this would be an overreaction.
>
> I don't know.  If glibc keeps this compatibility code indefinitely,
> it's easy for other toolchains to make the same mistake.
>

On one hand, we are realigning the stack in glibc anyway, maybe forever,
so the compiler doesn't need to keep the stack aligned to 16 bytes when
calling __tls_get_addr.  We also realign the stack on AVX and AVX-512
machines anyway, and TLS isn't completely specified in the x86-64 psABI.

On the other hand, it is odd to change the psABI after 10 years just
because one compiler has a bug in its TLS implementation.

If we change the x86-64 psABI for this, I will update the i386 psABI to
specify that __tls_get_addr has 4-byte incoming stack alignment.

FYI, I am backporting it to the 2.23 branch.

Patch

From 8d9c92017d85f23ba6a2b3614b2f2bcf1820d6f0 Mon Sep 17 00:00:00 2001
From: "H.J. Lu" <hjl.tools@gmail.com>
Date: Fri, 19 Feb 2016 15:43:45 -0800
Subject: [PATCH] [x86_64] Set DL_RUNTIME_UNALIGNED_VEC_SIZE to 8

Due to GCC bug:

   https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58066

__tls_get_addr may be called with 8-byte stack alignment.  Although
this bug has been fixed in GCC 4.9.4, 5.3 and 6, we can't assume
that the stack will always be aligned to 16 bytes.  Since SSE optimized
memory/string functions with aligned SSE register load and store are
used in the dynamic linker, we must set DL_RUNTIME_UNALIGNED_VEC_SIZE
to 8 so that _dl_runtime_resolve_sse will align the stack before
calling _dl_fixup:

Dump of assembler code for function _dl_runtime_resolve_sse:
   0x00007ffff7deea90 <+0>:	push   %rbx
   0x00007ffff7deea91 <+1>:	mov    %rsp,%rbx
   0x00007ffff7deea94 <+4>:	and    $0xfffffffffffffff0,%rsp
                                ^^^^^^^^^^^ Align stack to 16 bytes
   0x00007ffff7deea98 <+8>:	sub    $0x100,%rsp
   0x00007ffff7deea9f <+15>:	mov    %rax,0xc0(%rsp)
   0x00007ffff7deeaa7 <+23>:	mov    %rcx,0xc8(%rsp)
   0x00007ffff7deeaaf <+31>:	mov    %rdx,0xd0(%rsp)
   0x00007ffff7deeab7 <+39>:	mov    %rsi,0xd8(%rsp)
   0x00007ffff7deeabf <+47>:	mov    %rdi,0xe0(%rsp)
   0x00007ffff7deeac7 <+55>:	mov    %r8,0xe8(%rsp)
   0x00007ffff7deeacf <+63>:	mov    %r9,0xf0(%rsp)
   0x00007ffff7deead7 <+71>:	movaps %xmm0,(%rsp)
   0x00007ffff7deeadb <+75>:	movaps %xmm1,0x10(%rsp)
   0x00007ffff7deeae0 <+80>:	movaps %xmm2,0x20(%rsp)
   0x00007ffff7deeae5 <+85>:	movaps %xmm3,0x30(%rsp)
   0x00007ffff7deeaea <+90>:	movaps %xmm4,0x40(%rsp)
   0x00007ffff7deeaef <+95>:	movaps %xmm5,0x50(%rsp)
   0x00007ffff7deeaf4 <+100>:	movaps %xmm6,0x60(%rsp)
   0x00007ffff7deeaf9 <+105>:	movaps %xmm7,0x70(%rsp)

	[BZ #19679]
	* sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
	Renamed to ...
	(DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
	* sysdeps/x86_64/dl-trampoline.h
	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
---
 ChangeLog                      | 14 ++++++++++++++
 sysdeps/x86_64/dl-trampoline.S | 20 ++++++++++++--------
 sysdeps/x86_64/dl-trampoline.h |  6 +++---
 3 files changed, 29 insertions(+), 11 deletions(-)

diff --git a/ChangeLog b/ChangeLog
index 1ed0e7b..db05bdc 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,17 @@ 
+2016-02-19  H.J. Lu  <hongjiu.lu@intel.com>
+
+	[BZ #19679]
+	* sysdeps/x86_64/dl-trampoline.S (DL_RUNIME_UNALIGNED_VEC_SIZE):
+	Renamed to ...
+	(DL_RUNTIME_UNALIGNED_VEC_SIZE): This.  Set to 8.
+	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
+	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.  Updated.
+	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
+	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
+	* sysdeps/x86_64/dl-trampoline.h
+	(DL_RUNIME_RESOLVE_REALIGN_STACK): Renamed to ...
+	(DL_RUNTIME_RESOLVE_REALIGN_STACK): This.
+
 2016-02-19  Mark Wielaard  <mjw@redhat.com>
 
 	* elf/elf.h: Add NT_ARM_SYSTEM_CALL.
diff --git a/sysdeps/x86_64/dl-trampoline.S b/sysdeps/x86_64/dl-trampoline.S
index 9fb6b13..39b8771 100644
--- a/sysdeps/x86_64/dl-trampoline.S
+++ b/sysdeps/x86_64/dl-trampoline.S
@@ -33,15 +33,19 @@ 
 # define DL_STACK_ALIGNMENT 8
 #endif
 
-#ifndef DL_RUNIME_UNALIGNED_VEC_SIZE
-/* The maximum size of unaligned vector load and store.  */
-# define DL_RUNIME_UNALIGNED_VEC_SIZE 16
+#ifndef DL_RUNTIME_UNALIGNED_VEC_SIZE
+/* The maximum size in bytes of unaligned vector load and store in the
+   dynamic linker.  Since SSE optimized memory/string functions with
+   aligned SSE register load and store are used in the dynamic linker,
+   we must set this to 8 so that _dl_runtime_resolve_sse will align the
+   stack before calling _dl_fixup.  */
+# define DL_RUNTIME_UNALIGNED_VEC_SIZE 8
 #endif
 
 /* True if _dl_runtime_resolve should align stack to VEC_SIZE bytes.  */
-#define DL_RUNIME_RESOLVE_REALIGN_STACK \
+#define DL_RUNTIME_RESOLVE_REALIGN_STACK \
   (VEC_SIZE > DL_STACK_ALIGNMENT \
-   && VEC_SIZE > DL_RUNIME_UNALIGNED_VEC_SIZE)
+   && VEC_SIZE > DL_RUNTIME_UNALIGNED_VEC_SIZE)
 
 /* Align vector register save area to 16 bytes.  */
 #define REGISTER_SAVE_VEC_OFF	0
@@ -76,7 +80,7 @@ 
 #ifdef HAVE_AVX512_ASM_SUPPORT
 # define VEC_SIZE		64
 # define VMOVA			vmovdqa64
-# if DL_RUNIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
+# if DL_RUNTIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
 #  define VMOV			vmovdqa64
 # else
 #  define VMOV			vmovdqu64
@@ -100,7 +104,7 @@  strong_alias (_dl_runtime_profile_avx, _dl_runtime_profile_avx512)
 
 #define VEC_SIZE		32
 #define VMOVA			vmovdqa
-#if DL_RUNIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
+#if DL_RUNTIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
 # define VMOV			vmovdqa
 #else
 # define VMOV			vmovdqu
@@ -119,7 +123,7 @@  strong_alias (_dl_runtime_profile_avx, _dl_runtime_profile_avx512)
 /* movaps/movups is 1-byte shorter.  */
 #define VEC_SIZE		16
 #define VMOVA			movaps
-#if DL_RUNIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
+#if DL_RUNTIME_RESOLVE_REALIGN_STACK || VEC_SIZE <= DL_STACK_ALIGNMENT
 # define VMOV			movaps
 #else
 # define VMOV			movups
diff --git a/sysdeps/x86_64/dl-trampoline.h b/sysdeps/x86_64/dl-trampoline.h
index f419183..b90836a 100644
--- a/sysdeps/x86_64/dl-trampoline.h
+++ b/sysdeps/x86_64/dl-trampoline.h
@@ -30,7 +30,7 @@ 
 #undef REGISTER_SAVE_AREA
 #undef LOCAL_STORAGE_AREA
 #undef BASE
-#if DL_RUNIME_RESOLVE_REALIGN_STACK
+#if DL_RUNTIME_RESOLVE_REALIGN_STACK
 # define REGISTER_SAVE_AREA	(REGISTER_SAVE_AREA_RAW + 8)
 /* Local stack area before jumping to function address: RBX.  */
 # define LOCAL_STORAGE_AREA	8
@@ -57,7 +57,7 @@ 
 	cfi_startproc
 _dl_runtime_resolve:
 	cfi_adjust_cfa_offset(16) # Incorporate PLT
-#if DL_RUNIME_RESOLVE_REALIGN_STACK
+#if DL_RUNTIME_RESOLVE_REALIGN_STACK
 # if LOCAL_STORAGE_AREA != 8
 #  error LOCAL_STORAGE_AREA must be 8
 # endif
@@ -146,7 +146,7 @@  _dl_runtime_resolve:
 	VMOV (REGISTER_SAVE_VEC_OFF + VEC_SIZE * 5)(%rsp), %VEC(5)
 	VMOV (REGISTER_SAVE_VEC_OFF + VEC_SIZE * 6)(%rsp), %VEC(6)
 	VMOV (REGISTER_SAVE_VEC_OFF + VEC_SIZE * 7)(%rsp), %VEC(7)
-#if DL_RUNIME_RESOLVE_REALIGN_STACK
+#if DL_RUNTIME_RESOLVE_REALIGN_STACK
 	mov %RBX_LP, %RSP_LP
 	cfi_def_cfa_register(%rsp)
 	movq (%rsp), %rbx
-- 
2.5.0