[v1] x86: Prevent SIG11 in memcmp-sse2 when data is concurrently modified [BZ #29863]

Message ID 20221214001147.2814047-1-goldstein.w.n@gmail.com
State Superseded
Series [v1] x86: Prevent SIG11 in memcmp-sse2 when data is concurrently modified [BZ #29863]

Checks

Context                Check    Description
dj/TryBot-apply_patch  success  Patch applied to master at the time it was sent
dj/TryBot-32bit        success  Build for i686

Commit Message

Noah Goldstein Dec. 14, 2022, 12:11 a.m. UTC
  In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIG11 in
`L(ret_nonzero_vec_end_0)` because the sequential logic assumes
that `(rdx - 32 + rax)` is a positive 32-bit integer.

To be clear, this "fix" does not mean this usage of `memcmp` is
supported. `memcmp` is incorrect when the values of `a` and/or `b`
are modified while it's running, and that incorrectness may manifest
itself as a SIG-11. That being said, if we can make the results
less dramatic with no cost to regular use cases, there is no harm
in doing so.
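
For context, a minimal sketch of the kind of incorrect usage that can
reach this path (purely illustrative, not a glibc test case; the length,
the flipped byte index, and the iteration count are arbitrary). One
thread keeps flipping a byte that both 16-byte loads cover while the
other thread calls `memcmp` with a length in the `[17, 32]` bucket:
```
/* Hypothetical reproducer sketch, not a glibc test case: the length,
   the flipped byte index, and the iteration count are arbitrary.
   memcmp on concurrently modified data is undefined; the point is only
   that, before the fix, the race can turn into a wild 64-bit offset.  */
#include <pthread.h>
#include <string.h>
#include <stdlib.h>

#define LEN 24			/* any length in [17, 31] has an overlap region */

static char a[LEN];
static char b[LEN];
static volatile int stop;

static void *
flipper (void *arg)
{
  (void) arg;
  while (!stop)
    b[15] ^= 1;			/* byte 15 is read by both 16-byte loads */
  return NULL;
}

int
main (void)
{
  pthread_t t;
  volatile int sink;

  if (pthread_create (&t, NULL, flipper, NULL) != 0)
    abort ();

  for (long i = 0; i < 100000000L; i++)
    sink = memcmp (a, b, LEN);	/* may fault before the fix */
  (void) sink;

  stop = 1;
  pthread_join (t, NULL);
  return 0;
}
```
Build with `-pthread` (and possibly `-fno-builtin` so the call is not
expanded inline); whether it actually faults depends on which `memcmp`
implementation the IFUNC selects, so treat it as a sketch rather than a
reliable reproducer.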

The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The 1-extra byte of code size from using the
64-bit instruction doesn't contribute to overall code size as the
next target is aligned and has multiple bytes of `nop` padding
before it. As well, all the logic between the add and `ret` still
fits in the same fetch block, so the cost of this change is
basically zero.

The sequential logic makes the assume behind the following code:
```
    /*
     * rsi = a
     * rdi = b
     * rdx = len - 32
     */
    /* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
    in this case, this check is also assume to cover a[0:(31 - len)]
    and b[0:(31 - len)].  */
	movups	(%rsi), %xmm0
	movups	(%rdi), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ)

    /* cmp a[len-16:len-1] and b[len-16:len-1].  */
    movups	16(%rsi, %rdx), %xmm0
	movups	16(%rdi, %rdx), %xmm1
	PCMPEQ	%xmm0, %xmm1
	pmovmskb %xmm1, %eax
	subl	%ecx, %eax
	jnz	L(END_NEQ2)
    ret

L(END_NEQ2):
    /* Position first mismatch.  */
    bsfl %eax, %eax

    /* BUG IS FROM THIS. The sequential version is able to assume this
    value is a positive 32-bit value because first check included
    bytes in range a[0:(31 - len)], b[0:(31 - len)] so `eax` must be
    greater than `31 - len` so the minimum value of `edx` + `eax` is
    `(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
    `a` or `b` could have been changed so a mismatch in `eax` less or
    equal than `(31 - len)` is possible (the new low bound in `(16 -
    len)`. This can result in a negative 32-bit signed integer, which
    when non-sign extended to 64-bits is a random large value out of
    bounds. */
    addl %edx, %eax

    /* Crash here because 32-bit negative number in `eax` non-sign
    extends to out of bounds 64-bit offset.  */
    movzbl 16(%rdi, %rax), %ecx
    movzbl 16(%rsi, %rax), %eax
```

This fix is quite simple, just make the `addl %edx, %eax` 64 bit (i.e
`addq %rdx, %rax`). This prevent the 32-bit non-sign extension
and since `eax` still a low bound of `16 - len` the `rdx + rax`
is bound by `(len - 32) - (16 - len) >= -16`. Since we have a
fixed offset of `16` in the memory access this must be inbounds.
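
For what it's worth, the zero- vs. sign-extension difference is easy to
model in C (illustrative only; the variable names just mirror the
registers in the excerpt above):
```
/* Illustrative model of the addressing arithmetic, not glibc code.
   The variables mirror the registers in the excerpt above.  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t len = 24;        /* a length in the [17, 32] bucket */
  int64_t rdx = len - 32;  /* -8, as set up before the loads */
  int64_t eax = 7;         /* a bsf result only possible under a race */

  /* addl %edx, %eax: the 32-bit sum is zero-extended when used as a
     64-bit index, so -1 becomes 0xffffffff.  */
  uint64_t rax_addl = (uint32_t) (rdx + eax);

  /* addq %rdx, %rax: the sign is preserved.  */
  int64_t rax_addq = rdx + eax;

  printf ("offset with addl: %llu\n",
	  (unsigned long long) (16 + rax_addl));   /* 4294967311 */
  printf ("offset with addq: %lld\n",
	  (long long) (16 + rax_addq));            /* 15 */
  return 0;
}
```
With the 32-bit add the computed offset lands roughly 4 GiB past the
buffer; with the 64-bit add it is 15, which the fixed `+16` displacement
keeps in bounds.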
---
 sysdeps/x86_64/multiarch/memcmp-sse2.S | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Carlos O'Donell Dec. 14, 2022, 2:41 a.m. UTC | #1
Please post a v2. Thanks!

Subject: x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]

Replaces SIG11 with SIGSEGV (the documented name of the signal).

On 12/13/22 19:11, Noah Goldstein via Libc-alpha wrote:
> In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
> are concurrently modified as `memcmp` runs, there can be a SIG11 in

s/SIG11/SIGSEGV/g

> `L(ret_nonzero_vec_end_0)` because the sequential logic assumes
> that `(rdx - 32 + rax)` is a positive 32-bit integer.
> 
> To be clear, this "fix" does not mean this usage of `memcmp` is
> supported. `memcmp` is incorrect when the values of `a` and/or `b`
> are modified while its running, and that incorrectness may manifest
> itself as a SIG-11. That being said, if we can make the results

s/SIG-11/SIGSEGV/g

> less dramatic with no cost to regular uses cases, there is no harm
> in doing so.

I agree that a user focused change like this is going to be a balance between
keeping it working for an unsupported use case versus the cost to the library.
Given that you've found a low-cost way to support the incorrect but idiomatic
use case then I have no sustained objections to this patch. However, this won't
be the last we hear of this as we continue down the path of optimizing against
a well defined memory model.

> The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
> `addq %rdx, %rax`. The 1-extra byte of code size from using the
> 64-bit instruction doesn't contribute to overall code size as the
> next target is aligned and has multiple bytes of `nop` padding
> before it. As well all the logic between the add and `ret` still
> fits in the same fetch block, so the cost of this change is
> basically zero.

OK.
 
> The sequential logic makes the assume behind the following code:

Suggest:
The relevant sequential logic can be seen in the following code:

> ```
>     /*
>      * rsi = a
>      * rdi = b
>      * rdx = len - 32
>      */
>     /* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
>     in this case, this check is also assume to cover a[0:(31 - len)]

s/assume/assumed/g

>     and b[0:(31 - len)].  */
> 	movups	(%rsi), %xmm0
> 	movups	(%rdi), %xmm1
> 	PCMPEQ	%xmm0, %xmm1
> 	pmovmskb %xmm1, %eax
> 	subl	%ecx, %eax
> 	jnz	L(END_NEQ)
> 
>     /* cmp a[len-16:len-1] and b[len-16:len-1].  */
>     movups	16(%rsi, %rdx), %xmm0
> 	movups	16(%rdi, %rdx), %xmm1
> 	PCMPEQ	%xmm0, %xmm1
> 	pmovmskb %xmm1, %eax
> 	subl	%ecx, %eax
> 	jnz	L(END_NEQ2)
>     ret
> 
> L(END2):
>     /* Position first mismatch.  */
>     bsfl %eax, %eax
> 
>     /* BUG IS FROM THIS. The sequential version is able to assume this

s/BUG IS FROM THIS. //g

>     value is a positive 32-bit value because first check included

s/because first/because the first/g

>     bytes in range a[0:(31 - len)], b[0:(31 - len)] so `eax` must be

s/,/ and/g

>     greater than `31 - len` so the minimum value of `edx` + `eax` is
>     `(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
>     `a` or `b` could have been changed so a mismatch in `eax` less or
>     equal than `(31 - len)` is possible (the new low bound in `(16 -

s/in/is/g

>     len)`. This can result in a negative 32-bit signed integer, which
>     when non-sign extended to 64-bits is a random large value out of

s/out of/that is out of/g

>     bounds. */
>     addl %edx, %eax
> 
>     /* Crash here because 32-bit negative number in `eax` non-sign
>     extends to out of bounds 64-bit offset.  */
>     movzbl 16(%rdi, %rax), %ecx
>     movzbl 16(%rsi, %rax), %eax
> ```
> 
> This fix is quite simple, just make the `addl %edx, %eax` 64 bit (i.e
> `addq %rdx, %rax`). This prevent the 32-bit non-sign extension

s/prevent/prevents/g

> and since `eax` still a low bound of `16 - len` the `rdx + rax`

s/still/is still/g

> is bound by `(len - 32) - (16 - len) >= -16`. Since we have a
> fixed offset of `16` in the memory access this must be inbounds.

s/inbounds/in bounds/g

> ---
>  sysdeps/x86_64/multiarch/memcmp-sse2.S | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
> index afd450d020..34e60e567d 100644
> --- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
> +++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
> @@ -308,7 +308,7 @@ L(ret_nonzero_vec_end_0):
>  	setg	%dl
>  	leal	-1(%rdx, %rdx), %eax
>  #  else
> -	addl	%edx, %eax
> +	addq	%rdx, %rax

OK. 64-bit addq.

>  	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
>  	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
>  	subl	%ecx, %eax
  
Andreas Schwab Dec. 14, 2022, 9:04 a.m. UTC | #2
On Dec 13 2022, Carlos O'Donell via Libc-alpha wrote:

> Please post a v2. Thanks!
>
> Subject: x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]
>
> Replaces SIG11 with SIGSEGV (the documented name of the signal).

Even better: out-of-bounds access (which can manifest in a multitude of
different ways).
  
H.J. Lu Dec. 14, 2022, 4:07 p.m. UTC | #3
On Tue, Dec 13, 2022 at 4:12 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
> are concurrently modified as `memcmp` runs, there can be a SIG11 in
> `L(ret_nonzero_vec_end_0)` because the sequential logic assumes
> that `(rdx - 32 + rax)` is a positive 32-bit integer.
>
> To be clear, this "fix" does not mean this usage of `memcmp` is
> supported. `memcmp` is incorrect when the values of `a` and/or `b`
> are modified while its running, and that incorrectness may manifest
> itself as a SIG-11. That being said, if we can make the results
> less dramatic with no cost to regular uses cases, there is no harm
> in doing so.
>
> The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
> `addq %rdx, %rax`. The 1-extra byte of code size from using the
> 64-bit instruction doesn't contribute to overall code size as the
> next target is aligned and has multiple bytes of `nop` padding
> before it. As well all the logic between the add and `ret` still
> fits in the same fetch block, so the cost of this change is
> basically zero.
>
> The sequential logic makes the assume behind the following code:
> ```
>     /*
>      * rsi = a
>      * rdi = b
>      * rdx = len - 32
>      */
>     /* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
>     in this case, this check is also assume to cover a[0:(31 - len)]
>     and b[0:(31 - len)].  */
>         movups  (%rsi), %xmm0
>         movups  (%rdi), %xmm1
>         PCMPEQ  %xmm0, %xmm1
>         pmovmskb %xmm1, %eax
>         subl    %ecx, %eax
>         jnz     L(END_NEQ)
>
>     /* cmp a[len-16:len-1] and b[len-16:len-1].  */
>     movups      16(%rsi, %rdx), %xmm0
>         movups  16(%rdi, %rdx), %xmm1
>         PCMPEQ  %xmm0, %xmm1
>         pmovmskb %xmm1, %eax
>         subl    %ecx, %eax
>         jnz     L(END_NEQ2)
>     ret
>
> L(END2):
>     /* Position first mismatch.  */
>     bsfl %eax, %eax
>
>     /* BUG IS FROM THIS. The sequential version is able to assume this
>     value is a positive 32-bit value because first check included
>     bytes in range a[0:(31 - len)], b[0:(31 - len)] so `eax` must be
>     greater than `31 - len` so the minimum value of `edx` + `eax` is
>     `(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
>     `a` or `b` could have been changed so a mismatch in `eax` less or
>     equal than `(31 - len)` is possible (the new low bound in `(16 -
>     len)`. This can result in a negative 32-bit signed integer, which
>     when non-sign extended to 64-bits is a random large value out of
>     bounds. */
>     addl %edx, %eax
>
>     /* Crash here because 32-bit negative number in `eax` non-sign
>     extends to out of bounds 64-bit offset.  */
>     movzbl 16(%rdi, %rax), %ecx
>     movzbl 16(%rsi, %rax), %eax
> ```
>
> This fix is quite simple, just make the `addl %edx, %eax` 64 bit (i.e
> `addq %rdx, %rax`). This prevent the 32-bit non-sign extension
> and since `eax` still a low bound of `16 - len` the `rdx + rax`
> is bound by `(len - 32) - (16 - len) >= -16`. Since we have a
> fixed offset of `16` in the memory access this must be inbounds.
> ---
>  sysdeps/x86_64/multiarch/memcmp-sse2.S | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
> index afd450d020..34e60e567d 100644
> --- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
> +++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
> @@ -308,7 +308,7 @@ L(ret_nonzero_vec_end_0):
>         setg    %dl
>         leal    -1(%rdx, %rdx), %eax
>  #  else
> -       addl    %edx, %eax
> +       addq    %rdx, %rax

Please add some comments here and also include the testcase.

>         movzbl  (VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
>         movzbl  (VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
>         subl    %ecx, %eax
> --
> 2.34.1
>

Thanks.
  

Patch

diff --git a/sysdeps/x86_64/multiarch/memcmp-sse2.S b/sysdeps/x86_64/multiarch/memcmp-sse2.S
index afd450d020..34e60e567d 100644
--- a/sysdeps/x86_64/multiarch/memcmp-sse2.S
+++ b/sysdeps/x86_64/multiarch/memcmp-sse2.S
@@ -308,7 +308,7 @@  L(ret_nonzero_vec_end_0):
 	setg	%dl
 	leal	-1(%rdx, %rdx), %eax
 #  else
-	addl	%edx, %eax
+	addq	%rdx, %rax
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rsi, %rax), %ecx
 	movzbl	(VEC_SIZE * -1 + SIZE_OFFSET)(%rdi, %rax), %eax
 	subl	%ecx, %eax