[v1,2/3] x86: Remove memcmp-sse4.S

Message ID 20220415055132.1257272-2-goldstein.w.n@gmail.com
State Superseded
Headers
Series [v1,1/3] x86: Optimize memcmp SSE2 in memcmp.S

Checks

Context Check Description
dj/TryBot-apply_patch success Patch applied to master at the time it was sent

Commit Message

Noah Goldstein April 15, 2022, 5:51 a.m. UTC
  The code didn't actually use any SSE4 instructions. The new memcmp-sse2
implementation is also faster.

geometric_mean(N=20) of page cross cases SSE2 / SSE4: 0.905

Note there are two regressions preferring SSE2 for Size = 1 and Size =
65.

Size = 1:
size, align0, align1, ret, New Time/Old Time
   1,      1,      1,   0,               1.2
   1,      1,      1,   1,             1.197
   1,      1,      1,  -1,               1.2

This is intentional. Based on profiles of GCC11 and Python3, Size == 1
is significantly less hot than sizes [4, 8] (which are made hotter).

Python3 Size = 1        -> 13.64%
Python3 Size = [4, 8]   -> 60.92%

GCC11   Size = 1        ->  1.29%
GCC11   Size = [4, 8]   -> 33.86%

size, align0, align1, ret, New Time/Old Time
   4,      4,      4,   0,             0.622
   4,      4,      4,   1,             0.797
   4,      4,      4,  -1,             0.805
   5,      5,      5,   0,             0.623
   5,      5,      5,   1,             0.777
   5,      5,      5,  -1,             0.802
   6,      6,      6,   0,             0.625
   6,      6,      6,   1,             0.813
   6,      6,      6,  -1,             0.788
   7,      7,      7,   0,             0.625
   7,      7,      7,   1,             0.799
   7,      7,      7,  -1,             0.795
   8,      8,      8,   0,             0.625
   8,      8,      8,   1,             0.848
   8,      8,      8,  -1,             0.914
   9,      9,      9,   0,             0.625

Size = 65:
size, align0, align1, ret, New Time/Old Time
  65,      0,      0,   0,             1.103
  65,      0,      0,   1,             1.216
  65,      0,      0,  -1,             1.227
  65,     65,      0,   0,             1.091
  65,      0,     65,   1,              1.19
  65,     65,     65,  -1,             1.215

This is because A) the checks in the range [65, 96] are now unrolled 2x
and B) smaller sizes <= 16 are now given a hotter path. By contrast,
the SSE4 version has a branch for Size = 80. The unrolled version gets
better performance for returns that need both comparisons.

size, align0, align1, ret, New Time/Old Time
 128,      4,      8,   0,             0.858
 128,      4,      8,   1,             0.879
 128,      4,      8,  -1,             0.888

As well, outside of microbenchmark environments, where branch behavior
is not fully predictable, the branch will have a real cost.
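The branchy-vs-unrolled trade-off above can be sketched in portable C (a hypothetical illustration, not the actual memcmp-sse2 assembly): the branchy form exits after the first 16-byte chunk, while the 2x-unrolled form issues both comparisons up front, so a return that needs the second chunk pays no extra mispredictable branch.

```c
#include <string.h>

/* Branchy: one conditional branch between the two 16-byte chunks.  */
static int
cmp32_branchy (const unsigned char *a, const unsigned char *b)
{
  int r = memcmp (a, b, 16);
  if (r != 0)
    return r;
  return memcmp (a + 16, b + 16, 16);
}

/* Unrolled 2x: both chunk comparisons are performed unconditionally,
   then combined; mirrors why the unrolled path wins when the result
   depends on both comparisons.  */
static int
cmp32_unrolled (const unsigned char *a, const unsigned char *b)
{
  int r0 = memcmp (a, b, 16);
  int r1 = memcmp (a + 16, b + 16, 16);
  return r0 != 0 ? r0 : r1;
}
```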
---
 sysdeps/x86_64/multiarch/Makefile          | 2 --
 sysdeps/x86_64/multiarch/ifunc-impl-list.c | 4 ----
 sysdeps/x86_64/multiarch/ifunc-memcmp.h    | 4 ----
 3 files changed, 10 deletions(-)
  

Comments

H.J. Lu April 15, 2022, 5:20 p.m. UTC | #1
On Thu, Apr 14, 2022 at 10:51 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> The code didn't actually use any SSE4 instructions. The new memcmp-sse2
> implementation is also faster.

Please mention that SSE4.1 ptest instruction was removed by

commit 2f9062d7171850451e6044ef78d91ff8c017b9c0
Author: Noah Goldstein <goldstein.w.n@gmail.com>
Date:   Wed Nov 10 16:18:56 2021 -0600

    x86: Shrink memcmp-sse4.S code size

> geometric_mean(N=20) of page cross cases SSE2 / SSE4: 0.905
>
> Note there are two regressions preferring SSE2 for Size = 1 and Size =
> 65.
>
> Size = 1:
> size, align0, align1, ret, New Time/Old Time
>    1,      1,      1,   0,               1.2
>    1,      1,      1,   1,             1.197
>    1,      1,      1,  -1,               1.2
>
> This is intentional. Based on profiles of GCC11 and Python3, Size == 1
> is significantly less hot than sizes [4, 8] (which are made hotter).
>
> Python3 Size = 1        -> 13.64%
> Python3 Size = [4, 8]   -> 60.92%
>
> GCC11   Size = 1        ->  1.29%
> GCC11   Size = [4, 8]   -> 33.86%
>
> size, align0, align1, ret, New Time/Old Time
>    4,      4,      4,   0,             0.622
>    4,      4,      4,   1,             0.797
>    4,      4,      4,  -1,             0.805
>    5,      5,      5,   0,             0.623
>    5,      5,      5,   1,             0.777
>    5,      5,      5,  -1,             0.802
>    6,      6,      6,   0,             0.625
>    6,      6,      6,   1,             0.813
>    6,      6,      6,  -1,             0.788
>    7,      7,      7,   0,             0.625
>    7,      7,      7,   1,             0.799
>    7,      7,      7,  -1,             0.795
>    8,      8,      8,   0,             0.625
>    8,      8,      8,   1,             0.848
>    8,      8,      8,  -1,             0.914
>    9,      9,      9,   0,             0.625
>
> Size = 65:
> size, align0, align1, ret, New Time/Old Time
>   65,      0,      0,   0,             1.103
>   65,      0,      0,   1,             1.216
>   65,      0,      0,  -1,             1.227
>   65,     65,      0,   0,             1.091
>   65,      0,     65,   1,              1.19
>   65,     65,     65,  -1,             1.215
>
> This is because A) the checks in the range [65, 96] are now unrolled 2x
> and B) smaller sizes <= 16 are now given a hotter path. By contrast,
> the SSE4 version has a branch for Size = 80. The unrolled version gets
> better performance for returns that need both comparisons.
>
> size, align0, align1, ret, New Time/Old Time
>  128,      4,      8,   0,             0.858
>  128,      4,      8,   1,             0.879
>  128,      4,      8,  -1,             0.888
>
> As well, outside of microbenchmark environments, where branch behavior
> is not fully predictable, the branch will have a real cost.
> ---
>  sysdeps/x86_64/multiarch/Makefile          | 2 --
>  sysdeps/x86_64/multiarch/ifunc-impl-list.c | 4 ----
>  sysdeps/x86_64/multiarch/ifunc-memcmp.h    | 4 ----
>  3 files changed, 10 deletions(-)
>

Please also remove memcmp-sse4.S.

> diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
> index b573966966..0400ea332b 100644
> --- a/sysdeps/x86_64/multiarch/Makefile
> +++ b/sysdeps/x86_64/multiarch/Makefile
> @@ -11,7 +11,6 @@ sysdep_routines += \
>    memcmp-avx2-movbe-rtm \
>    memcmp-evex-movbe \
>    memcmp-sse2 \
> -  memcmp-sse4 \
>    memcmpeq-avx2 \
>    memcmpeq-avx2-rtm \
>    memcmpeq-evex \
> @@ -164,7 +163,6 @@ sysdep_routines += \
>    wmemcmp-avx2-movbe-rtm \
>    wmemcmp-evex-movbe \
>    wmemcmp-sse2 \
> -  wmemcmp-sse4 \
>  # sysdep_routines
>  endif
>
> diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> index c6008a73ed..a8afcf81bb 100644
> --- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> +++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> @@ -96,8 +96,6 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
>                                && CPU_FEATURE_USABLE (BMI2)
>                                && CPU_FEATURE_USABLE (MOVBE)),
>                               __memcmp_evex_movbe)
> -             IFUNC_IMPL_ADD (array, i, memcmp, CPU_FEATURE_USABLE (SSE4_1),
> -                             __memcmp_sse4_1)
>               IFUNC_IMPL_ADD (array, i, memcmp, 1, __memcmp_sse2))
>
>  #ifdef SHARED
> @@ -809,8 +807,6 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
>                                && CPU_FEATURE_USABLE (BMI2)
>                                && CPU_FEATURE_USABLE (MOVBE)),
>                               __wmemcmp_evex_movbe)
> -             IFUNC_IMPL_ADD (array, i, wmemcmp, CPU_FEATURE_USABLE (SSE4_1),
> -                             __wmemcmp_sse4_1)
>               IFUNC_IMPL_ADD (array, i, wmemcmp, 1, __wmemcmp_sse2))
>
>    /* Support sysdeps/x86_64/multiarch/wmemset.c.  */
> diff --git a/sysdeps/x86_64/multiarch/ifunc-memcmp.h b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> index 44759a3ad5..c743970fe3 100644
> --- a/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> +++ b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> @@ -20,7 +20,6 @@
>  # include <init-arch.h>
>
>  extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
> -extern __typeof (REDIRECT_NAME) OPTIMIZE (sse4_1) attribute_hidden;
>  extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe) attribute_hidden;
>  extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe_rtm) attribute_hidden;
>  extern __typeof (REDIRECT_NAME) OPTIMIZE (evex_movbe) attribute_hidden;
> @@ -46,8 +45,5 @@ IFUNC_SELECTOR (void)
>         return OPTIMIZE (avx2_movbe);
>      }
>
> -  if (CPU_FEATURE_USABLE_P (cpu_features, SSE4_1))
> -    return OPTIMIZE (sse4_1);
> -
>    return OPTIMIZE (sse2);
>  }
> --
> 2.25.1
>

Thanks.
  
Noah Goldstein April 15, 2022, 5:29 p.m. UTC | #2
On Fri, Apr 15, 2022 at 12:21 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Thu, Apr 14, 2022 at 10:51 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > The code didn't actually use any SSE4 instructions. The new memcmp-sse2
> > implementation is also faster.
>
> Please mention that SSE4.1 ptest instruction was removed by

Fixed in v3.
>
> commit 2f9062d7171850451e6044ef78d91ff8c017b9c0
> Author: Noah Goldstein <goldstein.w.n@gmail.com>
> Date:   Wed Nov 10 16:18:56 2021 -0600
>
>     x86: Shrink memcmp-sse4.S code size
>
> > geometric_mean(N=20) of page cross cases SSE2 / SSE4: 0.905
> >
> > Note there are two regressions preferring SSE2 for Size = 1 and Size =
> > 65.
> >
> > Size = 1:
> > size, align0, align1, ret, New Time/Old Time
> >    1,      1,      1,   0,               1.2
> >    1,      1,      1,   1,             1.197
> >    1,      1,      1,  -1,               1.2
> >
> > This is intentional. Based on profiles of GCC11 and Python3, Size == 1
> > is significantly less hot than sizes [4, 8] (which are made hotter).
> >
> > Python3 Size = 1        -> 13.64%
> > Python3 Size = [4, 8]   -> 60.92%
> >
> > GCC11   Size = 1        ->  1.29%
> > GCC11   Size = [4, 8]   -> 33.86%
> >
> > size, align0, align1, ret, New Time/Old Time
> >    4,      4,      4,   0,             0.622
> >    4,      4,      4,   1,             0.797
> >    4,      4,      4,  -1,             0.805
> >    5,      5,      5,   0,             0.623
> >    5,      5,      5,   1,             0.777
> >    5,      5,      5,  -1,             0.802
> >    6,      6,      6,   0,             0.625
> >    6,      6,      6,   1,             0.813
> >    6,      6,      6,  -1,             0.788
> >    7,      7,      7,   0,             0.625
> >    7,      7,      7,   1,             0.799
> >    7,      7,      7,  -1,             0.795
> >    8,      8,      8,   0,             0.625
> >    8,      8,      8,   1,             0.848
> >    8,      8,      8,  -1,             0.914
> >    9,      9,      9,   0,             0.625
> >
> > Size = 65:
> > size, align0, align1, ret, New Time/Old Time
> >   65,      0,      0,   0,             1.103
> >   65,      0,      0,   1,             1.216
> >   65,      0,      0,  -1,             1.227
> >   65,     65,      0,   0,             1.091
> >   65,      0,     65,   1,              1.19
> >   65,     65,     65,  -1,             1.215
> >
> > This is because A) the checks in the range [65, 96] are now unrolled 2x
> > and B) smaller sizes <= 16 are now given a hotter path. By contrast,
> > the SSE4 version has a branch for Size = 80. The unrolled version gets
> > better performance for returns that need both comparisons.
> >
> > size, align0, align1, ret, New Time/Old Time
> >  128,      4,      8,   0,             0.858
> >  128,      4,      8,   1,             0.879
> >  128,      4,      8,  -1,             0.888
> >
> > As well, outside of microbenchmark environments, where branch behavior
> > is not fully predictable, the branch will have a real cost.
> > ---
> >  sysdeps/x86_64/multiarch/Makefile          | 2 --
> >  sysdeps/x86_64/multiarch/ifunc-impl-list.c | 4 ----
> >  sysdeps/x86_64/multiarch/ifunc-memcmp.h    | 4 ----
> >  3 files changed, 10 deletions(-)
> >
>
> Please also remove memcmp-sse4.S.

Whoops. Fixed in v2.

>
> > diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
> > index b573966966..0400ea332b 100644
> > --- a/sysdeps/x86_64/multiarch/Makefile
> > +++ b/sysdeps/x86_64/multiarch/Makefile
> > @@ -11,7 +11,6 @@ sysdep_routines += \
> >    memcmp-avx2-movbe-rtm \
> >    memcmp-evex-movbe \
> >    memcmp-sse2 \
> > -  memcmp-sse4 \
> >    memcmpeq-avx2 \
> >    memcmpeq-avx2-rtm \
> >    memcmpeq-evex \
> > @@ -164,7 +163,6 @@ sysdep_routines += \
> >    wmemcmp-avx2-movbe-rtm \
> >    wmemcmp-evex-movbe \
> >    wmemcmp-sse2 \
> > -  wmemcmp-sse4 \
> >  # sysdep_routines
> >  endif
> >
> > diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > index c6008a73ed..a8afcf81bb 100644
> > --- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > +++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
> > @@ -96,8 +96,6 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> >                                && CPU_FEATURE_USABLE (BMI2)
> >                                && CPU_FEATURE_USABLE (MOVBE)),
> >                               __memcmp_evex_movbe)
> > -             IFUNC_IMPL_ADD (array, i, memcmp, CPU_FEATURE_USABLE (SSE4_1),
> > -                             __memcmp_sse4_1)
> >               IFUNC_IMPL_ADD (array, i, memcmp, 1, __memcmp_sse2))
> >
> >  #ifdef SHARED
> > @@ -809,8 +807,6 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
> >                                && CPU_FEATURE_USABLE (BMI2)
> >                                && CPU_FEATURE_USABLE (MOVBE)),
> >                               __wmemcmp_evex_movbe)
> > -             IFUNC_IMPL_ADD (array, i, wmemcmp, CPU_FEATURE_USABLE (SSE4_1),
> > -                             __wmemcmp_sse4_1)
> >               IFUNC_IMPL_ADD (array, i, wmemcmp, 1, __wmemcmp_sse2))
> >
> >    /* Support sysdeps/x86_64/multiarch/wmemset.c.  */
> > diff --git a/sysdeps/x86_64/multiarch/ifunc-memcmp.h b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> > index 44759a3ad5..c743970fe3 100644
> > --- a/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> > +++ b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
> > @@ -20,7 +20,6 @@
> >  # include <init-arch.h>
> >
> >  extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
> > -extern __typeof (REDIRECT_NAME) OPTIMIZE (sse4_1) attribute_hidden;
> >  extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe) attribute_hidden;
> >  extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe_rtm) attribute_hidden;
> >  extern __typeof (REDIRECT_NAME) OPTIMIZE (evex_movbe) attribute_hidden;
> > @@ -46,8 +45,5 @@ IFUNC_SELECTOR (void)
> >         return OPTIMIZE (avx2_movbe);
> >      }
> >
> > -  if (CPU_FEATURE_USABLE_P (cpu_features, SSE4_1))
> > -    return OPTIMIZE (sse4_1);
> > -
> >    return OPTIMIZE (sse2);
> >  }
> > --
> > 2.25.1
> >
>
> Thanks.
>
> --
> H.J.
  

Patch

diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index b573966966..0400ea332b 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -11,7 +11,6 @@  sysdep_routines += \
   memcmp-avx2-movbe-rtm \
   memcmp-evex-movbe \
   memcmp-sse2 \
-  memcmp-sse4 \
   memcmpeq-avx2 \
   memcmpeq-avx2-rtm \
   memcmpeq-evex \
@@ -164,7 +163,6 @@  sysdep_routines += \
   wmemcmp-avx2-movbe-rtm \
   wmemcmp-evex-movbe \
   wmemcmp-sse2 \
-  wmemcmp-sse4 \
 # sysdep_routines
 endif
 
diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index c6008a73ed..a8afcf81bb 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -96,8 +96,6 @@  __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 			       && CPU_FEATURE_USABLE (BMI2)
 			       && CPU_FEATURE_USABLE (MOVBE)),
 			      __memcmp_evex_movbe)
-	      IFUNC_IMPL_ADD (array, i, memcmp, CPU_FEATURE_USABLE (SSE4_1),
-			      __memcmp_sse4_1)
 	      IFUNC_IMPL_ADD (array, i, memcmp, 1, __memcmp_sse2))
 
 #ifdef SHARED
@@ -809,8 +807,6 @@  __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 			       && CPU_FEATURE_USABLE (BMI2)
 			       && CPU_FEATURE_USABLE (MOVBE)),
 			      __wmemcmp_evex_movbe)
-	      IFUNC_IMPL_ADD (array, i, wmemcmp, CPU_FEATURE_USABLE (SSE4_1),
-			      __wmemcmp_sse4_1)
 	      IFUNC_IMPL_ADD (array, i, wmemcmp, 1, __wmemcmp_sse2))
 
   /* Support sysdeps/x86_64/multiarch/wmemset.c.  */
diff --git a/sysdeps/x86_64/multiarch/ifunc-memcmp.h b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
index 44759a3ad5..c743970fe3 100644
--- a/sysdeps/x86_64/multiarch/ifunc-memcmp.h
+++ b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
@@ -20,7 +20,6 @@ 
 # include <init-arch.h>
 
 extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
-extern __typeof (REDIRECT_NAME) OPTIMIZE (sse4_1) attribute_hidden;
 extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe) attribute_hidden;
 extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2_movbe_rtm) attribute_hidden;
 extern __typeof (REDIRECT_NAME) OPTIMIZE (evex_movbe) attribute_hidden;
@@ -46,8 +45,5 @@  IFUNC_SELECTOR (void)
 	return OPTIMIZE (avx2_movbe);
     }
 
-  if (CPU_FEATURE_USABLE_P (cpu_features, SSE4_1))
-    return OPTIMIZE (sse4_1);
-
   return OPTIMIZE (sse2);
 }
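With the SSE4_1 branch deleted from ifunc-memcmp.h, the selection order collapses to "AVX2/EVEX family, else baseline SSE2". A simplified, self-contained sketch of that dispatch logic follows; the bool predicates and impl_* names are stand-ins, and the real IFUNC_SELECTOR additionally checks BMI2, the RTM variants, and the AVX512VL/AVX512BW bits behind the EVEX path.

```c
#include <stdbool.h>

/* Stand-ins for the real ifunc targets (__memcmp_evex_movbe,
   __memcmp_avx2_movbe, __memcmp_sse2).  Distinct return values
   only exist so the functions are distinguishable.  */
static int impl_evex_movbe (void) { return 3; }
static int impl_avx2_movbe (void) { return 2; }
static int impl_sse2 (void)       { return 1; }

typedef int (*memcmp_impl) (void);

/* Sketch of IFUNC_SELECTOR after this patch: with the SSE4_1 check
   gone, any CPU that cannot take the AVX2 path falls straight
   through to SSE2, which is always usable on x86_64.  */
static memcmp_impl
select_memcmp (bool has_avx2, bool has_movbe, bool has_evex)
{
  if (has_avx2 && has_movbe)
    {
      if (has_evex)
        return impl_evex_movbe;
      return impl_avx2_movbe;
    }
  return impl_sse2;
}
```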