x86: Adding an upper bound for Enhanced REP MOVSB.

Message ID 20210111104301.205094-1-sajan.karumanchi@amd.com
State: Superseded
Series: x86: Adding an upper bound for Enhanced REP MOVSB.

Commit Message

Sajan Karumanchi via Libc-alpha Jan. 11, 2021, 10:43 a.m. UTC
  From: Sajan Karumanchi <sajan.karumanchi@amd.com>

While optimizing memcpy for AMD machines, we found that vector move
operations outperform Enhanced REP MOVSB for data transfers above the
L2 cache size on the Zen3 architecture.
To handle this case, we add an upper bound parameter for Enhanced REP
MOVSB: '__x86_max_rep_movsb_threshold'.
Based on the large-bench results, we set this parameter to the L2 cache
size on AMD machines; it applies from Zen3 onwards, the first AMD
architecture to support the ERMS feature.
For architectures other than AMD, it is set to the computed value of
the non-temporal threshold parameter.

Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
---
 sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
 .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
 2 files changed, 15 insertions(+), 1 deletion(-)
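
In terms of copy sizes, the change gives memmove/memcpy three regimes. A
minimal C sketch of the dispatch implied by the thresholds follows; it is
an illustration only, not glibc source. The enum, function, and default
values are hypothetical, and the exact boundary comparisons in the
assembly may differ:

#include <stddef.h>

enum copy_strategy { VECTOR_LOOP, REP_MOVSB, LARGE_VECTOR_OR_NT };

static long int rep_movsb_threshold = 2048;           /* lower bound */
static long int max_rep_movsb_threshold = 512 * 1024; /* new upper bound */

static enum copy_strategy
pick_strategy (size_t n)
{
  if (n < (size_t) rep_movsb_threshold)
    return VECTOR_LOOP;        /* small: unaligned vector loads/stores */
  if (n < (size_t) max_rep_movsb_threshold)
    return REP_MOVSB;          /* mid-size: Enhanced REP MOVSB */
  return LARGE_VECTOR_OR_NT;   /* large: 8x-vector copy, possibly with
                                  non-temporal stores */
}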
  

Comments

H.J. Lu Jan. 11, 2021, 5:27 p.m. UTC | #1
On Mon, Jan 11, 2021 at 2:43 AM <sajan.karumanchi@amd.com> wrote:
>
> From: Sajan Karumanchi <sajan.karumanchi@amd.com>
>
> While optimizing memcpy for AMD machines, we found that vector move
> operations outperform Enhanced REP MOVSB for data transfers above the
> L2 cache size on the Zen3 architecture.
> To handle this case, we add an upper bound parameter for Enhanced REP
> MOVSB: '__x86_max_rep_movsb_threshold'.
> Based on the large-bench results, we set this parameter to the L2 cache
> size on AMD machines; it applies from Zen3 onwards, the first AMD
> architecture to support the ERMS feature.
> For architectures other than AMD, it is set to the computed value of
> the non-temporal threshold parameter.
>
> Reviewed-by: Premachandra Mallappa <premachandra.mallappa@amd.com>
> ---
>  sysdeps/x86/cacheinfo.h                            | 14 ++++++++++++++
>  .../x86_64/multiarch/memmove-vec-unaligned-erms.S  |  2 +-
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index 00d2d8a52a..00c3a823f0 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -45,6 +45,9 @@ long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>  /* Threshold to use Enhanced REP STOSB.  */
>  long int __x86_rep_stosb_threshold attribute_hidden = 2048;
>
> +/* Threshold to stop using Enhanced REP MOVSB.  */
> +long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;

The default should be the same as __x86_shared_non_temporal_threshold.

>  static void
>  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>                        long int core)
> @@ -351,6 +354,11 @@ init_cacheinfo (void)
>               /* Account for exclusive L2 and L3 caches.  */
>               shared += core;
>              }
> +         /* The ERMS feature is implemented from the Zen3 architecture, and
> +            it performs poorly for data above the L2 cache size.  Hence, add
> +            an upper bound threshold parameter to limit the usage of Enhanced
> +            REP MOVSB operations and set its value to the L2 cache size.  */
> +         __x86_max_rep_movsb_threshold = core;
>        }
>      }
>
> @@ -423,6 +431,12 @@ init_cacheinfo (void)
>    else
>      __x86_rep_movsb_threshold = rep_movsb_threshold;
>
> +  /* Set the upper bound of ERMS to the computed value of the
> +     non-temporal threshold for architectures other than AMD.  */
> +  if (cpu_features->basic.kind != arch_kind_amd)
> +    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
> +
> +
>  # if HAVE_TUNABLES
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>  # endif
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index 0980c95378..5682e7a9fd 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -240,7 +240,7 @@ L(return):
>         ret
>
>  L(movsb):
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       cmp     __x86_max_rep_movsb_threshold(%rip), %RDX_LP

Please add some comments here and update the algorithm at the
beginning of the function.

>         jae     L(more_8x_vec)
>         cmpq    %rsi, %rdi
>         jb      1f
> --
> 2.25.1
>
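
For reference, the "algorithm at the beginning of the function" is the
block comment at the top of memmove-vec-unaligned-erms.S describing the
size-based strategy. One possible shape for the requested update is
sketched below; it is illustrative, not the wording that was eventually
committed:

/* memmove/memcpy/mempcpy is implemented as (sketch):
   1. If size is 8 * VEC_SIZE or less, copy with overlapping unaligned
      vector loads and stores.
   2. If size < __x86_rep_movsb_threshold, copy forward or backward
      8 * VEC_SIZE at a time with unaligned vector loads and stores.
   3. If __x86_rep_movsb_threshold <= size
      < __x86_max_rep_movsb_threshold, copy with REP MOVSB.
   4. Otherwise, copy 8 * VEC_SIZE at a time; at or above
      __x86_shared_non_temporal_threshold, use non-temporal stores.  */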
  
Sajan Karumanchi via Libc-alpha Jan. 12, 2021, 6:56 p.m. UTC | #2

Hi H.J. Lu,

I have pushed the patch with updated comments and the algorithm description. Since __x86_shared_non_temporal_threshold is a variable (not a constant) and is computed during the initialization phase, I cannot use it as the default value for '__x86_max_rep_movsb_threshold'.

Thanks & Regards,
Sajan K.
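
The constraint Sajan describes can be seen in miniature: a C static
initializer must be a compile-time constant, so the non-AMD fallback has
to happen at run time inside init_cacheinfo, after the non-temporal
threshold has been computed. A condensed sketch (placeholder names and
values, not glibc source):

/* Constant defaults at load time; the real values are computed from the
   detected cache hierarchy during startup.  */
long int shared_non_temporal_threshold = 1024 * 1024;
long int max_rep_movsb_threshold = 512 * 1024;

/* Invalid C -- a static initializer cannot read another object:
   long int max_rep_movsb_threshold = shared_non_temporal_threshold;  */

static void
init_cacheinfo_sketch (int is_amd_zen3_or_later, long int l2_size)
{
  /* ... cache detection updates shared_non_temporal_threshold ...  */
  if (is_amd_zen3_or_later)
    max_rep_movsb_threshold = l2_size;   /* cap REP MOVSB at L2 size */
  else
    max_rep_movsb_threshold = shared_non_temporal_threshold;
}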

  

Patch

diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index 00d2d8a52a..00c3a823f0 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -45,6 +45,9 @@  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 /* Threshold to use Enhanced REP STOSB.  */
 long int __x86_rep_stosb_threshold attribute_hidden = 2048;
 
+/* Threshold to stop using Enhanced REP MOVSB.  */
+long int __x86_max_rep_movsb_threshold attribute_hidden = 512 * 1024;
+
 static void
 get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
 		       long int core)
@@ -351,6 +354,11 @@  init_cacheinfo (void)
 	      /* Account for exclusive L2 and L3 caches.  */
 	      shared += core;
             }
+	  /* The ERMS feature is implemented from the Zen3 architecture, and
+	     it performs poorly for data above the L2 cache size.  Hence, add
+	     an upper bound threshold parameter to limit the usage of Enhanced
+	     REP MOVSB operations and set its value to the L2 cache size.  */
+	  __x86_max_rep_movsb_threshold = core;
       }
     }
 
@@ -423,6 +431,12 @@  init_cacheinfo (void)
   else
     __x86_rep_movsb_threshold = rep_movsb_threshold;
 
+  /* Set the upper bound of ERMS to the computed value of the
+     non-temporal threshold for architectures other than AMD.  */
+  if (cpu_features->basic.kind != arch_kind_amd)
+    __x86_max_rep_movsb_threshold = __x86_shared_non_temporal_threshold;
+
+
 # if HAVE_TUNABLES
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
 # endif
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index 0980c95378..5682e7a9fd 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -240,7 +240,7 @@  L(return):
 	ret
 
 L(movsb):
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	cmp	__x86_max_rep_movsb_threshold(%rip), %RDX_LP
 	jae	L(more_8x_vec)
 	cmpq	%rsi, %rdi
 	jb	1f
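
As a rough stand-in for the "large-bench" measurements cited in the
commit message, the sketch below times memcpy at sizes straddling an
assumed 512 KiB L2 cache. It is purely illustrative: the buffer sizes,
iteration count, and L2 size are assumptions, and results will vary by
machine and glibc build.

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Time ITERS memcpy calls of SIZE bytes; return elapsed seconds.  */
static double
time_memcpy (char *dst, const char *src, size_t size, int iters)
{
  struct timespec t0, t1;
  clock_gettime (CLOCK_MONOTONIC, &t0);
  for (int i = 0; i < iters; i++)
    memcpy (dst, src, size);
  clock_gettime (CLOCK_MONOTONIC, &t1);
  return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int
main (void)
{
  /* Sizes below, at, and well above an assumed 512 KiB L2.  */
  size_t sizes[] = { 128 << 10, 512 << 10, 2 << 20, 8 << 20 };
  size_t max = 8 << 20;
  char *src = malloc (max), *dst = malloc (max);
  if (src == NULL || dst == NULL)
    return 1;
  memset (src, 1, max);
  for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
    printf ("%4zu KiB: %.4f s / 100 copies\n", sizes[i] >> 10,
            time_memcpy (dst, src, sizes[i], 100));
  free (src);
  free (dst);
  return 0;
}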