[v2] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.

Message ID 20230717201043.105528-1-goldstein.w.n@gmail.com
State New
Series [v2] x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.

Checks

Context                                            Check    Description
redhat-pt-bot/TryBot-apply_patch                   success  Patch applied to master at the time it was sent
linaro-tcwg-bot/tcwg_glibc_build--master-arm       success  Testing passed
redhat-pt-bot/TryBot-32bit                         success  Build for i686
linaro-tcwg-bot/tcwg_glibc_build--master-aarch64   warning  Patch failed to apply
linaro-tcwg-bot/tcwg_glibc_check--master-aarch64   warning  Patch failed to apply
linaro-tcwg-bot/tcwg_glibc_check--master-arm       warning  Patch failed to apply

Commit Message

Noah Goldstein July 17, 2023, 8:10 p.m. UTC
On some machines we end up with incomplete cache information. This can
make the new calculation of `sizeof(total-L3)/custom-divisor` end up
lower than intended (and lower than the prior value). So reintroduce
the old threshold as a lower bound to avoid potentially regressing on
machines where we don't have complete information to make the decision.
---
 sysdeps/x86/dl-cacheinfo.h | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)
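
For clarity, a standalone C sketch of the resulting threshold computation
(variable names mirror the patch; the wrapper function, the `divisor`
parameter, and the example cache sizes below are illustrative assumptions,
not taken from the patch):

#include <stdio.h>

/* Hypothetical standalone model of the logic in dl_init_cacheinfo:
   `shared` is the total L3 size, `shared_per_thread` the per-thread
   share of L3, and `divisor` stands in for
   `cachesize_non_temporal_divisor` (default 4).  */
static unsigned long int
compute_nt_threshold (unsigned long int shared,
                      unsigned long int shared_per_thread,
                      unsigned long int divisor)
{
  unsigned long int non_temporal_threshold = shared / divisor;

  /* Lower bound reintroduced by this patch: never drop below
     3/4 of the per-thread L3, the pre-existing threshold.  */
  unsigned long int lowbound = shared_per_thread * 3 / 4;
  if (non_temporal_threshold < lowbound)
    non_temporal_threshold = lowbound;

  return non_temporal_threshold;
}

int
main (void)
{
  /* Illustrative numbers only: with plausible cache info (32 MiB L3,
     4 MiB per thread) the divisor-based value wins; if `shared` is
     underreported (here 8 MiB), the 3/4 * per-thread bound takes over.  */
  printf ("complete info:   %lu\n",
          compute_nt_threshold (32UL << 20, 4UL << 20, 4));
  printf ("incomplete info: %lu\n",
          compute_nt_threshold (8UL << 20, 4UL << 20, 4));
  return 0;
}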
  

Comments

Noah Goldstein July 17, 2023, 8:11 p.m. UTC | #1
On Mon, Jul 17, 2023 at 3:10 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On some machines we end up with incomplete cache information. This can
> make the new calculation of `sizeof(total-L3)/custom-divisor` end up
> lower than intended (and lower than the prior value). So reintroduce
> the old threshold as a lower bound to avoid potentially regressing on
> machines where we don't have complete information to make the decision.
> ---
>  sysdeps/x86/dl-cacheinfo.h | 19 ++++++++++++++-----
>  1 file changed, 14 insertions(+), 5 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index c98fa57a7b..cd4d0351ae 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -614,8 +614,8 @@ get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, u
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
> -      if (threads_l2 > 0)
> -       shared_per_thread += core / threads_l2;
> +      long int core_per_thread = threads_l2 > 0 ? (core / threads_l2) : core;
> +      shared_per_thread += core_per_thread;
>        shared += core;
>      }
>
> @@ -745,8 +745,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>
>    /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
>       of the chip's cache (depending on `cachesize_non_temporal_divisor` which
> -     is microarch specific. The default is 1/4). For most Intel and AMD
> -     processors with an initial release date between 2017 and 2023, a thread's
> +     is microarch specific. The default is 1/4). For most Intel processors
> +     with an initial release date between 2017 and 2023, a thread's
>       typical share of the cache is from 18-64MB. Using a reasonable size
>       fraction of L3 is meant to estimate the point where non-temporal stores
>       begin out-competing REP MOVSB. As well the point where the fact that
> @@ -757,12 +757,21 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>       the maximum thrashing capped at 1/associativity. */
>    unsigned long int non_temporal_threshold
>        = shared / cachesize_non_temporal_divisor;
> +
> +  /* If the computed non_temporal_threshold <= 3/4 * per-thread L3, we most
> +     likely have incorrect/incomplete cache info in which case, default to
> +     3/4 * per-thread L3 to avoid regressions.  */
> +  unsigned long int non_temporal_threshold_lowbound
> +      = shared_per_thread * 3 / 4;
> +  if (non_temporal_threshold < non_temporal_threshold_lowbound)
> +    non_temporal_threshold = non_temporal_threshold_lowbound;
> +
>    /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
>       a higher risk of actually thrashing the cache as they don't have a HW LRU
>       hint. As well, their performance in highly parallel situations is
>       noticeably worse.  */
>    if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> -    non_temporal_threshold = shared_per_thread * 3 / 4;
> +    non_temporal_threshold = non_temporal_threshold_lowbound;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>
@Sajan Karumanchi, is this okay?
  

Patch

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index c98fa57a7b..cd4d0351ae 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -614,8 +614,8 @@  get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, u
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
-      if (threads_l2 > 0)
-	shared_per_thread += core / threads_l2;
+      long int core_per_thread = threads_l2 > 0 ? (core / threads_l2) : core;
+      shared_per_thread += core_per_thread;
       shared += core;
     }
 
@@ -745,8 +745,8 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
 
   /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
      of the chip's cache (depending on `cachesize_non_temporal_divisor` which
-     is microarch specific. The default is 1/4). For most Intel and AMD
-     processors with an initial release date between 2017 and 2023, a thread's
+     is microarch specific. The default is 1/4). For most Intel processors
+     with an initial release date between 2017 and 2023, a thread's
      typical share of the cache is from 18-64MB. Using a reasonable size
      fraction of L3 is meant to estimate the point where non-temporal stores
      begin out-competing REP MOVSB. As well the point where the fact that
@@ -757,12 +757,21 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
      the maximum thrashing capped at 1/associativity. */
   unsigned long int non_temporal_threshold
       = shared / cachesize_non_temporal_divisor;
+
+  /* If the computed non_temporal_threshold <= 3/4 * per-thread L3, we most
+     likely have incorrect/incomplete cache info in which case, default to
+     3/4 * per-thread L3 to avoid regressions.  */
+  unsigned long int non_temporal_threshold_lowbound
+      = shared_per_thread * 3 / 4;
+  if (non_temporal_threshold < non_temporal_threshold_lowbound)
+    non_temporal_threshold = non_temporal_threshold_lowbound;
+
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, their performance in highly parallel situations is
      noticeably worse.  */
   if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
-    non_temporal_threshold = shared_per_thread * 3 / 4;
+    non_temporal_threshold = non_temporal_threshold_lowbound;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
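
For reference, a minimal standalone rendering of the behaviour of the first
hunk (the wrapper function, its pointer parameters, and the include are
assumptions for illustration; the body mirrors the patched code in
get_common_cache_info):

#include <stdbool.h>

void
account_non_inclusive_cache (long int *shared, long int *shared_per_thread,
                             long int core, unsigned int threads_l2,
                             bool inclusive_cache)
{
  /* For non-inclusive L2/L3 caches, the per-thread share now always
     gains an L2 ("core") contribution: core / threads_l2 when the L2
     thread count is known, and the whole of `core` when it is not
     (threads_l2 == 0), instead of being skipped as before the patch.  */
  if (!inclusive_cache)
    {
      long int core_per_thread = threads_l2 > 0 ? (core / threads_l2) : core;
      *shared_per_thread += core_per_thread;
      *shared += core;
    }
}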