[v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`

Message ID 20230424223045.2066606-1-goldstein.w.n@gmail.com
State Superseded
Headers
Series [v2] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2` |

Checks

Context Check Description
dj/TryBot-apply_patch success Patch applied to master at the time it was sent
dj/TryBot-32bit success Build for i686

Commit Message

Noah Goldstein April 24, 2023, 10:30 p.m. UTC
  The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.

The original value (specifically dividing by `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where `rep movsb` is multiple times faster.
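
As a rough illustration (a hypothetical machine with a 32 MiB L3 shared
by 16 cores; the numbers are illustrative, not measured from any
particular CPU), the two formulas give:

    #include <stdio.h>

    int
    main (void)
    {
      unsigned long l3_size = 32UL * 1024 * 1024;   /* sizeof_L3 */
      unsigned long ncores_per_socket = 16;

      /* Old default: 3/4 of one core's share of L3 -> 1.5 MiB.  */
      unsigned long old_threshold = l3_size * 3 / 4 / ncores_per_socket;
      /* New default: half of the whole L3 -> 16 MiB.  */
      unsigned long new_threshold = l3_size / 2;

      printf ("old: %lu, new: %lu\n", old_threshold, new_threshold);
      return 0;
    }

i.e. on such a machine the size at which memcpy switches to non-temporal
stores grows by roughly an order of magnitude.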

Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise would be. As well, modern machines
are able to detect streaming patterns (especially if `rep movsb` is
used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores against standard cacheable
stores. A better comparison (linked below) is against `rep movsb`, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% even for copies
over 100MB (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, `rep movsb` is ~2x
faster up to `sizeof_L3`.
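
For reference, this kind of comparison boils down to timing loops of
roughly the following shape (a minimal sketch, not the code from the
linked repository; the real benchmarks presumably also pin threads, vary
alignment, and use explicit `rep movsb` / non-temporal-store kernels):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Sketch: report GB/s for copying SIZE bytes ITERS times.  */
    static double
    copy_gbps (void *dst, const void *src, size_t size, int iters)
    {
      struct timespec t0, t1;
      clock_gettime (CLOCK_MONOTONIC, &t0);
      for (int i = 0; i < iters; i++)
        memcpy (dst, src, size);
      clock_gettime (CLOCK_MONOTONIC, &t1);
      double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
      return (double) size * iters / secs / 1e9;
    }

    int
    main (void)
    {
      size_t size = 8UL << 20;  /* 8 MiB: above the old threshold on many parts.  */
      void *src = calloc (1, size), *dst = calloc (1, size);
      if (src == NULL || dst == NULL)
        return 1;
      printf ("%.2f GB/s\n", copy_gbps (dst, src, size, 100));
      free (src);
      free (dst);
      return 0;
    }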

Because there are still valid concerns about the performance of large
memcpys using cacheable stores (both their direct performance and their
effect on the rest of the system), this patch also introduces a new
tunable, `__x86_shared_non_temporal_threshold_no_erms`, that continues
to use the old calculation and is used when no ERMS memcpy is
supported by the target.
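
In other words, the intended selection is roughly the following (a
simplified sketch with assumed helper names, not the actual dispatch,
which lives in the memmove assembly and also checks
`__x86_rep_movsb_threshold` first):

    #include <stdbool.h>
    #include <stddef.h>

    /* Sketch only: which threshold gates the switch to non-temporal
       stores.  The __x86_* names are the real glibc variables; the rest
       is illustrative.  */
    extern long int __x86_shared_non_temporal_threshold;          /* ~ sizeof_L3 / 2 */
    extern long int __x86_shared_non_temporal_threshold_no_erms;  /* old per-thread calc */

    static bool
    use_non_temporal_stores (size_t size, bool have_erms_memcpy)
    {
      long int threshold = have_erms_memcpy
                             ? __x86_shared_non_temporal_threshold
                             : __x86_shared_non_temporal_threshold_no_erms;
      return size >= (size_t) threshold;
    }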

Benchmarks comparing non-temporal stores, rep movsb, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available as a PDF in the GitHub repository):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 manual/tunables.texi                          | 16 +++-
 sysdeps/x86/cacheinfo.h                       |  8 +-
 sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
 sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
 sysdeps/x86/dl-tunables.list                  |  3 +
 sysdeps/x86/include/cpu-features.h            |  4 +-
 .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
 7 files changed, 98 insertions(+), 32 deletions(-)
  

Comments

H.J. Lu April 24, 2023, 10:48 p.m. UTC | #1
On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2'.
>
> The original value (specifically dividing by `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however, leads to exceedingly low
> non-temporal thresholds and to using non-temporal stores in
> cases where `rep movsb` is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to main memory,
> so using them at a size much smaller than L3 can place soon-to-be-accessed
> data much further away than it otherwise would be. As well, modern machines
> are able to detect streaming patterns (especially if `rep movsb` is
> used) and provide LRU hints to the memory subsystem. This in effect
> caps the total amount of eviction at 1/cache_associativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that led to this small threshold
> were done comparing non-temporal stores against standard cacheable
> stores. A better comparison (linked below) is against `rep movsb`, which,
> on the measured systems, is nearly 2x faster than non-temporal stores
> at the low end of the previous threshold, and within 10% even for copies
> over 100MB (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, `rep movsb` is ~2x
> faster up to `sizeof_L3`.
>
> Because there are still valid concerns about the performance of large
> memcpys using cacheable stores (both their direct performance and their
> effect on the rest of the system), this patch also introduces a new
> tunable, `__x86_shared_non_temporal_threshold_no_erms`, that continues
> to use the old calculation and is used when no ERMS memcpy is
> supported by the target.
>
> Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> stores were done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  manual/tunables.texi                          | 16 +++-
>  sysdeps/x86/cacheinfo.h                       |  8 +-
>  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
>  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
>  sysdeps/x86/dl-tunables.list                  |  3 +
>  sysdeps/x86/include/cpu-features.h            |  4 +-
>  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
>  7 files changed, 98 insertions(+), 32 deletions(-)
>
> diff --git a/manual/tunables.texi b/manual/tunables.texi
> index 130f94b2bc..8320e724f0 100644
> --- a/manual/tunables.texi
> +++ b/manual/tunables.texi
> @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
>  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
>  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)

We don't need this.   We can use

if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))

to check for ERMS processors.
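
That is, the default could be picked once in dl_init_cacheinfo instead of
exposing a second tunable, something like (sketch only, reusing the
variables from this patch):

    /* Sketch: choose the non-temporal default based on ERMS rather than
       adding x86_non_temporal_threshold_no_erms.  */
    unsigned long int non_temporal_threshold;
    if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
      non_temporal_threshold = shared / 2;
    else
      non_temporal_threshold = shared_per_thread * 3 / 4;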

>  glibc.cpu.x86_shstk:
>  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
>  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
>  @cindex shared_cache_size tunables
>  @cindex tunables, shared_cache_size
>  @cindex non_temporal_threshold tunables
> -@cindex tunables, non_temporal_threshold
> +@cindex non_temporal_threshold tunables_no_erms
> +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
>
>  @deftp {Tunable namespace} glibc.cpu
>  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> @@ -559,6 +561,18 @@ like memmove and memcpy.
>  This tunable is specific to i386 and x86-64.
>  @end deftp
>
> +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
> +the above, but is used specifically when the ERMS feature is not
> +available. ERMS function are often implemented with optimizations for
> +large streaming workloads. This often makes it a better choice than
> +non-temporal stores for a wider-range of values. When ERMS is not
> +available, however, non-temporal stores become preferable at a much
> +lower threshold.
> +
> +This tunable is specific to i386 and x86-64.
> +@end deftp
> +
>  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
>  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
>  set threshold in bytes to start using "rep movsb".  The value must be
> diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> index ec1bc142c4..1083bd6018 100644
> --- a/sysdeps/x86/cacheinfo.h
> +++ b/sysdeps/x86/cacheinfo.h
> @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
>  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
>  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
>
> -/* Threshold to use non temporal store.  */
> +/* Threshold to use non temporal store if ERMS is available.  */
>  long int __x86_shared_non_temporal_threshold attribute_hidden;
>
> +/* Threshold to use non temporal store if ERMS is not available.  */
> +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> +
>  /* Threshold to use Enhanced REP MOVSB.  */
>  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
>
> @@ -77,6 +80,9 @@ init_cacheinfo (void)
>    __x86_shared_non_temporal_threshold
>      = cpu_features->non_temporal_threshold;
>
> +  __x86_shared_non_temporal_threshold_no_erms
> +      = cpu_features->non_temporal_threshold_no_erms;
> +
>    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
>    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
>    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..94d5c6183a 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
>  }
>
>  static void
> -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
>                  long int core)
>  {
>    unsigned int eax;
> @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>    unsigned int family = cpu_features->basic.family;
>    unsigned int model = cpu_features->basic.model;
>    long int shared = *shared_ptr;
> +  long int shared_per_thread = *shared_per_thread_ptr;
>    unsigned int threads = *threads_ptr;
>    bool inclusive_cache = true;
>    bool support_count_mask = true;
> @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>        /* Try L2 otherwise.  */
>        level  = 2;
>        shared = core;
> +      shared_per_thread = core;
>        threads_l2 = 0;
>        threads_l3 = -1;
>      }
> @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
>          }
>        else
>          {
> -intel_bug_no_cache_info:
> -          /* Assume that all logical threads share the highest cache
> -             level.  */
> -          threads
> -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> -              & 0xff);
> -        }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
> +       intel_bug_no_cache_info:
> +         /* Assume that all logical threads share the highest cache
> +            level.  */
> +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> +                    & 0xff);
> +
> +         /* Get per-thread size of highest level cache.  */
> +         if (shared_per_thread > 0 && threads > 0)
> +           shared_per_thread /= threads;
> +       }
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
>      {
>        if (threads_l2 > 0)
> -        core /= threads_l2;
> +       shared_per_thread += core / threads_l2;
>        shared += core;
>      }
>
>    *shared_ptr = shared;
> +  *shared_per_thread_ptr = shared_per_thread;
>    *threads_ptr = threads;
>  }
>
> @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    /* Find out what brand of processor.  */
>    long int data = -1;
>    long int shared = -1;
> +  long int shared_per_thread = -1;
>    long int core = -1;
>    unsigned int threads = 0;
>    unsigned long int level1_icache_size = -1;
> @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
>        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
>        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> +      shared_per_thread = shared;
>
>        level1_icache_size
>         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level4_cache_size
>         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
>      {
>        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
>        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
>
> -      get_common_cache_info (&shared, &threads, core);
> +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
>      }
>    else if (cpu_features->basic.kind == arch_kind_amd)
>      {
>        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
>        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
>        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> +      shared_per_thread = shared;
>
>        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
>        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        if (shared <= 0)
>          /* No shared L3 cache.  All we have is the L2 cache.  */
>         shared = core;
> +
> +      if (shared_per_thread <= 0)
> +       shared_per_thread = shared;
>      }
>
>    cpu_features->level1_icache_size = level1_icache_size;
> @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of size
> +     of chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     other methods. As well the point where the fact that non-temporal
> +     stores are forced back to disk would already occured to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the the maximum thrashing
> +     capped at 1/assosiativity. */
> +  unsigned long int non_temporal_threshold = shared / 2;
> +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> +     hint. As well, there performance in highly parallel situations is
> +     noticeably worse.  */
> +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    else if (non_temporal_threshold > maximum_non_temporal_threshold)
>      non_temporal_threshold = maximum_non_temporal_threshold;
>
> +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> +
>    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
>    unsigned int minimum_rep_movsb_threshold;
>    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>        && tunable_size <= maximum_non_temporal_threshold)
>      non_temporal_threshold = tunable_size;
>
> +  tunable_size
> +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> +  if (tunable_size > minimum_non_temporal_threshold
> +      && tunable_size <= maximum_non_temporal_threshold)
> +    non_temporal_threshold_no_erms = tunable_size;
> +
>    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
>    if (tunable_size > minimum_rep_movsb_threshold)
>      rep_movsb_threshold = tunable_size;
> @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
>                            minimum_non_temporal_threshold,
>                            maximum_non_temporal_threshold);
> +  TUNABLE_SET_WITH_BOUNDS (
> +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
>                            minimum_rep_movsb_threshold, SIZE_MAX);
>    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->data_cache_size = data;
>    cpu_features->shared_cache_size = shared;
>    cpu_features->non_temporal_threshold = non_temporal_threshold;
> +  cpu_features->non_temporal_threshold_no_erms
> +      = non_temporal_threshold_no_erms;
>    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
>    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
>    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> index a1578e4665..5c09472a10 100644
> --- a/sysdeps/x86/dl-diagnostics-cpu.c
> +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
>                              cpu_features->shared_cache_size);
>    print_cpu_features_value ("non_temporal_threshold",
>                              cpu_features->non_temporal_threshold);
> +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> +                           cpu_features->non_temporal_threshold_no_erms);
>    print_cpu_features_value ("rep_movsb_threshold",
>                              cpu_features->rep_movsb_threshold);
>    print_cpu_features_value ("rep_movsb_stop_threshold",
> diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> index feb7004036..aac6341716 100644
> --- a/sysdeps/x86/dl-tunables.list
> +++ b/sysdeps/x86/dl-tunables.list
> @@ -30,6 +30,9 @@ glibc {
>      x86_non_temporal_threshold {
>        type: SIZE_T
>      }
> +    x86_non_temporal_threshold_no_erms {
> +      type: SIZE_T
> +    }
>      x86_rep_movsb_threshold {
>        type: SIZE_T
>        # Since there is overhead to set up REP MOVSB operation, REP
> diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> index 40b8129d6a..df6c561eac 100644
> --- a/sysdeps/x86/include/cpu-features.h
> +++ b/sysdeps/x86/include/cpu-features.h
> @@ -913,8 +913,10 @@ struct cpu_features
>    /* Shared cache size for use in memory and string routines, typically
>       L2 or L3 size.  */
>    unsigned long int shared_cache_size;
> -  /* Threshold to use non temporal store.  */
> +  /* Threshold to use non temporal store if ERMS is available.  */
>    unsigned long int non_temporal_threshold;
> +  /* Threshold to use non temporal store if ERMS is not available.  */
> +  unsigned long int non_temporal_threshold_no_erms;
>    /* Threshold to use "rep movsb".  */
>    unsigned long int rep_movsb_threshold;
>    /* Threshold to stop using "rep movsb".  */
> diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> index d1b92785b0..856c3daf3b 100644
> --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> @@ -424,8 +424,16 @@ L(more_8x_vec):
>         jb      L(more_8x_vec_backward_check_nop)
>         /* Check if non-temporal move candidate.  */
>  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> -       /* Check non-temporal store threshold.  */
> -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> +       /* Check non-temporal store threshold if ERMS is not available.
> +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> +          If we went to L(movsb), then we enter at either the forward loop
> +          directly or go to the backward loop.
> +
> +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> +          NEVER be used in a control flow that could come from
> +          L(movsb_more_2x_vec) without checking checkout
> +          `__x86_rep_movsb_threshold` first.  */
> +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
>         ja      L(large_memcpy_2x)
>  #endif
>         /* To reach this point there cannot be overlap and dst > src. So
> --
> 2.34.1
>
  
Noah Goldstein April 25, 2023, 2:05 a.m. UTC | #2
On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 2'.
> >
> > The original value (specifically dividing by `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > non-temporal thresholds and to using non-temporal stores in
> > cases where `rep movsb` is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to main memory,
> > so using them at a size much smaller than L3 can place soon-to-be-accessed
> > data much further away than it otherwise would be. As well, modern machines
> > are able to detect streaming patterns (especially if `rep movsb` is
> > used) and provide LRU hints to the memory subsystem. This in effect
> > caps the total amount of eviction at 1/cache_associativity, far below
> > meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that led to this small threshold
> > were done comparing non-temporal stores against standard cacheable
> > stores. A better comparison (linked below) is against `rep movsb`, which,
> > on the measured systems, is nearly 2x faster than non-temporal stores
> > at the low end of the previous threshold, and within 10% even for copies
> > over 100MB (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > faster up to `sizeof_L3`.
> >
> > Because there are still valid concerns about the performance of large
> > memcpys using cacheable stores (both their direct performance and their
> > effect on the rest of the system), this patch also introduces a new
> > tunable, `__x86_shared_non_temporal_threshold_no_erms`, that continues
> > to use the old calculation and is used when no ERMS memcpy is
> > supported by the target.
> >
> > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > stores were done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  manual/tunables.texi                          | 16 +++-
> >  sysdeps/x86/cacheinfo.h                       |  8 +-
> >  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
> >  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
> >  sysdeps/x86/dl-tunables.list                  |  3 +
> >  sysdeps/x86/include/cpu-features.h            |  4 +-
> >  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
> >  7 files changed, 98 insertions(+), 32 deletions(-)
> >
> > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > index 130f94b2bc..8320e724f0 100644
> > --- a/manual/tunables.texi
> > +++ b/manual/tunables.texi
> > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
>
> We don't need this.   We can use
>
> if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
>
> to check for ERMS processors.
>

Ah makes sense. Does that work for FSRM as well?
> >  glibc.cpu.x86_shstk:
> >  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
> >  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> > @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
> >  @cindex shared_cache_size tunables
> >  @cindex tunables, shared_cache_size
> >  @cindex non_temporal_threshold tunables
> > -@cindex tunables, non_temporal_threshold
> > +@cindex non_temporal_threshold tunables_no_erms
> > +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
> >
> >  @deftp {Tunable namespace} glibc.cpu
> >  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> > @@ -559,6 +561,18 @@ like memmove and memcpy.
> >  This tunable is specific to i386 and x86-64.
> >  @end deftp
> >
> > +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> > +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
> > +the above, but is used specifically when the ERMS feature is not
> > +available. ERMS function are often implemented with optimizations for
> > +large streaming workloads. This often makes it a better choice than
> > +non-temporal stores for a wider-range of values. When ERMS is not
> > +available, however, non-temporal stores become preferable at a much
> > +lower threshold.
> > +
> > +This tunable is specific to i386 and x86-64.
> > +@end deftp
> > +
> >  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
> >  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
> >  set threshold in bytes to start using "rep movsb".  The value must be
> > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > index ec1bc142c4..1083bd6018 100644
> > --- a/sysdeps/x86/cacheinfo.h
> > +++ b/sysdeps/x86/cacheinfo.h
> > @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
> >  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
> >  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
> >
> > -/* Threshold to use non temporal store.  */
> > +/* Threshold to use non temporal store if ERMS is available.  */
> >  long int __x86_shared_non_temporal_threshold attribute_hidden;
> >
> > +/* Threshold to use non temporal store if ERMS is not available.  */
> > +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> > +
> >  /* Threshold to use Enhanced REP MOVSB.  */
> >  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> >
> > @@ -77,6 +80,9 @@ init_cacheinfo (void)
> >    __x86_shared_non_temporal_threshold
> >      = cpu_features->non_temporal_threshold;
> >
> > +  __x86_shared_non_temporal_threshold_no_erms
> > +      = cpu_features->non_temporal_threshold_no_erms;
> > +
> >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> >    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..94d5c6183a 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> >  }
> >
> >  static void
> > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> >                  long int core)
> >  {
> >    unsigned int eax;
> > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >    unsigned int family = cpu_features->basic.family;
> >    unsigned int model = cpu_features->basic.model;
> >    long int shared = *shared_ptr;
> > +  long int shared_per_thread = *shared_per_thread_ptr;
> >    unsigned int threads = *threads_ptr;
> >    bool inclusive_cache = true;
> >    bool support_count_mask = true;
> > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >        /* Try L2 otherwise.  */
> >        level  = 2;
> >        shared = core;
> > +      shared_per_thread = core;
> >        threads_l2 = 0;
> >        threads_l3 = -1;
> >      }
> > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> >          }
> >        else
> >          {
> > -intel_bug_no_cache_info:
> > -          /* Assume that all logical threads share the highest cache
> > -             level.  */
> > -          threads
> > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > -              & 0xff);
> > -        }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> > +       intel_bug_no_cache_info:
> > +         /* Assume that all logical threads share the highest cache
> > +            level.  */
> > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > +                    & 0xff);
> > +
> > +         /* Get per-thread size of highest level cache.  */
> > +         if (shared_per_thread > 0 && threads > 0)
> > +           shared_per_thread /= threads;
> > +       }
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> >      {
> >        if (threads_l2 > 0)
> > -        core /= threads_l2;
> > +       shared_per_thread += core / threads_l2;
> >        shared += core;
> >      }
> >
> >    *shared_ptr = shared;
> > +  *shared_per_thread_ptr = shared_per_thread;
> >    *threads_ptr = threads;
> >  }
> >
> > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    /* Find out what brand of processor.  */
> >    long int data = -1;
> >    long int shared = -1;
> > +  long int shared_per_thread = -1;
> >    long int core = -1;
> >    unsigned int threads = 0;
> >    unsigned long int level1_icache_size = -1;
> > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size
> >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level4_cache_size
> >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> >      {
> >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> >
> > -      get_common_cache_info (&shared, &threads, core);
> > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> >      }
> >    else if (cpu_features->basic.kind == arch_kind_amd)
> >      {
> >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > +      shared_per_thread = shared;
> >
> >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        if (shared <= 0)
> >          /* No shared L3 cache.  All we have is the L2 cache.  */
> >         shared = core;
> > +
> > +      if (shared_per_thread <= 0)
> > +       shared_per_thread = shared;
> >      }
> >
> >    cpu_features->level1_icache_size = level1_icache_size;
> > @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     other methods. As well the point where the fact that non-temporal
> > +     stores are forced back to disk would already occured to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the the maximum thrashing
> > +     capped at 1/assosiativity. */
> > +  unsigned long int non_temporal_threshold = shared / 2;
> > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > +     hint. As well, there performance in highly parallel situations is
> > +     noticeably worse.  */
> > +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    else if (non_temporal_threshold > maximum_non_temporal_threshold)
> >      non_temporal_threshold = maximum_non_temporal_threshold;
> >
> > +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> > +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> > +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> > +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> > +
> >    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
> >    unsigned int minimum_rep_movsb_threshold;
> >    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> > @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >        && tunable_size <= maximum_non_temporal_threshold)
> >      non_temporal_threshold = tunable_size;
> >
> > +  tunable_size
> > +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> > +  if (tunable_size > minimum_non_temporal_threshold
> > +      && tunable_size <= maximum_non_temporal_threshold)
> > +    non_temporal_threshold_no_erms = tunable_size;
> > +
> >    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
> >    if (tunable_size > minimum_rep_movsb_threshold)
> >      rep_movsb_threshold = tunable_size;
> > @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> >                            minimum_non_temporal_threshold,
> >                            maximum_non_temporal_threshold);
> > +  TUNABLE_SET_WITH_BOUNDS (
> > +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> > +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
> >    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
> >                            minimum_rep_movsb_threshold, SIZE_MAX);
> >    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> > @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->data_cache_size = data;
> >    cpu_features->shared_cache_size = shared;
> >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> > +  cpu_features->non_temporal_threshold_no_erms
> > +      = non_temporal_threshold_no_erms;
> >    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> >    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> >    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> > diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> > index a1578e4665..5c09472a10 100644
> > --- a/sysdeps/x86/dl-diagnostics-cpu.c
> > +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> > @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
> >                              cpu_features->shared_cache_size);
> >    print_cpu_features_value ("non_temporal_threshold",
> >                              cpu_features->non_temporal_threshold);
> > +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> > +                           cpu_features->non_temporal_threshold_no_erms);
> >    print_cpu_features_value ("rep_movsb_threshold",
> >                              cpu_features->rep_movsb_threshold);
> >    print_cpu_features_value ("rep_movsb_stop_threshold",
> > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> > index feb7004036..aac6341716 100644
> > --- a/sysdeps/x86/dl-tunables.list
> > +++ b/sysdeps/x86/dl-tunables.list
> > @@ -30,6 +30,9 @@ glibc {
> >      x86_non_temporal_threshold {
> >        type: SIZE_T
> >      }
> > +    x86_non_temporal_threshold_no_erms {
> > +      type: SIZE_T
> > +    }
> >      x86_rep_movsb_threshold {
> >        type: SIZE_T
> >        # Since there is overhead to set up REP MOVSB operation, REP
> > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > index 40b8129d6a..df6c561eac 100644
> > --- a/sysdeps/x86/include/cpu-features.h
> > +++ b/sysdeps/x86/include/cpu-features.h
> > @@ -913,8 +913,10 @@ struct cpu_features
> >    /* Shared cache size for use in memory and string routines, typically
> >       L2 or L3 size.  */
> >    unsigned long int shared_cache_size;
> > -  /* Threshold to use non temporal store.  */
> > +  /* Threshold to use non temporal store if ERMS is available.  */
> >    unsigned long int non_temporal_threshold;
> > +  /* Threshold to use non temporal store if ERMS is not available.  */
> > +  unsigned long int non_temporal_threshold_no_erms;
> >    /* Threshold to use "rep movsb".  */
> >    unsigned long int rep_movsb_threshold;
> >    /* Threshold to stop using "rep movsb".  */
> > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > index d1b92785b0..856c3daf3b 100644
> > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > @@ -424,8 +424,16 @@ L(more_8x_vec):
> >         jb      L(more_8x_vec_backward_check_nop)
> >         /* Check if non-temporal move candidate.  */
> >  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> > -       /* Check non-temporal store threshold.  */
> > -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> > +       /* Check non-temporal store threshold if ERMS is not available.
> > +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> > +          If we went to L(movsb), then we enter at either the forward loop
> > +          directly or go to the backward loop.
> > +
> > +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> > +          NEVER be used in a control flow that could come from
> > +          L(movsb_more_2x_vec) without checking checkout
> > +          `__x86_rep_movsb_threshold` first.  */
> > +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
> >         ja      L(large_memcpy_2x)
> >  #endif
> >         /* To reach this point there cannot be overlap and dst > src. So
> > --
> > 2.34.1
> >
>
>
> --
> H.J.
  
H.J. Lu April 25, 2023, 2:55 a.m. UTC | #3
On Mon, Apr 24, 2023 at 7:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 2'.
> > >
> > > The original value (specifically dividing by `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however, leads to exceedingly low
> > > non-temporal thresholds and to using non-temporal stores in
> > > cases where `rep movsb` is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to main memory,
> > > so using them at a size much smaller than L3 can place soon-to-be-accessed
> > > data much further away than it otherwise would be. As well, modern machines
> > > are able to detect streaming patterns (especially if `rep movsb` is
> > > used) and provide LRU hints to the memory subsystem. This in effect
> > > caps the total amount of eviction at 1/cache_associativity, far below
> > > meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that led to this small threshold
> > > were done comparing non-temporal stores against standard cacheable
> > > stores. A better comparison (linked below) is against `rep movsb`, which,
> > > on the measured systems, is nearly 2x faster than non-temporal stores
> > > at the low end of the previous threshold, and within 10% even for copies
> > > over 100MB (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > faster up to `sizeof_L3`.
> > >
> > > Because there are still valid concerns about the performance of large
> > > memcpys using cacheable stores (both their direct performance and their
> > > effect on the rest of the system), this patch also introduces a new
> > > tunable, `__x86_shared_non_temporal_threshold_no_erms`, that continues
> > > to use the old calculation and is used when no ERMS memcpy is
> > > supported by the target.
> > >
> > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > stores were done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > ---
> > >  manual/tunables.texi                          | 16 +++-
> > >  sysdeps/x86/cacheinfo.h                       |  8 +-
> > >  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
> > >  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
> > >  sysdeps/x86/dl-tunables.list                  |  3 +
> > >  sysdeps/x86/include/cpu-features.h            |  4 +-
> > >  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
> > >  7 files changed, 98 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > index 130f94b2bc..8320e724f0 100644
> > > --- a/manual/tunables.texi
> > > +++ b/manual/tunables.texi
> > > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> > >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> > >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> > >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> >
> > We don't need this.   We can use
> >
> > if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> >
> > to check for ERMS processors.
> >
>
> Ah makes sense. Does that work for FSRM as well?

All FSRM processors are also ERMS processors.  In any case, memcpy
checks ERMS, not FSRM.

> > >  glibc.cpu.x86_shstk:
> > >  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
> > >  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> > > @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
> > >  @cindex shared_cache_size tunables
> > >  @cindex tunables, shared_cache_size
> > >  @cindex non_temporal_threshold tunables
> > > -@cindex tunables, non_temporal_threshold
> > > +@cindex non_temporal_threshold tunables_no_erms
> > > +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
> > >
> > >  @deftp {Tunable namespace} glibc.cpu
> > >  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> > > @@ -559,6 +561,18 @@ like memmove and memcpy.
> > >  This tunable is specific to i386 and x86-64.
> > >  @end deftp
> > >
> > > +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> > > +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} is similiar to
> > > +the above, but is used specifically when the ERMS feature is not
> > > +available. ERMS function are often implemented with optimizations for
> > > +large streaming workloads. This often makes it a better choice than
> > > +non-temporal stores for a wider-range of values. When ERMS is not
> > > +available, however, non-temporal stores become preferable at a much
> > > +lower threshold.
> > > +
> > > +This tunable is specific to i386 and x86-64.
> > > +@end deftp
> > > +
> > >  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
> > >  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
> > >  set threshold in bytes to start using "rep movsb".  The value must be
> > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > > index ec1bc142c4..1083bd6018 100644
> > > --- a/sysdeps/x86/cacheinfo.h
> > > +++ b/sysdeps/x86/cacheinfo.h
> > > @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
> > >  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
> > >  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
> > >
> > > -/* Threshold to use non temporal store.  */
> > > +/* Threshold to use non temporal store if ERMS is available.  */
> > >  long int __x86_shared_non_temporal_threshold attribute_hidden;
> > >
> > > +/* Threshold to use non temporal store if ERMS is not available.  */
> > > +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> > > +
> > >  /* Threshold to use Enhanced REP MOVSB.  */
> > >  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> > >
> > > @@ -77,6 +80,9 @@ init_cacheinfo (void)
> > >    __x86_shared_non_temporal_threshold
> > >      = cpu_features->non_temporal_threshold;
> > >
> > > +  __x86_shared_non_temporal_threshold_no_erms
> > > +      = cpu_features->non_temporal_threshold_no_erms;
> > > +
> > >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> > >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> > >    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index ec88945b39..94d5c6183a 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > >  }
> > >
> > >  static void
> > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > >                  long int core)
> > >  {
> > >    unsigned int eax;
> > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >    unsigned int family = cpu_features->basic.family;
> > >    unsigned int model = cpu_features->basic.model;
> > >    long int shared = *shared_ptr;
> > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > >    unsigned int threads = *threads_ptr;
> > >    bool inclusive_cache = true;
> > >    bool support_count_mask = true;
> > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >        /* Try L2 otherwise.  */
> > >        level  = 2;
> > >        shared = core;
> > > +      shared_per_thread = core;
> > >        threads_l2 = 0;
> > >        threads_l3 = -1;
> > >      }
> > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > >          }
> > >        else
> > >          {
> > > -intel_bug_no_cache_info:
> > > -          /* Assume that all logical threads share the highest cache
> > > -             level.  */
> > > -          threads
> > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > -              & 0xff);
> > > -        }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > > +       intel_bug_no_cache_info:
> > > +         /* Assume that all logical threads share the highest cache
> > > +            level.  */
> > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > +                    & 0xff);
> > > +
> > > +         /* Get per-thread size of highest level cache.  */
> > > +         if (shared_per_thread > 0 && threads > 0)
> > > +           shared_per_thread /= threads;
> > > +       }
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > >      {
> > >        if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > +       shared_per_thread += core / threads_l2;
> > >        shared += core;
> > >      }
> > >
> > >    *shared_ptr = shared;
> > > +  *shared_per_thread_ptr = shared_per_thread;
> > >    *threads_ptr = threads;
> > >  }
> > >
> > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    /* Find out what brand of processor.  */
> > >    long int data = -1;
> > >    long int shared = -1;
> > > +  long int shared_per_thread = -1;
> > >    long int core = -1;
> > >    unsigned int threads = 0;
> > >    unsigned long int level1_icache_size = -1;
> > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size
> > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level4_cache_size
> > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > >      {
> > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > >
> > > -      get_common_cache_info (&shared, &threads, core);
> > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > >      }
> > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > >      {
> > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > +      shared_per_thread = shared;
> > >
> > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        if (shared <= 0)
> > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > >         shared = core;
> > > +
> > > +      if (shared_per_thread <= 0)
> > > +       shared_per_thread = shared;
> > >      }
> > >
> > >    cpu_features->level1_icache_size = level1_icache_size;
> > > @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/2 of the
> > > +     size of the chip's cache. For most Intel and AMD processors with
> > > +     an initial release date between 2017 and 2023, the L3 cache is
> > > +     typically 18-64MB. Using 1/2 of the L3 size is meant to estimate
> > > +     the point where non-temporal stores begin outcompeting other
> > > +     methods, as well as the point where the write back to main memory
> > > +     that non-temporal stores force would already have occurred for
> > > +     the majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the maximum thrashing is
> > > +     capped at 1/associativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > +     hint. As well, their performance in highly parallel situations is
> > > +     noticeably worse.  */
> > > +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    else if (non_temporal_threshold > maximum_non_temporal_threshold)
> > >      non_temporal_threshold = maximum_non_temporal_threshold;
> > >
> > > +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> > > +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> > > +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> > > +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> > > +
> > >    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
> > >    unsigned int minimum_rep_movsb_threshold;
> > >    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> > > @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >        && tunable_size <= maximum_non_temporal_threshold)
> > >      non_temporal_threshold = tunable_size;
> > >
> > > +  tunable_size
> > > +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> > > +  if (tunable_size > minimum_non_temporal_threshold
> > > +      && tunable_size <= maximum_non_temporal_threshold)
> > > +    non_temporal_threshold_no_erms = tunable_size;
> > > +
> > >    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
> > >    if (tunable_size > minimum_rep_movsb_threshold)
> > >      rep_movsb_threshold = tunable_size;
> > > @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> > >                            minimum_non_temporal_threshold,
> > >                            maximum_non_temporal_threshold);
> > > +  TUNABLE_SET_WITH_BOUNDS (
> > > +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> > > +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
> > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
> > >                            minimum_rep_movsb_threshold, SIZE_MAX);
> > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> > > @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->data_cache_size = data;
> > >    cpu_features->shared_cache_size = shared;
> > >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> > > +  cpu_features->non_temporal_threshold_no_erms
> > > +      = non_temporal_threshold_no_erms;
> > >    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> > >    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> > >    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> > > diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> > > index a1578e4665..5c09472a10 100644
> > > --- a/sysdeps/x86/dl-diagnostics-cpu.c
> > > +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> > > @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
> > >                              cpu_features->shared_cache_size);
> > >    print_cpu_features_value ("non_temporal_threshold",
> > >                              cpu_features->non_temporal_threshold);
> > > +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> > > +                           cpu_features->non_temporal_threshold_no_erms);
> > >    print_cpu_features_value ("rep_movsb_threshold",
> > >                              cpu_features->rep_movsb_threshold);
> > >    print_cpu_features_value ("rep_movsb_stop_threshold",
> > > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> > > index feb7004036..aac6341716 100644
> > > --- a/sysdeps/x86/dl-tunables.list
> > > +++ b/sysdeps/x86/dl-tunables.list
> > > @@ -30,6 +30,9 @@ glibc {
> > >      x86_non_temporal_threshold {
> > >        type: SIZE_T
> > >      }
> > > +    x86_non_temporal_threshold_no_erms {
> > > +      type: SIZE_T
> > > +    }
> > >      x86_rep_movsb_threshold {
> > >        type: SIZE_T
> > >        # Since there is overhead to set up REP MOVSB operation, REP
> > > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > > index 40b8129d6a..df6c561eac 100644
> > > --- a/sysdeps/x86/include/cpu-features.h
> > > +++ b/sysdeps/x86/include/cpu-features.h
> > > @@ -913,8 +913,10 @@ struct cpu_features
> > >    /* Shared cache size for use in memory and string routines, typically
> > >       L2 or L3 size.  */
> > >    unsigned long int shared_cache_size;
> > > -  /* Threshold to use non temporal store.  */
> > > +  /* Threshold to use non temporal store if ERMS is available.  */
> > >    unsigned long int non_temporal_threshold;
> > > +  /* Threshold to use non temporal store if ERMS is not available.  */
> > > +  unsigned long int non_temporal_threshold_no_erms;
> > >    /* Threshold to use "rep movsb".  */
> > >    unsigned long int rep_movsb_threshold;
> > >    /* Threshold to stop using "rep movsb".  */
> > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > index d1b92785b0..856c3daf3b 100644
> > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > @@ -424,8 +424,16 @@ L(more_8x_vec):
> > >         jb      L(more_8x_vec_backward_check_nop)
> > >         /* Check if non-temporal move candidate.  */
> > >  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> > > -       /* Check non-temporal store threshold.  */
> > > -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> > > +       /* Check non-temporal store threshold if ERMS is not available.
> > > +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> > > +          If we went to L(movsb), then we enter at either the forward loop
> > > +          directly or go to the backward loop.
> > > +
> > > +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> > > +          NEVER be used in a control flow that could come from
> > > +          L(movsb_more_2x_vec) without checking
> > > +          `__x86_rep_movsb_threshold` first.  */
> > > +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
> > >         ja      L(large_memcpy_2x)
> > >  #endif
> > >         /* To reach this point there cannot be overlap and dst > src. So
> > > --
> > > 2.34.1
> > >
> >
> >
> > --
> > H.J.
  
Noah Goldstein April 25, 2023, 3:43 a.m. UTC | #4
On Mon, Apr 24, 2023 at 9:56 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 7:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 5:49 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Mon, Apr 24, 2023 at 3:30 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 2`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal threshholds and leads to using non-temporal stores in
> > > > cases where `rep movsb` is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to disk so using
> > > > it at a size much smaller than L3 can place soon to be accessed data
> > > > much further away than it otherwise could be. As well, modern machines
> > > > are able to detect streaming patterns (especially if `rep movsb` is
> > > > used) and provide LRU hints to the memory subsystem. This in affect
> > > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > > meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > > faster up to `sizeof_L3`.
> > > >
> > > > Because there are still valid concerns about performance of large
> > > > memcpy's using cacheable stores (both direct performance and on the
> > > > system), if `rep movsb` is not available this patch also introduces a
> > > > new tunable: `__x86_shared_non_temporal_threshold_no_erms` that will
> > > > continue to use the old calculation and be used if no ERMS memcpy is
> > > > supported by the target.
> > > >
> > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  manual/tunables.texi                          | 16 +++-
> > > >  sysdeps/x86/cacheinfo.h                       |  8 +-
> > > >  sysdeps/x86/dl-cacheinfo.h                    | 85 +++++++++++++------
> > > >  sysdeps/x86/dl-diagnostics-cpu.c              |  2 +
> > > >  sysdeps/x86/dl-tunables.list                  |  3 +
> > > >  sysdeps/x86/include/cpu-features.h            |  4 +-
> > > >  .../multiarch/memmove-vec-unaligned-erms.S    | 12 ++-
> > > >  7 files changed, 98 insertions(+), 32 deletions(-)
> > > >
> > > > diff --git a/manual/tunables.texi b/manual/tunables.texi
> > > > index 130f94b2bc..8320e724f0 100644
> > > > --- a/manual/tunables.texi
> > > > +++ b/manual/tunables.texi
> > > > @@ -52,6 +52,7 @@ glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
> > > >  glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
> > > >  glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
> > > >  glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > > > +glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
> > >
> > > We don't need this.   We can use
> > >
> > > if (CPU_FEATURE_USABLE_P (cpu_features, ERMS))
> > >
> > > to check for ERMS processors.
> > >
> >
> > Ah makes sense. Does that work for FSRM as well?
>
> All FSRM processors are also ERMS processors.  In any case, memcpy
> checks ERMS, not FSRM.

Fixed.
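
For reference, a minimal sketch of the direction this takes (reusing the
`shared` and `shared_per_thread` values already computed in
`dl_init_cacheinfo`; illustrative only, not the exact final code):

  unsigned long int non_temporal_threshold = shared / 2;
  /* Without ERMS, fall back to the old per-thread calculation: plain
     cacheable stores get no HW LRU hint and scale worse when many
     threads compete for the cache.  */
  if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS))
    non_temporal_threshold = shared_per_thread * 3 / 4;
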
>
> > > >  glibc.cpu.x86_shstk:
> > > >  glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
> > > >  glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
> > > > @@ -486,7 +487,8 @@ thread stack originally backup by Huge Pages to default pages.
> > > >  @cindex shared_cache_size tunables
> > > >  @cindex tunables, shared_cache_size
> > > >  @cindex non_temporal_threshold tunables
> > > > -@cindex tunables, non_temporal_threshold
> > > > +@cindex non_temporal_threshold_no_erms tunables
> > > > +@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
> > > >
> > > >  @deftp {Tunable namespace} glibc.cpu
> > > >  Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
> > > > @@ -559,6 +561,18 @@ like memmove and memcpy.
> > > >  This tunable is specific to i386 and x86-64.
> > > >  @end deftp
> > > >
> > > > +@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
> > > > +The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} tunable is
> > > > +similar to the above, but is used specifically when the ERMS feature
> > > > +is not available. ERMS functions are often implemented with
> > > > +optimizations for large streaming workloads. This often makes them a
> > > > +better choice than non-temporal stores for a wider range of sizes.
> > > > +When ERMS is not available, however, non-temporal stores become
> > > > +preferable at a much lower threshold.
> > > > +
> > > > +This tunable is specific to i386 and x86-64.
> > > > +@end deftp
> > > > +
> > > >  @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
> > > >  The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
> > > >  set threshold in bytes to start using "rep movsb".  The value must be
> > > > diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
> > > > index ec1bc142c4..1083bd6018 100644
> > > > --- a/sysdeps/x86/cacheinfo.h
> > > > +++ b/sysdeps/x86/cacheinfo.h
> > > > @@ -35,9 +35,12 @@ long int __x86_data_cache_size attribute_hidden = 32 * 1024;
> > > >  long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
> > > >  long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
> > > >
> > > > -/* Threshold to use non temporal store.  */
> > > > +/* Threshold to use non temporal store if ERMS is available.  */
> > > >  long int __x86_shared_non_temporal_threshold attribute_hidden;
> > > >
> > > > +/* Threshold to use non temporal store if ERMS is not available.  */
> > > > +long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
> > > > +
> > > >  /* Threshold to use Enhanced REP MOVSB.  */
> > > >  long int __x86_rep_movsb_threshold attribute_hidden = 2048;
> > > >
> > > > @@ -77,6 +80,9 @@ init_cacheinfo (void)
> > > >    __x86_shared_non_temporal_threshold
> > > >      = cpu_features->non_temporal_threshold;
> > > >
> > > > +  __x86_shared_non_temporal_threshold_no_erms
> > > > +      = cpu_features->non_temporal_threshold_no_erms;
> > > > +
> > > >    __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
> > > >    __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
> > > >    __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..94d5c6183a 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -407,7 +407,7 @@ handle_zhaoxin (int name)
> > > >  }
> > > >
> > > >  static void
> > > > -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > > +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
> > > >                  long int core)
> > > >  {
> > > >    unsigned int eax;
> > > > @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >    unsigned int family = cpu_features->basic.family;
> > > >    unsigned int model = cpu_features->basic.model;
> > > >    long int shared = *shared_ptr;
> > > > +  long int shared_per_thread = *shared_per_thread_ptr;
> > > >    unsigned int threads = *threads_ptr;
> > > >    bool inclusive_cache = true;
> > > >    bool support_count_mask = true;
> > > > @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >        /* Try L2 otherwise.  */
> > > >        level  = 2;
> > > >        shared = core;
> > > > +      shared_per_thread = core;
> > > >        threads_l2 = 0;
> > > >        threads_l3 = -1;
> > > >      }
> > > > @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
> > > >          }
> > > >        else
> > > >          {
> > > > -intel_bug_no_cache_info:
> > > > -          /* Assume that all logical threads share the highest cache
> > > > -             level.  */
> > > > -          threads
> > > > -            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > -              & 0xff);
> > > > -        }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > > +       intel_bug_no_cache_info:
> > > > +         /* Assume that all logical threads share the highest cache
> > > > +            level.  */
> > > > +         threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > > +                    & 0xff);
> > > > +
> > > > +         /* Get per-thread size of highest level cache.  */
> > > > +         if (shared_per_thread > 0 && threads > 0)
> > > > +           shared_per_thread /= threads;
> > > > +       }
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > >      {
> > > >        if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > +       shared_per_thread += core / threads_l2;
> > > >        shared += core;
> > > >      }
> > > >
> > > >    *shared_ptr = shared;
> > > > +  *shared_per_thread_ptr = shared_per_thread;
> > > >    *threads_ptr = threads;
> > > >  }
> > > >
> > > > @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    /* Find out what brand of processor.  */
> > > >    long int data = -1;
> > > >    long int shared = -1;
> > > > +  long int shared_per_thread = -1;
> > > >    long int core = -1;
> > > >    unsigned int threads = 0;
> > > >    unsigned long int level1_icache_size = -1;
> > > > @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
> > > >        core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
> > > >        shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size
> > > >         = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
> > > > @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level4_cache_size
> > > >         = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_zhaoxin)
> > > >      {
> > > >        data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
> > > >        level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
> > > >
> > > > -      get_common_cache_info (&shared, &threads, core);
> > > > +      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
> > > >      }
> > > >    else if (cpu_features->basic.kind == arch_kind_amd)
> > > >      {
> > > >        data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
> > > >        core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
> > > >        shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
> > > > +      shared_per_thread = shared;
> > > >
> > > >        level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
> > > >        level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
> > > > @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        if (shared <= 0)
> > > >          /* No shared L3 cache.  All we have is the L2 cache.  */
> > > >         shared = core;
> > > > +
> > > > +      if (shared_per_thread <= 0)
> > > > +       shared_per_thread = shared;
> > > >      }
> > > >
> > > >    cpu_features->level1_icache_size = level1_icache_size;
> > > > @@ -730,17 +738,24 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of the
> > > > +     size of the chip's cache. For most Intel and AMD processors with
> > > > +     an initial release date between 2017 and 2023, the L3 cache is
> > > > +     typically 18-64MB. Using 1/2 of the L3 size is meant to estimate
> > > > +     the point where non-temporal stores begin outcompeting other
> > > > +     methods, as well as the point where the write back to main memory
> > > > +     that non-temporal stores force would already have occurred for
> > > > +     the majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the maximum thrashing is
> > > > +     capped at 1/associativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > > +  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
> > > > +     a higher risk of actually thrashing the cache as they don't have a HW LRU
> > > > +     hint. As well, their performance in highly parallel situations is
> > > > +     noticeably worse.  */
> > > > +  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > @@ -754,6 +769,11 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    else if (non_temporal_threshold > maximum_non_temporal_threshold)
> > > >      non_temporal_threshold = maximum_non_temporal_threshold;
> > > >
> > > > +  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
> > > > +  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
> > > > +
> > > >    /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
> > > >    unsigned int minimum_rep_movsb_threshold;
> > > >    /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
> > > > @@ -802,6 +822,12 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >        && tunable_size <= maximum_non_temporal_threshold)
> > > >      non_temporal_threshold = tunable_size;
> > > >
> > > > +  tunable_size
> > > > +      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
> > > > +  if (tunable_size > minimum_non_temporal_threshold
> > > > +      && tunable_size <= maximum_non_temporal_threshold)
> > > > +    non_temporal_threshold_no_erms = tunable_size;
> > > > +
> > > >    tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
> > > >    if (tunable_size > minimum_rep_movsb_threshold)
> > > >      rep_movsb_threshold = tunable_size;
> > > > @@ -817,6 +843,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
> > > >                            minimum_non_temporal_threshold,
> > > >                            maximum_non_temporal_threshold);
> > > > +  TUNABLE_SET_WITH_BOUNDS (
> > > > +      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
> > > > +      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
> > > >                            minimum_rep_movsb_threshold, SIZE_MAX);
> > > >    TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
> > > > @@ -837,6 +866,8 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->data_cache_size = data;
> > > >    cpu_features->shared_cache_size = shared;
> > > >    cpu_features->non_temporal_threshold = non_temporal_threshold;
> > > > +  cpu_features->non_temporal_threshold_no_erms
> > > > +      = non_temporal_threshold_no_erms;
> > > >    cpu_features->rep_movsb_threshold = rep_movsb_threshold;
> > > >    cpu_features->rep_stosb_threshold = rep_stosb_threshold;
> > > >    cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
> > > > diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
> > > > index a1578e4665..5c09472a10 100644
> > > > --- a/sysdeps/x86/dl-diagnostics-cpu.c
> > > > +++ b/sysdeps/x86/dl-diagnostics-cpu.c
> > > > @@ -83,6 +83,8 @@ _dl_diagnostics_cpu (void)
> > > >                              cpu_features->shared_cache_size);
> > > >    print_cpu_features_value ("non_temporal_threshold",
> > > >                              cpu_features->non_temporal_threshold);
> > > > +  print_cpu_features_value ("non_temporal_threshold_no_erms",
> > > > +                           cpu_features->non_temporal_threshold_no_erms);
> > > >    print_cpu_features_value ("rep_movsb_threshold",
> > > >                              cpu_features->rep_movsb_threshold);
> > > >    print_cpu_features_value ("rep_movsb_stop_threshold",
> > > > diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
> > > > index feb7004036..aac6341716 100644
> > > > --- a/sysdeps/x86/dl-tunables.list
> > > > +++ b/sysdeps/x86/dl-tunables.list
> > > > @@ -30,6 +30,9 @@ glibc {
> > > >      x86_non_temporal_threshold {
> > > >        type: SIZE_T
> > > >      }
> > > > +    x86_non_temporal_threshold_no_erms {
> > > > +      type: SIZE_T
> > > > +    }
> > > >      x86_rep_movsb_threshold {
> > > >        type: SIZE_T
> > > >        # Since there is overhead to set up REP MOVSB operation, REP
> > > > diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
> > > > index 40b8129d6a..df6c561eac 100644
> > > > --- a/sysdeps/x86/include/cpu-features.h
> > > > +++ b/sysdeps/x86/include/cpu-features.h
> > > > @@ -913,8 +913,10 @@ struct cpu_features
> > > >    /* Shared cache size for use in memory and string routines, typically
> > > >       L2 or L3 size.  */
> > > >    unsigned long int shared_cache_size;
> > > > -  /* Threshold to use non temporal store.  */
> > > > +  /* Threshold to use non temporal store if ERMS is available.  */
> > > >    unsigned long int non_temporal_threshold;
> > > > +  /* Threshold to use non temporal store if ERMS is not available.  */
> > > > +  unsigned long int non_temporal_threshold_no_erms;
> > > >    /* Threshold to use "rep movsb".  */
> > > >    unsigned long int rep_movsb_threshold;
> > > >    /* Threshold to stop using "rep movsb".  */
> > > > diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > index d1b92785b0..856c3daf3b 100644
> > > > --- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > +++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
> > > > @@ -424,8 +424,16 @@ L(more_8x_vec):
> > > >         jb      L(more_8x_vec_backward_check_nop)
> > > >         /* Check if non-temporal move candidate.  */
> > > >  #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
> > > > -       /* Check non-temporal store threshold.  */
> > > > -       cmp     __x86_shared_non_temporal_threshold(%rip), %RDX_LP
> > > > +       /* Check non-temporal store threshold if ERMS is not available.
> > > > +          NB: This path is only hit if we jumped here from L(more_2x_vec).
> > > > +          If we went to L(movsb), then we enter at either the forward loop
> > > > +          directly or go to the backward loop.
> > > > +
> > > > +          WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
> > > > +          NEVER be used in a control flow that could come from
> > > > +          L(movsb_more_2x_vec) without checking
> > > > +          `__x86_rep_movsb_threshold` first.  */
> > > > +       cmp     __x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
> > > >         ja      L(large_memcpy_2x)
> > > >  #endif
> > > >         /* To reach this point there cannot be overlap and dst > src. So
> > > > --
> > > > 2.34.1
> > > >
> > >
> > >
> > > --
> > > H.J.
>
>
>
> --
> H.J.
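
To make the change of defaults concrete, a worked example with illustrative
numbers (a hypothetical 16-core socket with a 32 MB shared L3, not one of the
benchmarked machines):

  old default:          3/4 * (32 MB / 16 cores) = 1.5 MB
  new default (ERMS):   32 MB / 2                = 16 MB
  no-ERMS fallback:     3/4 * (32 MB / 16 cores) = 1.5 MB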
  

Patch

diff --git a/manual/tunables.texi b/manual/tunables.texi
index 130f94b2bc..8320e724f0 100644
--- a/manual/tunables.texi
+++ b/manual/tunables.texi
@@ -52,6 +52,7 @@  glibc.elision.skip_lock_busy: 3 (min: 0, max: 2147483647)
 glibc.malloc.top_pad: 0x20000 (min: 0x0, max: 0xffffffffffffffff)
 glibc.cpu.x86_rep_stosb_threshold: 0x800 (min: 0x1, max: 0xffffffffffffffff)
 glibc.cpu.x86_non_temporal_threshold: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
+glibc.cpu.x86_non_temporal_threshold_no_erms: 0xc0000 (min: 0x4040, max: 0xfffffffffffffff)
 glibc.cpu.x86_shstk:
 glibc.pthread.stack_cache_size: 0x2800000 (min: 0x0, max: 0xffffffffffffffff)
 glibc.cpu.hwcap_mask: 0x6 (min: 0x0, max: 0xffffffffffffffff)
@@ -486,7 +487,8 @@  thread stack originally backup by Huge Pages to default pages.
 @cindex shared_cache_size tunables
 @cindex tunables, shared_cache_size
 @cindex non_temporal_threshold tunables
-@cindex tunables, non_temporal_threshold
+@cindex non_temporal_threshold_no_erms tunables
+@cindex tunables, non_temporal_threshold, non_temporal_threshold_no_erms
 
 @deftp {Tunable namespace} glibc.cpu
 Behavior of @theglibc{} can be tuned to assume specific hardware capabilities
@@ -559,6 +561,18 @@  like memmove and memcpy.
 This tunable is specific to i386 and x86-64.
 @end deftp
 
+@deftp Tunable glibc.cpu.x86_non_temporal_threshold_no_erms
+The @code{glibc.cpu.x86_non_temporal_threshold_no_erms} tunable is
+similar to the above, but is used specifically when the ERMS feature
+is not available. ERMS functions are often implemented with
+optimizations for large streaming workloads. This often makes them a
+better choice than non-temporal stores for a wider range of sizes.
+When ERMS is not available, however, non-temporal stores become
+preferable at a much lower threshold.
+
+This tunable is specific to i386 and x86-64.
+@end deftp
+
 @deftp Tunable glibc.cpu.x86_rep_movsb_threshold
 The @code{glibc.cpu.x86_rep_movsb_threshold} tunable allows the user to
 set threshold in bytes to start using "rep movsb".  The value must be
diff --git a/sysdeps/x86/cacheinfo.h b/sysdeps/x86/cacheinfo.h
index ec1bc142c4..1083bd6018 100644
--- a/sysdeps/x86/cacheinfo.h
+++ b/sysdeps/x86/cacheinfo.h
@@ -35,9 +35,12 @@  long int __x86_data_cache_size attribute_hidden = 32 * 1024;
 long int __x86_shared_cache_size_half attribute_hidden = 1024 * 1024 / 2;
 long int __x86_shared_cache_size attribute_hidden = 1024 * 1024;
 
-/* Threshold to use non temporal store.  */
+/* Threshold to use non temporal store if ERMS is available.  */
 long int __x86_shared_non_temporal_threshold attribute_hidden;
 
+/* Threshold to use non temporal store if ERMS is not available.  */
+long int __x86_shared_non_temporal_threshold_no_erms attribute_hidden;
+
 /* Threshold to use Enhanced REP MOVSB.  */
 long int __x86_rep_movsb_threshold attribute_hidden = 2048;
 
@@ -77,6 +80,9 @@  init_cacheinfo (void)
   __x86_shared_non_temporal_threshold
     = cpu_features->non_temporal_threshold;
 
+  __x86_shared_non_temporal_threshold_no_erms
+      = cpu_features->non_temporal_threshold_no_erms;
+
   __x86_rep_movsb_threshold = cpu_features->rep_movsb_threshold;
   __x86_rep_stosb_threshold = cpu_features->rep_stosb_threshold;
   __x86_rep_movsb_stop_threshold =  cpu_features->rep_movsb_stop_threshold;
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..94d5c6183a 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -407,7 +407,7 @@  handle_zhaoxin (int name)
 }
 
 static void
-get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
+get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr,
                 long int core)
 {
   unsigned int eax;
@@ -426,6 +426,7 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
   unsigned int family = cpu_features->basic.family;
   unsigned int model = cpu_features->basic.model;
   long int shared = *shared_ptr;
+  long int shared_per_thread = *shared_per_thread_ptr;
   unsigned int threads = *threads_ptr;
   bool inclusive_cache = true;
   bool support_count_mask = true;
@@ -441,6 +442,7 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
       /* Try L2 otherwise.  */
       level  = 2;
       shared = core;
+      shared_per_thread = core;
       threads_l2 = 0;
       threads_l3 = -1;
     }
@@ -597,29 +599,28 @@  get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr,
         }
       else
         {
-intel_bug_no_cache_info:
-          /* Assume that all logical threads share the highest cache
-             level.  */
-          threads
-            = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
-	       & 0xff);
-        }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
+	intel_bug_no_cache_info:
+	  /* Assume that all logical threads share the highest cache
+	     level.  */
+	  threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
+		     & 0xff);
+
+	  /* Get per-thread size of highest level cache.  */
+	  if (shared_per_thread > 0 && threads > 0)
+	    shared_per_thread /= threads;
+	}
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
     {
       if (threads_l2 > 0)
-        core /= threads_l2;
+	shared_per_thread += core / threads_l2;
       shared += core;
     }
 
   *shared_ptr = shared;
+  *shared_per_thread_ptr = shared_per_thread;
   *threads_ptr = threads;
 }
 
@@ -629,6 +630,7 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   /* Find out what brand of processor.  */
   long int data = -1;
   long int shared = -1;
+  long int shared_per_thread = -1;
   long int core = -1;
   unsigned int threads = 0;
   unsigned long int level1_icache_size = -1;
@@ -649,6 +651,7 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features);
       core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features);
       shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features);
+      shared_per_thread = shared;
 
       level1_icache_size
 	= handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features);
@@ -672,13 +675,14 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       level4_cache_size
 	= handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_zhaoxin)
     {
       data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -692,13 +696,14 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC);
       level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE);
 
-      get_common_cache_info (&shared, &threads, core);
+      get_common_cache_info (&shared, &shared_per_thread, &threads, core);
     }
   else if (cpu_features->basic.kind == arch_kind_amd)
     {
       data = handle_amd (_SC_LEVEL1_DCACHE_SIZE);
       core = handle_amd (_SC_LEVEL2_CACHE_SIZE);
       shared = handle_amd (_SC_LEVEL3_CACHE_SIZE);
+      shared_per_thread = shared;
 
       level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE);
       level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE);
@@ -715,6 +720,9 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       if (shared <= 0)
         /* No shared L3 cache.  All we have is the L2 cache.  */
 	shared = core;
+
+      if (shared_per_thread <= 0)
+	shared_per_thread = shared;
     }
 
   cpu_features->level1_icache_size = level1_icache_size;
@@ -730,17 +738,24 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of the
+     size of the chip's cache. For most Intel and AMD processors with
+     an initial release date between 2017 and 2023, the L3 cache is
+     typically 18-64MB. Using 1/2 of the L3 size is meant to estimate
+     the point where non-temporal stores begin outcompeting other
+     methods, as well as the point where the write back to main memory
+     that non-temporal stores force would already have occurred for
+     the majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the maximum thrashing is
+     capped at 1/associativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
+  /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
+     a higher risk of actually thrashing the cache as they don't have a HW LRU
+     hint. As well, their performance in highly parallel situations is
+     noticeably worse.  */
+  unsigned long int non_temporal_threshold_no_erms = shared_per_thread * 3 / 4;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
@@ -754,6 +769,11 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   else if (non_temporal_threshold > maximum_non_temporal_threshold)
     non_temporal_threshold = maximum_non_temporal_threshold;
 
+  if (non_temporal_threshold_no_erms < minimum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = minimum_non_temporal_threshold;
+  else if (non_temporal_threshold_no_erms > maximum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = maximum_non_temporal_threshold;
+
   /* NB: The REP MOVSB threshold must be greater than VEC_SIZE * 8.  */
   unsigned int minimum_rep_movsb_threshold;
   /* NB: The default REP MOVSB threshold is 4096 * (VEC_SIZE / 16) for
@@ -802,6 +822,12 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
       && tunable_size <= maximum_non_temporal_threshold)
     non_temporal_threshold = tunable_size;
 
+  tunable_size
+      = TUNABLE_GET (x86_non_temporal_threshold_no_erms, long int, NULL);
+  if (tunable_size > minimum_non_temporal_threshold
+      && tunable_size <= maximum_non_temporal_threshold)
+    non_temporal_threshold_no_erms = tunable_size;
+
   tunable_size = TUNABLE_GET (x86_rep_movsb_threshold, long int, NULL);
   if (tunable_size > minimum_rep_movsb_threshold)
     rep_movsb_threshold = tunable_size;
@@ -817,6 +843,9 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   TUNABLE_SET_WITH_BOUNDS (x86_non_temporal_threshold, non_temporal_threshold,
 			   minimum_non_temporal_threshold,
 			   maximum_non_temporal_threshold);
+  TUNABLE_SET_WITH_BOUNDS (
+      x86_non_temporal_threshold_no_erms, non_temporal_threshold_no_erms,
+      minimum_non_temporal_threshold, maximum_non_temporal_threshold);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_movsb_threshold, rep_movsb_threshold,
 			   minimum_rep_movsb_threshold, SIZE_MAX);
   TUNABLE_SET_WITH_BOUNDS (x86_rep_stosb_threshold, rep_stosb_threshold, 1,
@@ -837,6 +866,8 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->data_cache_size = data;
   cpu_features->shared_cache_size = shared;
   cpu_features->non_temporal_threshold = non_temporal_threshold;
+  cpu_features->non_temporal_threshold_no_erms
+      = non_temporal_threshold_no_erms;
   cpu_features->rep_movsb_threshold = rep_movsb_threshold;
   cpu_features->rep_stosb_threshold = rep_stosb_threshold;
   cpu_features->rep_movsb_stop_threshold = rep_movsb_stop_threshold;
diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
index a1578e4665..5c09472a10 100644
--- a/sysdeps/x86/dl-diagnostics-cpu.c
+++ b/sysdeps/x86/dl-diagnostics-cpu.c
@@ -83,6 +83,8 @@  _dl_diagnostics_cpu (void)
                             cpu_features->shared_cache_size);
   print_cpu_features_value ("non_temporal_threshold",
                             cpu_features->non_temporal_threshold);
+  print_cpu_features_value ("non_temporal_threshold_no_erms",
+			    cpu_features->non_temporal_threshold_no_erms);
   print_cpu_features_value ("rep_movsb_threshold",
                             cpu_features->rep_movsb_threshold);
   print_cpu_features_value ("rep_movsb_stop_threshold",
diff --git a/sysdeps/x86/dl-tunables.list b/sysdeps/x86/dl-tunables.list
index feb7004036..aac6341716 100644
--- a/sysdeps/x86/dl-tunables.list
+++ b/sysdeps/x86/dl-tunables.list
@@ -30,6 +30,9 @@  glibc {
     x86_non_temporal_threshold {
       type: SIZE_T
     }
+    x86_non_temporal_threshold_no_erms {
+      type: SIZE_T
+    }
     x86_rep_movsb_threshold {
       type: SIZE_T
       # Since there is overhead to set up REP MOVSB operation, REP
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..df6c561eac 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -913,8 +913,10 @@  struct cpu_features
   /* Shared cache size for use in memory and string routines, typically
      L2 or L3 size.  */
   unsigned long int shared_cache_size;
-  /* Threshold to use non temporal store.  */
+  /* Threshold to use non temporal store if ERMS is available.  */
   unsigned long int non_temporal_threshold;
+  /* Threshold to use non temporal store if ERMS is not available.  */
+  unsigned long int non_temporal_threshold_no_erms;
   /* Threshold to use "rep movsb".  */
   unsigned long int rep_movsb_threshold;
   /* Threshold to stop using "rep movsb".  */
diff --git a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
index d1b92785b0..856c3daf3b 100644
--- a/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
+++ b/sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S
@@ -424,8 +424,16 @@  L(more_8x_vec):
 	jb	L(more_8x_vec_backward_check_nop)
 	/* Check if non-temporal move candidate.  */
 #if (defined USE_MULTIARCH || VEC_SIZE == 16) && IS_IN (libc)
-	/* Check non-temporal store threshold.  */
-	cmp	__x86_shared_non_temporal_threshold(%rip), %RDX_LP
+	/* Check non-temporal store threshold if ERMS is not available.
+	   NB: This path is only hit if we jumped here from L(more_2x_vec).
+	   If we went to L(movsb), then we enter at either the forward loop
+	   directly or go to the backward loop.
+
+	   WARNING: `__x86_shared_non_temporal_threshold_no_erms` should
+	   NEVER be used in a control flow that could come from
+	   L(movsb_more_2x_vec) without checking
+	   `__x86_rep_movsb_threshold` first.  */
+	cmp	__x86_shared_non_temporal_threshold_no_erms(%rip), %RDX_LP
 	ja	L(large_memcpy_2x)
 #endif
 	/* To reach this point there cannot be overlap and dst > src. So