[v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`

Message ID 20230424050329.1501348-1-goldstein.w.n@gmail.com
State Superseded
Series [v1] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 2`

Checks

Context                 Check     Description
dj/TryBot-32bit         success   Build for i686
dj/TryBot-apply_patch   success   Patch applied to master at the time it was sent

Commit Message

Noah Goldstein April 24, 2023, 5:03 a.m. UTC
  The current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 2'.

The original value (specifically dividing the `ncores_per_socket`) was
done to limit the amount of other threads' data a `memcpy`/`memset`
could evict.

Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where `rep movsb` is multiple times faster.

Furthermore, non-temporal stores are written directly to disk so using
them at a size much smaller than L3 can place soon-to-be-accessed data
much further away than it otherwise would be. As well, modern machines
are able to detect streaming patterns (especially if `rep movsb` is
used) and provide LRU hints to the memory subsystem. This in effect
caps the total amount of eviction at 1/cache_associativity, far below
meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to `rep movsb` which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, `rep movsb` is ~2x
faster up to `sizeof_L3`.

Benchmarks comparing non-temporal stores, rep movsb, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks

Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
---
 sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
 1 file changed, 14 insertions(+), 21 deletions(-)
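
To put numbers on the change (purely illustrative; the cache size and
core count below are hypothetical, not taken from the benchmark data):
on a part with a 32 MB L3 shared by 16 cores, the old formula gives
roughly 1.5 MB while the new one gives 16 MB.

#include <stdio.h>

int
main (void)
{
  /* Hypothetical example: a 32 MB L3 shared by 16 cores.  */
  unsigned long int sizeof_L3 = 32UL * 1024 * 1024;
  unsigned long int ncores_per_socket = 16;

  /* Old default: 3/4 of one core's share of the L3.  */
  unsigned long int old_threshold = sizeof_L3 * 3 / 4 / ncores_per_socket;
  /* New default: half of the whole L3.  */
  unsigned long int new_threshold = sizeof_L3 / 2;

  printf ("old threshold: %lu bytes (~1.5 MB)\n", old_threshold);
  printf ("new threshold: %lu bytes (16 MB)\n", new_threshold);
  return 0;
}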
  

Comments

H.J. Lu April 24, 2023, 6:09 p.m. UTC | #1
On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> ncores_per_socket'. This patch updates that value to roughly
> 'sizeof_L3 / 2`
>
> The original value (specifically dividing the `ncores_per_socket`) was
> done to limit the amount of other threads' data a `memcpy`/`memset`
> could evict.
>
> Dividing by 'ncores_per_socket', however leads to exceedingly low
> non-temporal threshholds and leads to using non-temporal stores in
> cases where `rep movsb` is multiple times faster.
>
> Furthermore, non-temporal stores are written directly to disk so using

Why is "disk" here?

> it at a size much smaller than L3 can place soon to be accessed data
> much further away than it otherwise could be. As well, modern machines
> are able to detect streaming patterns (especially if `rep movsb` is
> used) and provide LRU hints to the memory subsystem. This in affect
> caps the total amount of eviction at 1/cache_assosiativity, far below
> meaningfully thrashing the entire cache.
>
> As best I can tell, the benchmarks that lead this small threshold
> where done comparing non-temporal stores versus standard cacheable
> stores. A better comparison (linked below) is to be `rep movsb` which,
> on the measure systems, is nearly 2x faster than non-temporal stores
> at the low-end of the previous threshold, and within 10% for over
> 100MB copies (well past even the current threshold). In cases with a
> low number of threads competing for bandwidth, `rep movsb` is ~2x
> faster up to `sizeof_L3`.
>

Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?

> Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> stores where done using:
> https://github.com/goldsteinn/memcpy-nt-benchmarks
>
> Sheets results (also available in pdf on the github):
> https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> ---
>  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
>  1 file changed, 14 insertions(+), 21 deletions(-)
>
> diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> index ec88945b39..f25309dbc8 100644
> --- a/sysdeps/x86/dl-cacheinfo.h
> +++ b/sysdeps/x86/dl-cacheinfo.h
> @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
>              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
>                & 0xff);
>          }
> -
> -        /* Cap usage of highest cache level to the number of supported
> -           threads.  */
> -        if (shared > 0 && threads > 0)
> -          shared /= threads;
>      }
>
>    /* Account for non-inclusive L2 and L3 caches.  */
>    if (!inclusive_cache)
> -    {
> -      if (threads_l2 > 0)
> -        core /= threads_l2;
> -      shared += core;
> -    }
> +    shared += core;
>
>    *shared_ptr = shared;
>    *threads_ptr = threads;
> @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
>    cpu_features->level3_cache_linesize = level3_cache_linesize;
>    cpu_features->level4_cache_size = level4_cache_size;
>
> -  /* The default setting for the non_temporal threshold is 3/4 of one
> -     thread's share of the chip's cache. For most Intel and AMD processors
> -     with an initial release date between 2017 and 2020, a thread's typical
> -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> -     in cache after a maximum temporal copy, which will maintain
> -     in cache a reasonable portion of the thread's stack and other
> -     active data. If the threshold is set higher than one thread's
> -     share of the cache, it has a substantial risk of negatively
> -     impacting the performance of other threads running on the chip. */
> -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> +  /* The default setting for the non_temporal threshold is 1/2 of size
> +     of chip's cache. For most Intel and AMD processors with an
> +     initial release date between 2017 and 2023, a thread's typical
> +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> +     estimate the point where non-temporal stores begin outcompeting
> +     other methods. As well the point where the fact that non-temporal
> +     stores are forced back to disk would already occured to the
> +     majority of the lines in the copy. Note, concerns about the
> +     entire L3 cache being evicted by the copy are mostly alleviated
> +     by the fact that modern HW detects streaming patterns and
> +     provides proper LRU hints so that the the maximum thrashing
> +     capped at 1/assosiativity. */
> +  unsigned long int non_temporal_threshold = shared / 2;
>    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
>       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
>       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> --
> 2.34.1
>
  
Noah Goldstein April 24, 2023, 6:34 p.m. UTC | #2
On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > ncores_per_socket'. This patch updates that value to roughly
> > 'sizeof_L3 / 2`
> >
> > The original value (specifically dividing the `ncores_per_socket`) was
> > done to limit the amount of other threads' data a `memcpy`/`memset`
> > could evict.
> >
> > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > non-temporal threshholds and leads to using non-temporal stores in
> > cases where `rep movsb` is multiple times faster.
> >
> > Furthermore, non-temporal stores are written directly to disk so using
>
> Why is "disk" here?
I mean main-memory. Will update in V2.
>
> > it at a size much smaller than L3 can place soon to be accessed data
> > much further away than it otherwise could be. As well, modern machines
> > are able to detect streaming patterns (especially if `rep movsb` is
> > used) and provide LRU hints to the memory subsystem. This in affect
> > caps the total amount of eviction at 1/cache_assosiativity, far below
> > meaningfully thrashing the entire cache.
> >
> > As best I can tell, the benchmarks that lead this small threshold
> > where done comparing non-temporal stores versus standard cacheable
> > stores. A better comparison (linked below) is to be `rep movsb` which,
> > on the measure systems, is nearly 2x faster than non-temporal stores
> > at the low-end of the previous threshold, and within 10% for over
> > 100MB copies (well past even the current threshold). In cases with a
> > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > faster up to `sizeof_L3`.
> >
>
> Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
>
I think that would probably make sense. We see a more meaningful regression
for larger sizes when using the standard store loop. I think /nthreads is
still too small.
How about
if ERMS: L3/2
else: L3 / (2 * sqrt(nthreads)) ?
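
To make that concrete, a minimal sketch of the idea (the names
`has_erms`, `shared` and `threads` are stand-ins for the real
cpu-feature and cache values; this is not code from the patch):

#include <math.h>

/* Rough sketch of the alternative floated above; illustrative only.  */
static unsigned long int
proposed_nt_threshold (int has_erms, unsigned long int shared,
                       unsigned long int threads)
{
  if (has_erms)
    /* With ERMS, `rep movsb` stays competitive, so cut over at L3 / 2.  */
    return shared / 2;
  /* Without ERMS, scale down by sqrt(nthreads) instead of nthreads.  */
  if (threads == 0)
    threads = 1;
  return shared / (2 * (unsigned long int) sqrt ((double) threads));
}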


> > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > stores where done using:
> > https://github.com/goldsteinn/memcpy-nt-benchmarks
> >
> > Sheets results (also available in pdf on the github):
> > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > ---
> >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> >  1 file changed, 14 insertions(+), 21 deletions(-)
> >
> > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > index ec88945b39..f25309dbc8 100644
> > --- a/sysdeps/x86/dl-cacheinfo.h
> > +++ b/sysdeps/x86/dl-cacheinfo.h
> > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> >                & 0xff);
> >          }
> > -
> > -        /* Cap usage of highest cache level to the number of supported
> > -           threads.  */
> > -        if (shared > 0 && threads > 0)
> > -          shared /= threads;
> >      }
> >
> >    /* Account for non-inclusive L2 and L3 caches.  */
> >    if (!inclusive_cache)
> > -    {
> > -      if (threads_l2 > 0)
> > -        core /= threads_l2;
> > -      shared += core;
> > -    }
> > +    shared += core;
> >
> >    *shared_ptr = shared;
> >    *threads_ptr = threads;
> > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> >    cpu_features->level4_cache_size = level4_cache_size;
> >
> > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > -     thread's share of the chip's cache. For most Intel and AMD processors
> > -     with an initial release date between 2017 and 2020, a thread's typical
> > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > -     in cache after a maximum temporal copy, which will maintain
> > -     in cache a reasonable portion of the thread's stack and other
> > -     active data. If the threshold is set higher than one thread's
> > -     share of the cache, it has a substantial risk of negatively
> > -     impacting the performance of other threads running on the chip. */
> > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > +     of chip's cache. For most Intel and AMD processors with an
> > +     initial release date between 2017 and 2023, a thread's typical
> > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > +     estimate the point where non-temporal stores begin outcompeting
> > +     other methods. As well the point where the fact that non-temporal
> > +     stores are forced back to disk would already occured to the
> > +     majority of the lines in the copy. Note, concerns about the
> > +     entire L3 cache being evicted by the copy are mostly alleviated
> > +     by the fact that modern HW detects streaming patterns and
> > +     provides proper LRU hints so that the the maximum thrashing
> > +     capped at 1/assosiativity. */
> > +  unsigned long int non_temporal_threshold = shared / 2;
> >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > --
> > 2.34.1
> >
>
>
> --
> H.J.
  
H.J. Lu April 24, 2023, 8:44 p.m. UTC | #3
On Mon, Apr 24, 2023 at 11:34 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> >
> > On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > >
> > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > ncores_per_socket'. This patch updates that value to roughly
> > > 'sizeof_L3 / 2`
> > >
> > > The original value (specifically dividing the `ncores_per_socket`) was
> > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > could evict.
> > >
> > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > non-temporal threshholds and leads to using non-temporal stores in
> > > cases where `rep movsb` is multiple times faster.
> > >
> > > Furthermore, non-temporal stores are written directly to disk so using
> >
> > Why is "disk" here?
> I mean main-memory. Will update in V2.
> >
> > > it at a size much smaller than L3 can place soon to be accessed data
> > > much further away than it otherwise could be. As well, modern machines
> > > are able to detect streaming patterns (especially if `rep movsb` is
> > > used) and provide LRU hints to the memory subsystem. This in affect
> > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > meaningfully thrashing the entire cache.
> > >
> > > As best I can tell, the benchmarks that lead this small threshold
> > > where done comparing non-temporal stores versus standard cacheable
> > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > at the low-end of the previous threshold, and within 10% for over
> > > 100MB copies (well past even the current threshold). In cases with a
> > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > faster up to `sizeof_L3`.
> > >
> >
> > Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
> >
> Think that would probably make sense. We see more meaningful regression
> for larger sizes when using standard store loop. Think /nthreads is
> still too small.
> How about
> if ERMS: L3/2
> else: L3 / (2 * sqrt(nthreads)) ?

I think we should leave the non-ERMS case unchanged.

>
>
> > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > stores where done using:
> > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > >
> > > Sheets results (also available in pdf on the github):
> > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > ---
> > >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> > >  1 file changed, 14 insertions(+), 21 deletions(-)
> > >
> > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > index ec88945b39..f25309dbc8 100644
> > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> > >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > >                & 0xff);
> > >          }
> > > -
> > > -        /* Cap usage of highest cache level to the number of supported
> > > -           threads.  */
> > > -        if (shared > 0 && threads > 0)
> > > -          shared /= threads;
> > >      }
> > >
> > >    /* Account for non-inclusive L2 and L3 caches.  */
> > >    if (!inclusive_cache)
> > > -    {
> > > -      if (threads_l2 > 0)
> > > -        core /= threads_l2;
> > > -      shared += core;
> > > -    }
> > > +    shared += core;
> > >
> > >    *shared_ptr = shared;
> > >    *threads_ptr = threads;
> > > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > >    cpu_features->level4_cache_size = level4_cache_size;
> > >
> > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > -     in cache after a maximum temporal copy, which will maintain
> > > -     in cache a reasonable portion of the thread's stack and other
> > > -     active data. If the threshold is set higher than one thread's
> > > -     share of the cache, it has a substantial risk of negatively
> > > -     impacting the performance of other threads running on the chip. */
> > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > +     of chip's cache. For most Intel and AMD processors with an
> > > +     initial release date between 2017 and 2023, a thread's typical
> > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > +     estimate the point where non-temporal stores begin outcompeting
> > > +     other methods. As well the point where the fact that non-temporal
> > > +     stores are forced back to disk would already occured to the
> > > +     majority of the lines in the copy. Note, concerns about the
> > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > +     by the fact that modern HW detects streaming patterns and
> > > +     provides proper LRU hints so that the the maximum thrashing
> > > +     capped at 1/assosiativity. */
> > > +  unsigned long int non_temporal_threshold = shared / 2;
> > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > --
> > > 2.34.1
> > >
> >
> >
> > --
> > H.J.
  
Noah Goldstein April 24, 2023, 10:30 p.m. UTC | #4
On Mon, Apr 24, 2023 at 3:44 PM H.J. Lu <hjl.tools@gmail.com> wrote:
>
> On Mon, Apr 24, 2023 at 11:34 AM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> >
> > On Mon, Apr 24, 2023 at 1:10 PM H.J. Lu <hjl.tools@gmail.com> wrote:
> > >
> > > On Sun, Apr 23, 2023 at 10:03 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
> > > >
> > > > Current `non_temporal_threshold` set to roughly '3/4 * sizeof_L3 /
> > > > ncores_per_socket'. This patch updates that value to roughly
> > > > 'sizeof_L3 / 2`
> > > >
> > > > The original value (specifically dividing the `ncores_per_socket`) was
> > > > done to limit the amount of other threads' data a `memcpy`/`memset`
> > > > could evict.
> > > >
> > > > Dividing by 'ncores_per_socket', however leads to exceedingly low
> > > > non-temporal threshholds and leads to using non-temporal stores in
> > > > cases where `rep movsb` is multiple times faster.
> > > >
> > > > Furthermore, non-temporal stores are written directly to disk so using
> > >
> > > Why is "disk" here?
> > I mean main-memory. Will update in V2.
> > >
> > > > it at a size much smaller than L3 can place soon to be accessed data
> > > > much further away than it otherwise could be. As well, modern machines
> > > > are able to detect streaming patterns (especially if `rep movsb` is
> > > > used) and provide LRU hints to the memory subsystem. This in affect
> > > > caps the total amount of eviction at 1/cache_assosiativity, far below
> > > > meaningfully thrashing the entire cache.
> > > >
> > > > As best I can tell, the benchmarks that lead this small threshold
> > > > where done comparing non-temporal stores versus standard cacheable
> > > > stores. A better comparison (linked below) is to be `rep movsb` which,
> > > > on the measure systems, is nearly 2x faster than non-temporal stores
> > > > at the low-end of the previous threshold, and within 10% for over
> > > > 100MB copies (well past even the current threshold). In cases with a
> > > > low number of threads competing for bandwidth, `rep movsb` is ~2x
> > > > faster up to `sizeof_L3`.
> > > >
> > >
> > > Should we limit it to processors with ERMS  (Enhanced REP MOVSB/STOSB)?
> > >
> > Think that would probably make sense. We see more meaningful regression
> > for larger sizes when using standard store loop. Think /nthreads is
> > still too small.
> > How about
> > if ERMS: L3/2
> > else: L3 / (2 * sqrt(nthreads)) ?
>
> I think we should leave the non-ERMS case unchanged.

Done
>
> >
> >
> > > > Benchmarks comparing non-temporal stores, rep movsb, and cacheable
> > > > stores where done using:
> > > > https://github.com/goldsteinn/memcpy-nt-benchmarks
> > > >
> > > > Sheets results (also available in pdf on the github):
> > > > https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml
> > > > ---
> > > >  sysdeps/x86/dl-cacheinfo.h | 35 ++++++++++++++---------------------
> > > >  1 file changed, 14 insertions(+), 21 deletions(-)
> > > >
> > > > diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
> > > > index ec88945b39..f25309dbc8 100644
> > > > --- a/sysdeps/x86/dl-cacheinfo.h
> > > > +++ b/sysdeps/x86/dl-cacheinfo.h
> > > > @@ -604,20 +604,11 @@ intel_bug_no_cache_info:
> > > >              = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
> > > >                & 0xff);
> > > >          }
> > > > -
> > > > -        /* Cap usage of highest cache level to the number of supported
> > > > -           threads.  */
> > > > -        if (shared > 0 && threads > 0)
> > > > -          shared /= threads;
> > > >      }
> > > >
> > > >    /* Account for non-inclusive L2 and L3 caches.  */
> > > >    if (!inclusive_cache)
> > > > -    {
> > > > -      if (threads_l2 > 0)
> > > > -        core /= threads_l2;
> > > > -      shared += core;
> > > > -    }
> > > > +    shared += core;
> > > >
> > > >    *shared_ptr = shared;
> > > >    *threads_ptr = threads;
> > > > @@ -730,17 +721,19 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
> > > >    cpu_features->level3_cache_linesize = level3_cache_linesize;
> > > >    cpu_features->level4_cache_size = level4_cache_size;
> > > >
> > > > -  /* The default setting for the non_temporal threshold is 3/4 of one
> > > > -     thread's share of the chip's cache. For most Intel and AMD processors
> > > > -     with an initial release date between 2017 and 2020, a thread's typical
> > > > -     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
> > > > -     threshold leaves 125 KBytes to 500 KBytes of the thread's data
> > > > -     in cache after a maximum temporal copy, which will maintain
> > > > -     in cache a reasonable portion of the thread's stack and other
> > > > -     active data. If the threshold is set higher than one thread's
> > > > -     share of the cache, it has a substantial risk of negatively
> > > > -     impacting the performance of other threads running on the chip. */
> > > > -  unsigned long int non_temporal_threshold = shared * 3 / 4;
> > > > +  /* The default setting for the non_temporal threshold is 1/2 of size
> > > > +     of chip's cache. For most Intel and AMD processors with an
> > > > +     initial release date between 2017 and 2023, a thread's typical
> > > > +     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
> > > > +     estimate the point where non-temporal stores begin outcompeting
> > > > +     other methods. As well the point where the fact that non-temporal
> > > > +     stores are forced back to disk would already occured to the
> > > > +     majority of the lines in the copy. Note, concerns about the
> > > > +     entire L3 cache being evicted by the copy are mostly alleviated
> > > > +     by the fact that modern HW detects streaming patterns and
> > > > +     provides proper LRU hints so that the the maximum thrashing
> > > > +     capped at 1/assosiativity. */
> > > > +  unsigned long int non_temporal_threshold = shared / 2;
> > > >    /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
> > > >       'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
> > > >       if that operation cannot overflow. Minimum of 0x4040 (16448) because the
> > > > --
> > > > 2.34.1
> > > >
> > >
> > >
> > > --
> > > H.J.
>
>
>
> --
> H.J.
  

Patch

diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index ec88945b39..f25309dbc8 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -604,20 +604,11 @@  intel_bug_no_cache_info:
             = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16)
 	       & 0xff);
         }
-
-        /* Cap usage of highest cache level to the number of supported
-           threads.  */
-        if (shared > 0 && threads > 0)
-          shared /= threads;
     }
 
   /* Account for non-inclusive L2 and L3 caches.  */
   if (!inclusive_cache)
-    {
-      if (threads_l2 > 0)
-        core /= threads_l2;
-      shared += core;
-    }
+    shared += core;
 
   *shared_ptr = shared;
   *threads_ptr = threads;
@@ -730,17 +721,19 @@  dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 3/4 of one
-     thread's share of the chip's cache. For most Intel and AMD processors
-     with an initial release date between 2017 and 2020, a thread's typical
-     share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4
-     threshold leaves 125 KBytes to 500 KBytes of the thread's data
-     in cache after a maximum temporal copy, which will maintain
-     in cache a reasonable portion of the thread's stack and other
-     active data. If the threshold is set higher than one thread's
-     share of the cache, it has a substantial risk of negatively
-     impacting the performance of other threads running on the chip. */
-  unsigned long int non_temporal_threshold = shared * 3 / 4;
+  /* The default setting for the non_temporal threshold is 1/2 of size
+     of chip's cache. For most Intel and AMD processors with an
+     initial release date between 2017 and 2023, a thread's typical
+     share of the cache is from 18-64MB. Using the 1/2 L3 is meant to
+     estimate the point where non-temporal stores begin outcompeting
+     other methods. As well the point where the fact that non-temporal
+     stores are forced back to disk would already occured to the
+     majority of the lines in the copy. Note, concerns about the
+     entire L3 cache being evicted by the copy are mostly alleviated
+     by the fact that modern HW detects streaming patterns and
+     provides proper LRU hints so that the the maximum thrashing
+     capped at 1/assosiativity. */
+  unsigned long int non_temporal_threshold = shared / 2;
   /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of
      'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best
      if that operation cannot overflow. Minimum of 0x4040 (16448) because the
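
For context, the comment above (cut off here) describes a maximum of
SIZE_MAX >> 4 and a minimum of 0x4040 for the threshold. A minimal
sketch of how such clamping could look, assuming those two bounds (an
assumption, not the verbatim code that follows in dl-cacheinfo.h):

#include <stdint.h>

/* Clamp a computed non-temporal threshold into the documented range.
   Illustrative sketch only.  */
static unsigned long int
clamp_nt_threshold (unsigned long int non_temporal_threshold)
{
  if (non_temporal_threshold > (SIZE_MAX >> 4))
    non_temporal_threshold = SIZE_MAX >> 4;
  else if (non_temporal_threshold < 0x4040)
    non_temporal_threshold = 0x4040;
  return non_temporal_threshold;
}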