From patchwork Wed Jun 7 18:18:01 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 70751
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org, DJ Delorie, Carlos O'Donell
Subject: [PATCH v11 1/3] x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
Date: Wed, 7 Jun 2023 13:18:01 -0500
Message-Id: <20230607181803.4154764-1-goldstein.w.n@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

Current `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 / ncores_per_socket'. This patch updates that value to roughly 'sizeof_L3 / 4'.

The original value (specifically, dividing by `ncores_per_socket`) was chosen to limit the amount of other threads' data a `memcpy`/`memset` could evict. Dividing by 'ncores_per_socket', however, leads to exceedingly low non-temporal thresholds and to using non-temporal stores in cases where REP MOVSB is multiple times faster.

Furthermore, non-temporal stores are written directly to main memory, so using them at a size much smaller than L3 can place soon-to-be-accessed data much further away than it otherwise could be. As well, modern machines are able to detect streaming patterns (especially if REP MOVSB is used) and provide LRU hints to the memory subsystem. This in effect caps the total amount of eviction at 1/cache_associativity, far below meaningfully thrashing the entire cache.

As best I can tell, the benchmarks that led to this small threshold were done comparing non-temporal stores against standard cacheable stores. A better comparison (linked below) is against REP MOVSB which, on the measured systems, is nearly 2x faster than non-temporal stores at the low end of the previous threshold, and within 10% for copies over 100MB (well past even the current threshold). In cases with a low number of threads competing for bandwidth, REP MOVSB is ~2x faster up to `sizeof_L3`.

The divisor of `4` is a somewhat arbitrary value. From benchmarks it seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs such as Broadwell prefer something closer to `8`. This patch is meant to be followed up by another one making the divisor cpu-specific, but in the meantime (and for easier backporting), it settles on `4` as a middle ground.
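[Editorial aside, not part of the patch: the change described above can be summarized by the simplified C sketch below. The inputs `l3_size` and `ncores_per_socket` are hypothetical stand-ins for the values that sysdeps/x86/dl-cacheinfo.h derives from CPUID; clamping and user tunable overrides are omitted.]

#include <stdio.h>

/* Previous default: 3/4 of one core's share of the shared L3.  */
static unsigned long int
old_nt_threshold (unsigned long int l3_size, unsigned int ncores_per_socket)
{
  return (l3_size / ncores_per_socket) * 3 / 4;
}

/* New default: 1/4 of the whole shared L3, independent of core count.  */
static unsigned long int
new_nt_threshold (unsigned long int l3_size)
{
  return l3_size / 4;
}

int
main (void)
{
  /* Example: a 32 MiB L3 shared by 16 cores.  */
  unsigned long int l3 = 32UL << 20;
  printf ("old: %lu bytes\n", old_nt_threshold (l3, 16)); /* 1572864 (~1.5 MiB) */
  printf ("new: %lu bytes\n", new_nt_threshold (l3));     /* 8388608 (8 MiB) */
  return 0;
}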
Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable stores where done using: https://github.com/goldsteinn/memcpy-nt-benchmarks Sheets results (also available in pdf on the github): https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml Reviewed-by: DJ Delorie Reviewed-by: Carlos O'Donell --- sysdeps/x86/dl-cacheinfo.h | 70 +++++++++++++++++++++++--------------- 1 file changed, 43 insertions(+), 27 deletions(-) diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h index 877e73d700..3bd3b3ec1b 100644 --- a/sysdeps/x86/dl-cacheinfo.h +++ b/sysdeps/x86/dl-cacheinfo.h @@ -407,7 +407,7 @@ handle_zhaoxin (int name) } static void -get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr, +get_common_cache_info (long int *shared_ptr, long int * shared_per_thread_ptr, unsigned int *threads_ptr, long int core) { unsigned int eax; @@ -426,6 +426,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr, unsigned int family = cpu_features->basic.family; unsigned int model = cpu_features->basic.model; long int shared = *shared_ptr; + long int shared_per_thread = *shared_per_thread_ptr; unsigned int threads = *threads_ptr; bool inclusive_cache = true; bool support_count_mask = true; @@ -441,6 +442,7 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr, /* Try L2 otherwise. */ level = 2; shared = core; + shared_per_thread = core; threads_l2 = 0; threads_l3 = -1; } @@ -597,29 +599,28 @@ get_common_cache_info (long int *shared_ptr, unsigned int *threads_ptr, } else { -intel_bug_no_cache_info: - /* Assume that all logical threads share the highest cache - level. */ - threads - = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16) - & 0xff); - } - - /* Cap usage of highest cache level to the number of supported - threads. */ - if (shared > 0 && threads > 0) - shared /= threads; + intel_bug_no_cache_info: + /* Assume that all logical threads share the highest cache + level. */ + threads = ((cpu_features->features[CPUID_INDEX_1].cpuid.ebx >> 16) + & 0xff); + + /* Get per-thread size of highest level cache. */ + if (shared_per_thread > 0 && threads > 0) + shared_per_thread /= threads; + } } /* Account for non-inclusive L2 and L3 caches. */ if (!inclusive_cache) { if (threads_l2 > 0) - core /= threads_l2; + shared_per_thread += core / threads_l2; shared += core; } *shared_ptr = shared; + *shared_per_thread_ptr = shared_per_thread; *threads_ptr = threads; } @@ -629,6 +630,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) /* Find out what brand of processor. 
*/ long int data = -1; long int shared = -1; + long int shared_per_thread = -1; long int core = -1; unsigned int threads = 0; unsigned long int level1_icache_size = -1; @@ -649,6 +651,7 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) data = handle_intel (_SC_LEVEL1_DCACHE_SIZE, cpu_features); core = handle_intel (_SC_LEVEL2_CACHE_SIZE, cpu_features); shared = handle_intel (_SC_LEVEL3_CACHE_SIZE, cpu_features); + shared_per_thread = shared; level1_icache_size = handle_intel (_SC_LEVEL1_ICACHE_SIZE, cpu_features); @@ -672,13 +675,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) level4_cache_size = handle_intel (_SC_LEVEL4_CACHE_SIZE, cpu_features); - get_common_cache_info (&shared, &threads, core); + get_common_cache_info (&shared, &shared_per_thread, &threads, core); } else if (cpu_features->basic.kind == arch_kind_zhaoxin) { data = handle_zhaoxin (_SC_LEVEL1_DCACHE_SIZE); core = handle_zhaoxin (_SC_LEVEL2_CACHE_SIZE); shared = handle_zhaoxin (_SC_LEVEL3_CACHE_SIZE); + shared_per_thread = shared; level1_icache_size = handle_zhaoxin (_SC_LEVEL1_ICACHE_SIZE); level1_icache_linesize = handle_zhaoxin (_SC_LEVEL1_ICACHE_LINESIZE); @@ -692,13 +696,14 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) level3_cache_assoc = handle_zhaoxin (_SC_LEVEL3_CACHE_ASSOC); level3_cache_linesize = handle_zhaoxin (_SC_LEVEL3_CACHE_LINESIZE); - get_common_cache_info (&shared, &threads, core); + get_common_cache_info (&shared, &shared_per_thread, &threads, core); } else if (cpu_features->basic.kind == arch_kind_amd) { data = handle_amd (_SC_LEVEL1_DCACHE_SIZE); core = handle_amd (_SC_LEVEL2_CACHE_SIZE); shared = handle_amd (_SC_LEVEL3_CACHE_SIZE); + shared_per_thread = shared; level1_icache_size = handle_amd (_SC_LEVEL1_ICACHE_SIZE); level1_icache_linesize = handle_amd (_SC_LEVEL1_ICACHE_LINESIZE); @@ -715,6 +720,9 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) if (shared <= 0) /* No shared L3 cache. All we have is the L2 cache. */ shared = core; + + if (shared_per_thread <= 0) + shared_per_thread = shared; } cpu_features->level1_icache_size = level1_icache_size; @@ -730,17 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) cpu_features->level3_cache_linesize = level3_cache_linesize; cpu_features->level4_cache_size = level4_cache_size; - /* The default setting for the non_temporal threshold is 3/4 of one - thread's share of the chip's cache. For most Intel and AMD processors - with an initial release date between 2017 and 2020, a thread's typical - share of the cache is from 500 KBytes to 2 MBytes. Using the 3/4 - threshold leaves 125 KBytes to 500 KBytes of the thread's data - in cache after a maximum temporal copy, which will maintain - in cache a reasonable portion of the thread's stack and other - active data. If the threshold is set higher than one thread's - share of the cache, it has a substantial risk of negatively - impacting the performance of other threads running on the chip. */ - unsigned long int non_temporal_threshold = shared * 3 / 4; + /* The default setting for the non_temporal threshold is 1/4 of size + of the chip's cache. For most Intel and AMD processors with an + initial release date between 2017 and 2023, a thread's typical + share of the cache is from 18-64MB. Using the 1/4 L3 is meant to + estimate the point where non-temporal stores begin out-competing + REP MOVSB. As well the point where the fact that non-temporal + stores are forced back to main memory would already occurred to the + majority of the lines in the copy. 
Note, concerns about the + entire L3 cache being evicted by the copy are mostly alleviated + by the fact that modern HW detects streaming patterns and + provides proper LRU hints so that the maximum thrashing + capped at 1/associativity. */ + unsigned long int non_temporal_threshold = shared / 4; + /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run + a higher risk of actually thrashing the cache as they don't have a HW LRU + hint. As well, their performance in highly parallel situations is + noticeably worse. */ + if (!CPU_FEATURE_USABLE_P (cpu_features, ERMS)) + non_temporal_threshold = shared_per_thread * 3 / 4; /* SIZE_MAX >> 4 because memmove-vec-unaligned-erms right-shifts the value of 'x86_non_temporal_threshold' by `LOG_4X_MEMCPY_THRESH` (4) and it is best if that operation cannot overflow. Minimum of 0x4040 (16448) because the From patchwork Wed Jun 7 18:18:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Noah Goldstein X-Patchwork-Id: 70752 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id BA2513856976 for ; Wed, 7 Jun 2023 18:18:47 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org BA2513856976 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sourceware.org; s=default; t=1686161927; bh=whiwzI0MeMKHl3vnq5nMfl6okllvkyhwQ2me3N5G63I=; h=To:Cc:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=tYWOJHtGzH34+BgkfvXSuBBQGJAYZWAFnhOmeXl2uMZqZXJ/Umk5pcf2TqUB0sCqA y535fqD9uw8x5O3esMaspudbZYxyg8sZ5S+O86rjIDnn24Zy2wqWvZmw3XxnX+58NG MPG6HJ1UmXHGSoxX0kAlG7EIfGYOCzs4kKrO8CJo= X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com [IPv6:2a00:1450:4864:20::631]) by sourceware.org (Postfix) with ESMTPS id 5C3A0385842C for ; Wed, 7 Jun 2023 18:18:15 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 5C3A0385842C Received: by mail-ej1-x631.google.com with SMTP id a640c23a62f3a-977e83d536fso516286666b.3 for ; Wed, 07 Jun 2023 11:18:15 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686161893; x=1688753893; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=whiwzI0MeMKHl3vnq5nMfl6okllvkyhwQ2me3N5G63I=; b=LZHQqFsOUxVQEYHkZspYq5iVNxW2N3Rm+NiDRwEI9MMT8YG324X4gUwcg5jx4EpZwo Nq3PLk0qc3T3Fu72pH0zkYgJghuuI1IgxtD+RXjlC8RnhWWuzHzk9IbNe4mdahDxM522 w3m2b/v8w7VyuyEkbEdM2DNg3n3ftmW8O0j5B0XxAkRIY7T90GRhYKrK5W/PVnMcukyt fS7XMlbg3HoySbTap2WZiBBeVWRkwr2ft7uqKadw3XhKcLdSJ/PK9s+cg0xs8tcInuBd 8VLmrm/K0WsMwihh4P1+K/PbFbvEcxiIHmRcRsqvn9/Qw1N0LBLlErSHWNPntnwH1G+9 gpOQ== X-Gm-Message-State: AC+VfDxArk7c3AorWHFpNFx7ORGmZE5qjyEAAroxfwfX7SpTzS5Zvskh WOab56SbhoWwvi4t65RFgX87UHUEh9c= X-Google-Smtp-Source: ACHHUZ6rOJAZ0sIoL5+tUzVdLgo53RE2f92bP9ReyhJniDjUZ7AcLfiZujmbdLYo43ks8M8ZzJ/8CA== X-Received: by 2002:a17:907:1c20:b0:977:d020:53d6 with SMTP id nc32-20020a1709071c2000b00977d02053d6mr7369965ejc.44.1686161893245; Wed, 07 Jun 2023 11:18:13 -0700 (PDT) Received: from noahgold-desk.intel.com (2603-8080-1301-76c6-bbb0-ef3c-a689-4ab7.res6.spectrum.com. 
[2603:8080:1301:76c6:bbb0:ef3c:a689:4ab7]) by smtp.gmail.com with ESMTPSA id i17-20020a170906851100b009746023de34sm7162985ejx.150.2023.06.07.11.18.10 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 07 Jun 2023 11:18:12 -0700 (PDT) To: libc-alpha@sourceware.org Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org, DJ Delorie Subject: [PATCH v11 2/3] x86: Refactor Intel `init_cpu_features` Date: Wed, 7 Jun 2023 13:18:02 -0500 Message-Id: <20230607181803.4154764-2-goldstein.w.n@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230607181803.4154764-1-goldstein.w.n@gmail.com> References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230607181803.4154764-1-goldstein.w.n@gmail.com> MIME-Version: 1.0 X-Spam-Status: No, score=-12.1 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, FREEMAIL_FROM, GIT_PATCH_0, RCVD_IN_DNSWL_NONE, SPF_HELO_NONE, SPF_PASS, TXREP, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Noah Goldstein via Libc-alpha From: Noah Goldstein Reply-To: Noah Goldstein Errors-To: libc-alpha-bounces+patchwork=sourceware.org@sourceware.org Sender: "Libc-alpha" This patch should have no affect on existing functionality. The current code, which has a single switch for model detection and setting prefered features, is difficult to follow/extend. The cases use magic numbers and many microarchitectures are missing. This makes it difficult to reason about what is implemented so far and/or how/where to add support for new features. This patch splits the model detection and preference setting stages so that CPU preferences can be set based on a complete list of available microarchitectures, rather than based on model magic numbers. Reviewed-by: DJ Delorie --- sysdeps/x86/cpu-features.c | 390 +++++++++++++++++++++++++++++-------- 1 file changed, 309 insertions(+), 81 deletions(-) diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c index 0a99efdb28..d52a718e92 100644 --- a/sysdeps/x86/cpu-features.c +++ b/sysdeps/x86/cpu-features.c @@ -417,6 +417,216 @@ _Static_assert (((index_arch_Fast_Unaligned_Load == index_arch_Fast_Copy_Backward)), "Incorrect index_arch_Fast_Unaligned_Load"); + +/* Intel Family-6 microarch list. */ +enum +{ + /* Atom processors. */ + INTEL_ATOM_BONNELL, + INTEL_ATOM_SILVERMONT, + INTEL_ATOM_AIRMONT, + INTEL_ATOM_GOLDMONT, + INTEL_ATOM_GOLDMONT_PLUS, + INTEL_ATOM_SIERRAFOREST, + INTEL_ATOM_GRANDRIDGE, + INTEL_ATOM_TREMONT, + + /* Bigcore processors. */ + INTEL_BIGCORE_MEROM, + INTEL_BIGCORE_PENRYN, + INTEL_BIGCORE_DUNNINGTON, + INTEL_BIGCORE_NEHALEM, + INTEL_BIGCORE_WESTMERE, + INTEL_BIGCORE_SANDYBRIDGE, + INTEL_BIGCORE_IVYBRIDGE, + INTEL_BIGCORE_HASWELL, + INTEL_BIGCORE_BROADWELL, + INTEL_BIGCORE_SKYLAKE, + INTEL_BIGCORE_KABYLAKE, + INTEL_BIGCORE_COMETLAKE, + INTEL_BIGCORE_SKYLAKE_AVX512, + INTEL_BIGCORE_CANNONLAKE, + INTEL_BIGCORE_ICELAKE, + INTEL_BIGCORE_TIGERLAKE, + INTEL_BIGCORE_ROCKETLAKE, + INTEL_BIGCORE_SAPPHIRERAPIDS, + INTEL_BIGCORE_RAPTORLAKE, + INTEL_BIGCORE_EMERALDRAPIDS, + INTEL_BIGCORE_METEORLAKE, + INTEL_BIGCORE_LUNARLAKE, + INTEL_BIGCORE_ARROWLAKE, + INTEL_BIGCORE_GRANITERAPIDS, + + /* Mixed (bigcore + atom SOC). 
*/ + INTEL_MIXED_LAKEFIELD, + INTEL_MIXED_ALDERLAKE, + + /* KNL. */ + INTEL_KNIGHTS_MILL, + INTEL_KNIGHTS_LANDING, + + /* Unknown. */ + INTEL_UNKNOWN, +}; + +static unsigned int +intel_get_fam6_microarch (unsigned int model, + __attribute__ ((unused)) unsigned int stepping) +{ + switch (model) + { + case 0x1C: + case 0x26: + return INTEL_ATOM_BONNELL; + case 0x27: + case 0x35: + case 0x36: + /* Really Saltwell, but Saltwell is just a die shrink of Bonnell + (microarchitecturally identical). */ + return INTEL_ATOM_BONNELL; + case 0x37: + case 0x4A: + case 0x4D: + case 0x5D: + return INTEL_ATOM_SILVERMONT; + case 0x4C: + case 0x5A: + case 0x75: + return INTEL_ATOM_AIRMONT; + case 0x5C: + case 0x5F: + return INTEL_ATOM_GOLDMONT; + case 0x7A: + return INTEL_ATOM_GOLDMONT_PLUS; + case 0xAF: + return INTEL_ATOM_SIERRAFOREST; + case 0xB6: + return INTEL_ATOM_GRANDRIDGE; + case 0x86: + case 0x96: + case 0x9C: + return INTEL_ATOM_TREMONT; + case 0x0F: + case 0x16: + return INTEL_BIGCORE_MEROM; + case 0x17: + return INTEL_BIGCORE_PENRYN; + case 0x1D: + return INTEL_BIGCORE_DUNNINGTON; + case 0x1A: + case 0x1E: + case 0x1F: + case 0x2E: + return INTEL_BIGCORE_NEHALEM; + case 0x25: + case 0x2C: + case 0x2F: + return INTEL_BIGCORE_WESTMERE; + case 0x2A: + case 0x2D: + return INTEL_BIGCORE_SANDYBRIDGE; + case 0x3A: + case 0x3E: + return INTEL_BIGCORE_IVYBRIDGE; + case 0x3C: + case 0x3F: + case 0x45: + case 0x46: + return INTEL_BIGCORE_HASWELL; + case 0x3D: + case 0x47: + case 0x4F: + case 0x56: + return INTEL_BIGCORE_BROADWELL; + case 0x4E: + case 0x5E: + return INTEL_BIGCORE_SKYLAKE; + case 0x8E: + /* + Stepping = {9} + -> Amberlake + Stepping = {10} + -> Coffeelake + Stepping = {11, 12} + -> Whiskeylake + else + -> Kabylake + + All of these are derivatives of Kabylake (Skylake client). + */ + return INTEL_BIGCORE_KABYLAKE; + case 0x9E: + /* + Stepping = {10, 11, 12, 13} + -> Coffeelake + else + -> Kabylake + + Coffeelake is a derivatives of Kabylake (Skylake client). + */ + return INTEL_BIGCORE_KABYLAKE; + case 0xA5: + case 0xA6: + return INTEL_BIGCORE_COMETLAKE; + case 0x66: + return INTEL_BIGCORE_CANNONLAKE; + case 0x55: + /* + Stepping = {6, 7} + -> Cascadelake + Stepping = {11} + -> Cooperlake + else + -> Skylake-avx512 + + These are all microarchitecturally indentical, so use + Skylake-avx512 for all of them. 
+ */ + return INTEL_BIGCORE_SKYLAKE_AVX512; + case 0x6A: + case 0x6C: + case 0x7D: + case 0x7E: + case 0x9D: + return INTEL_BIGCORE_ICELAKE; + case 0x8C: + case 0x8D: + return INTEL_BIGCORE_TIGERLAKE; + case 0xA7: + return INTEL_BIGCORE_ROCKETLAKE; + case 0x8F: + return INTEL_BIGCORE_SAPPHIRERAPIDS; + case 0xB7: + case 0xBA: + case 0xBF: + return INTEL_BIGCORE_RAPTORLAKE; + case 0xCF: + return INTEL_BIGCORE_EMERALDRAPIDS; + case 0xAA: + case 0xAC: + return INTEL_BIGCORE_METEORLAKE; + case 0xbd: + return INTEL_BIGCORE_LUNARLAKE; + case 0xc6: + return INTEL_BIGCORE_ARROWLAKE; + case 0xAD: + case 0xAE: + return INTEL_BIGCORE_GRANITERAPIDS; + case 0x8A: + return INTEL_MIXED_LAKEFIELD; + case 0x97: + case 0x9A: + case 0xBE: + return INTEL_MIXED_ALDERLAKE; + case 0x85: + return INTEL_KNIGHTS_MILL; + case 0x57: + return INTEL_KNIGHTS_LANDING; + default: + return INTEL_UNKNOWN; + } +} + static inline void init_cpu_features (struct cpu_features *cpu_features) { @@ -453,129 +663,147 @@ init_cpu_features (struct cpu_features *cpu_features) if (family == 0x06) { model += extended_model; - switch (model) + unsigned int microarch + = intel_get_fam6_microarch (model, stepping); + + switch (microarch) { - case 0x1c: - case 0x26: - /* BSF is slow on Atom. */ + /* Atom / KNL tuning. */ + case INTEL_ATOM_BONNELL: + /* BSF is slow on Bonnell. */ cpu_features->preferred[index_arch_Slow_BSF] - |= bit_arch_Slow_BSF; + |= bit_arch_Slow_BSF; break; - case 0x57: - /* Knights Landing. Enable Silvermont optimizations. */ - - case 0x7a: - /* Unaligned load versions are faster than SSSE3 - on Goldmont Plus. */ - - case 0x5c: - case 0x5f: /* Unaligned load versions are faster than SSSE3 - on Goldmont. */ + on Airmont, Silvermont, Goldmont, and Goldmont Plus. */ + case INTEL_ATOM_AIRMONT: + case INTEL_ATOM_SILVERMONT: + case INTEL_ATOM_GOLDMONT: + case INTEL_ATOM_GOLDMONT_PLUS: - case 0x4c: - case 0x5a: - case 0x75: - /* Airmont is a die shrink of Silvermont. */ + /* Knights Landing. Enable Silvermont optimizations. */ + case INTEL_KNIGHTS_LANDING: - case 0x37: - case 0x4a: - case 0x4d: - case 0x5d: - /* Unaligned load versions are faster than SSSE3 - on Silvermont. */ cpu_features->preferred[index_arch_Fast_Unaligned_Load] - |= (bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop - | bit_arch_Slow_SSE4_2); + |= (bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop + | bit_arch_Slow_SSE4_2); break; - case 0x86: - case 0x96: - case 0x9c: + case INTEL_ATOM_TREMONT: /* Enable rep string instructions, unaligned load, unaligned - copy, pminub and avoid SSE 4.2 on Tremont. */ + copy, pminub and avoid SSE 4.2 on Tremont. */ cpu_features->preferred[index_arch_Fast_Rep_String] - |= (bit_arch_Fast_Rep_String - | bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop - | bit_arch_Slow_SSE4_2); + |= (bit_arch_Fast_Rep_String + | bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop + | bit_arch_Slow_SSE4_2); break; + /* + Default tuned Knights microarch. + case INTEL_KNIGHTS_MILL: + */ + + /* + Default tuned atom microarch. + case INTEL_ATOM_SIERRAFOREST: + case INTEL_ATOM_GRANDRIDGE: + */ + + /* Bigcore/Default Tuning. */ default: /* Unknown family 0x06 processors. Assuming this is one of Core i3/i5/i7 processors if AVX is available. */ if (!CPU_FEATURES_CPU_P (cpu_features, AVX)) break; /* Fall through. 
*/ - - case 0x1a: - case 0x1e: - case 0x1f: - case 0x25: - case 0x2c: - case 0x2e: - case 0x2f: + case INTEL_BIGCORE_NEHALEM: + case INTEL_BIGCORE_WESTMERE: /* Rep string instructions, unaligned load, unaligned copy, and pminub are fast on Intel Core i3, i5 and i7. */ cpu_features->preferred[index_arch_Fast_Rep_String] - |= (bit_arch_Fast_Rep_String - | bit_arch_Fast_Unaligned_Load - | bit_arch_Fast_Unaligned_Copy - | bit_arch_Prefer_PMINUB_for_stringop); + |= (bit_arch_Fast_Rep_String + | bit_arch_Fast_Unaligned_Load + | bit_arch_Fast_Unaligned_Copy + | bit_arch_Prefer_PMINUB_for_stringop); break; + + /* + Default tuned Bigcore microarch. + case INTEL_BIGCORE_SANDYBRIDGE: + case INTEL_BIGCORE_IVYBRIDGE: + case INTEL_BIGCORE_HASWELL: + case INTEL_BIGCORE_BROADWELL: + case INTEL_BIGCORE_SKYLAKE: + case INTEL_BIGCORE_KABYLAKE: + case INTEL_BIGCORE_COMETLAKE: + case INTEL_BIGCORE_SKYLAKE_AVX512: + case INTEL_BIGCORE_CANNONLAKE: + case INTEL_BIGCORE_ICELAKE: + case INTEL_BIGCORE_TIGERLAKE: + case INTEL_BIGCORE_ROCKETLAKE: + case INTEL_BIGCORE_RAPTORLAKE: + case INTEL_BIGCORE_METEORLAKE: + case INTEL_BIGCORE_LUNARLAKE: + case INTEL_BIGCORE_ARROWLAKE: + case INTEL_BIGCORE_SAPPHIRERAPIDS: + case INTEL_BIGCORE_EMERALDRAPIDS: + case INTEL_BIGCORE_GRANITERAPIDS: + */ + + /* + Default tuned Mixed (bigcore + atom SOC). + case INTEL_MIXED_LAKEFIELD: + case INTEL_MIXED_ALDERLAKE: + */ } - /* Disable TSX on some processors to avoid TSX on kernels that - weren't updated with the latest microcode package (which - disables broken feature by default). */ - switch (model) + /* Disable TSX on some processors to avoid TSX on kernels that + weren't updated with the latest microcode package (which + disables broken feature by default). */ + switch (microarch) { - case 0x55: + case INTEL_BIGCORE_SKYLAKE_AVX512: + /* 0x55 (Skylake-avx512) && stepping <= 5 disable TSX. */ if (stepping <= 5) goto disable_tsx; break; - case 0x8e: - /* NB: Although the errata documents that for model == 0x8e, - only 0xb stepping or lower are impacted, the intention of - the errata was to disable TSX on all client processors on - all steppings. Include 0xc stepping which is an Intel - Core i7-8665U, a client mobile processor. */ - case 0x9e: + + case INTEL_BIGCORE_KABYLAKE: + /* NB: Although the errata documents that for model == 0x8e + (kabylake skylake client), only 0xb stepping or lower are + impacted, the intention of the errata was to disable TSX on + all client processors on all steppings. Include 0xc + stepping which is an Intel Core i7-8665U, a client mobile + processor. */ if (stepping > 0xc) break; /* Fall through. */ - case 0x4e: - case 0x5e: - { + case INTEL_BIGCORE_SKYLAKE: /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for processors listed in: https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html */ -disable_tsx: + disable_tsx: CPU_FEATURE_UNSET (cpu_features, HLE); CPU_FEATURE_UNSET (cpu_features, RTM); CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT); - } - break; - case 0x3f: - /* Xeon E7 v3 with stepping >= 4 has working TSX. */ - if (stepping >= 4) break; - /* Fall through. */ - case 0x3c: - case 0x45: - case 0x46: - /* Disable Intel TSX on Haswell processors (except Xeon E7 v3 - with stepping >= 4) to avoid TSX on kernels that weren't - updated with the latest microcode package (which disables - broken feature by default). */ - CPU_FEATURE_UNSET (cpu_features, RTM); - break; + + case INTEL_BIGCORE_HASWELL: + /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working + TSX. 
Haswell also include other model numbers that have + working TSX. */ + if (model == 0x3f && stepping >= 4) + break; + + CPU_FEATURE_UNSET (cpu_features, RTM); + break; } } From patchwork Wed Jun 7 18:18:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Noah Goldstein X-Patchwork-Id: 70753 Return-Path: X-Original-To: patchwork@sourceware.org Delivered-To: patchwork@sourceware.org Received: from server2.sourceware.org (localhost [IPv6:::1]) by sourceware.org (Postfix) with ESMTP id BD2E13857437 for ; Wed, 7 Jun 2023 18:19:03 +0000 (GMT) DKIM-Filter: OpenDKIM Filter v2.11.0 sourceware.org BD2E13857437 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sourceware.org; s=default; t=1686161943; bh=E1yLFwiJSZ82Lz66O09DEs3TSspjTFhO5R7XkCkLEec=; h=To:Cc:Subject:Date:In-Reply-To:References:List-Id: List-Unsubscribe:List-Archive:List-Post:List-Help:List-Subscribe: From:Reply-To:From; b=SOearxJuyt2fakP04Zg/fdFPOlgSTFeq/y5VuC87izFU0A+lLjQQclFnTdnDC6tKv +Dvp+hYeAoYpZhtbRBnLV7VutcLlGi7RYAUTM5JZkZHkDGrOPUft7jhZFL5gvrFsqa 0HuqO4E2W1AcxMc7Med9Pj20E8Rj+AlcaMTL4vdM= X-Original-To: libc-alpha@sourceware.org Delivered-To: libc-alpha@sourceware.org Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com [IPv6:2a00:1450:4864:20::633]) by sourceware.org (Postfix) with ESMTPS id 8A4363858296 for ; Wed, 7 Jun 2023 18:18:19 +0000 (GMT) DMARC-Filter: OpenDMARC Filter v1.4.2 sourceware.org 8A4363858296 Received: by mail-ej1-x633.google.com with SMTP id a640c23a62f3a-9786c67ec32so401408966b.1 for ; Wed, 07 Jun 2023 11:18:19 -0700 (PDT) X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20221208; t=1686161897; x=1688753897; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=E1yLFwiJSZ82Lz66O09DEs3TSspjTFhO5R7XkCkLEec=; b=SaTtxSIRc+KNT0YlH2v3wBChzfdXu6TnafNl2Vhp1LZUytUIxdb/snTFtsXLpEB3if rUp44cZ5RIb/pFvB3Pt+eBYQQcA9TbnamyPRbrBe3i5GjOcAbhGLJP0AZXbSuCC6Jd3b N2JMgiq1YB4/nrk4cBNiF7gR8I7PjsGtTdH5KsSKg+EW0jN+TObGxajET1SYNNMvpYN0 vbzYxcrUS2TMvk4j40CvWxvZdi1ymKwZfh5VFBPH/kw6w56Drm/npF9RsZsjDpMtAlTH RedZeJV/euNBOLTcmYO4c1CAb9Fwynpr0v0833BoPznI88i12rwzpeggFaXUP46JekiM hdBw== X-Gm-Message-State: AC+VfDw8UxX9oSwJ7zI5SdtsWkEtC3AteDVLYnGaAdckQMJXsVRIep+B uK64NHHyzpniT063NqyigGAWnCBRMs4= X-Google-Smtp-Source: ACHHUZ411oOciekfOD/jd9vGgLbzGFd/gxJNgywTOqttKcLnF+nCWKiiVXAc+1tBJoXuQObP8ToGdw== X-Received: by 2002:a17:907:c11:b0:974:571f:8d0f with SMTP id ga17-20020a1709070c1100b00974571f8d0fmr6928286ejc.60.1686161897236; Wed, 07 Jun 2023 11:18:17 -0700 (PDT) Received: from noahgold-desk.intel.com (2603-8080-1301-76c6-bbb0-ef3c-a689-4ab7.res6.spectrum.com. 
[2603:8080:1301:76c6:bbb0:ef3c:a689:4ab7]) by smtp.gmail.com with ESMTPSA id i17-20020a170906851100b009746023de34sm7162985ejx.150.2023.06.07.11.18.14 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 07 Jun 2023 11:18:16 -0700 (PDT) To: libc-alpha@sourceware.org Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org, DJ Delorie Subject: [PATCH v11 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific Date: Wed, 7 Jun 2023 13:18:03 -0500 Message-Id: <20230607181803.4154764-3-goldstein.w.n@gmail.com> X-Mailer: git-send-email 2.34.1 In-Reply-To: <20230607181803.4154764-1-goldstein.w.n@gmail.com> References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230607181803.4154764-1-goldstein.w.n@gmail.com> MIME-Version: 1.0 X-Spam-Status: No, score=-12.1 required=5.0 tests=BAYES_00, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, DKIM_VALID_EF, FREEMAIL_FROM, GIT_PATCH_0, RCVD_IN_DNSWL_NONE, SPF_HELO_NONE, SPF_PASS, TXREP, T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on server2.sourceware.org X-BeenThere: libc-alpha@sourceware.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: Libc-alpha mailing list List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-Patchwork-Original-From: Noah Goldstein via Libc-alpha From: Noah Goldstein Reply-To: Noah Goldstein Errors-To: libc-alpha-bounces+patchwork=sourceware.org@sourceware.org Sender: "Libc-alpha" Different systems prefer a different divisors. From benchmarks[1] so far the following divisors have been found: ICX : 2 SKX : 2 BWD : 8 For Intel, we are generalizing that BWD and older prefers 8 as a divisor, and SKL and newer prefers 2. This number can be further tuned as benchmarks are run. [1]: https://github.com/goldsteinn/memcpy-nt-benchmarks Reviewed-by: DJ Delorie --- sysdeps/x86/cpu-features.c | 31 ++++++++++++++++++++--------- sysdeps/x86/dl-cacheinfo.h | 32 ++++++++++++++++++------------ sysdeps/x86/dl-diagnostics-cpu.c | 11 ++++++---- sysdeps/x86/include/cpu-features.h | 3 +++ 4 files changed, 51 insertions(+), 26 deletions(-) diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c index d52a718e92..525828f59c 100644 --- a/sysdeps/x86/cpu-features.c +++ b/sysdeps/x86/cpu-features.c @@ -636,6 +636,7 @@ init_cpu_features (struct cpu_features *cpu_features) unsigned int stepping = 0; enum cpu_features_kind kind; + cpu_features->cachesize_non_temporal_divisor = 4; #if !HAS_CPUID if (__get_cpuid_max (0, 0) == 0) { @@ -716,13 +717,13 @@ init_cpu_features (struct cpu_features *cpu_features) /* Bigcore/Default Tuning. */ default: + default_tuning: /* Unknown family 0x06 processors. Assuming this is one of Core i3/i5/i7 processors if AVX is available. */ if (!CPU_FEATURES_CPU_P (cpu_features, AVX)) break; - /* Fall through. */ - case INTEL_BIGCORE_NEHALEM: - case INTEL_BIGCORE_WESTMERE: + + enable_modern_features: /* Rep string instructions, unaligned load, unaligned copy, and pminub are fast on Intel Core i3, i5 and i7. */ cpu_features->preferred[index_arch_Fast_Rep_String] @@ -732,12 +733,23 @@ init_cpu_features (struct cpu_features *cpu_features) | bit_arch_Prefer_PMINUB_for_stringop); break; - /* - Default tuned Bigcore microarch. + case INTEL_BIGCORE_NEHALEM: + case INTEL_BIGCORE_WESTMERE: + /* Older CPUs prefer non-temporal stores at lower threshold. 
*/ + cpu_features->cachesize_non_temporal_divisor = 8; + goto enable_modern_features; + + /* Older Bigcore microarch (smaller non-temporal store + threshold). */ case INTEL_BIGCORE_SANDYBRIDGE: case INTEL_BIGCORE_IVYBRIDGE: case INTEL_BIGCORE_HASWELL: case INTEL_BIGCORE_BROADWELL: + cpu_features->cachesize_non_temporal_divisor = 8; + goto default_tuning; + + /* Newer Bigcore microarch (larger non-temporal store + threshold). */ case INTEL_BIGCORE_SKYLAKE: case INTEL_BIGCORE_KABYLAKE: case INTEL_BIGCORE_COMETLAKE: @@ -753,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features) case INTEL_BIGCORE_SAPPHIRERAPIDS: case INTEL_BIGCORE_EMERALDRAPIDS: case INTEL_BIGCORE_GRANITERAPIDS: - */ + cpu_features->cachesize_non_temporal_divisor = 2; + goto default_tuning; - /* - Default tuned Mixed (bigcore + atom SOC). + /* Default tuned Mixed (bigcore + atom SOC). */ case INTEL_MIXED_LAKEFIELD: case INTEL_MIXED_ALDERLAKE: - */ + cpu_features->cachesize_non_temporal_divisor = 2; + goto default_tuning; } /* Disable TSX on some processors to avoid TSX on kernels that diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h index 3bd3b3ec1b..fb1a6cf4a9 100644 --- a/sysdeps/x86/dl-cacheinfo.h +++ b/sysdeps/x86/dl-cacheinfo.h @@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features) cpu_features->level3_cache_linesize = level3_cache_linesize; cpu_features->level4_cache_size = level4_cache_size; - /* The default setting for the non_temporal threshold is 1/4 of size - of the chip's cache. For most Intel and AMD processors with an - initial release date between 2017 and 2023, a thread's typical - share of the cache is from 18-64MB. Using the 1/4 L3 is meant to - estimate the point where non-temporal stores begin out-competing - REP MOVSB. As well the point where the fact that non-temporal - stores are forced back to main memory would already occurred to the - majority of the lines in the copy. Note, concerns about the - entire L3 cache being evicted by the copy are mostly alleviated - by the fact that modern HW detects streaming patterns and - provides proper LRU hints so that the maximum thrashing - capped at 1/associativity. */ - unsigned long int non_temporal_threshold = shared / 4; + unsigned long int cachesize_non_temporal_divisor + = cpu_features->cachesize_non_temporal_divisor; + if (cachesize_non_temporal_divisor <= 0) + cachesize_non_temporal_divisor = 4; + + /* The default setting for the non_temporal threshold is [1/8, 1/2] of size + of the chip's cache (depending on `cachesize_non_temporal_divisor` which + is microarch specific. The defeault is 1/4). For most Intel and AMD + processors with an initial release date between 2017 and 2023, a thread's + typical share of the cache is from 18-64MB. Using a reasonable size + fraction of L3 is meant to estimate the point where non-temporal stores + begin out-competing REP MOVSB. As well the point where the fact that + non-temporal stores are forced back to main memory would already occurred + to the majority of the lines in the copy. Note, concerns about the entire + L3 cache being evicted by the copy are mostly alleviated by the fact that + modern HW detects streaming patterns and provides proper LRU hints so that + the maximum thrashing capped at 1/associativity. */ + unsigned long int non_temporal_threshold + = shared / cachesize_non_temporal_divisor; /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run a higher risk of actually thrashing the cache as they don't have a HW LRU hint. 
As well, their performance in highly parallel situations is diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c index a1578e4665..5aab63e532 100644 --- a/sysdeps/x86/dl-diagnostics-cpu.c +++ b/sysdeps/x86/dl-diagnostics-cpu.c @@ -113,8 +113,11 @@ _dl_diagnostics_cpu (void) cpu_features->level3_cache_linesize); print_cpu_features_value ("level4_cache_size", cpu_features->level4_cache_size); - _Static_assert (offsetof (struct cpu_features, level4_cache_size) - + sizeof (cpu_features->level4_cache_size) - == sizeof (*cpu_features), - "last cpu_features field has been printed"); + print_cpu_features_value ("cachesize_non_temporal_divisor", + cpu_features->cachesize_non_temporal_divisor); + _Static_assert ( + offsetof (struct cpu_features, cachesize_non_temporal_divisor) + + sizeof (cpu_features->cachesize_non_temporal_divisor) + == sizeof (*cpu_features), + "last cpu_features field has been printed"); } diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h index 40b8129d6a..c740e1a5fc 100644 --- a/sysdeps/x86/include/cpu-features.h +++ b/sysdeps/x86/include/cpu-features.h @@ -945,6 +945,9 @@ struct cpu_features unsigned long int level3_cache_linesize; /* /_SC_LEVEL4_CACHE_SIZE. */ unsigned long int level4_cache_size; + /* When no user non_temporal_threshold is specified. We default to + cachesize / cachesize_non_temporal_divisor. */ + unsigned long int cachesize_non_temporal_divisor; }; /* Get a pointer to the CPU features structure. */
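[Editorial aside, not part of the patch series: taken together, patches 1/3 and 3/3 make the default threshold selection look roughly like the sketch below. The parameters `shared`, `shared_per_thread`, `divisor`, and `has_erms` stand in for the corresponding `cpu_features` fields; the SIZE_MAX >> 4 clamp, the 0x4040 minimum, and any user tunable override applied in dl_init_cacheinfo are omitted.]

static unsigned long int
default_nt_threshold (unsigned long int shared,
                      unsigned long int shared_per_thread,
                      unsigned long int divisor, int has_erms)
{
  if (divisor == 0)
    /* Matches the fallback when no microarch-specific divisor was set.  */
    divisor = 4;

  if (!has_erms)
    /* Without ERMS (REP MOVSB), keep the conservative per-thread share:
       cacheable stores carry no streaming/LRU hint, so evicting other
       threads' data is a bigger concern.  */
    return shared_per_thread * 3 / 4;

  /* With ERMS, take a microarch-dependent fraction of the whole L3:
     divisor 2 on e.g. Skylake/Icelake, 8 on e.g. Broadwell and older,
     4 as the generic default.  */
  return shared / divisor;
}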