From patchwork Sat May 27 18:46:32 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 70188
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v10 3/3] x86: Make the divisor in setting `non_temporal_threshold` cpu specific
Date: Sat, 27 May 2023 13:46:32 -0500
Message-Id: <20230527184632.694761-3-goldstein.w.n@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230527184632.694761-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com>
 <20230527184632.694761-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

Different systems prefer different divisors.

From benchmarks[1] so far, the following divisors have been found:
    ICX : 2
    SKX : 2
    BWD : 8

For Intel, we are generalizing that BWD and older prefer 8 as a
divisor, and SKL and newer prefer 2.  This number can be further tuned
as benchmarks are run.

[1]: https://github.com/goldsteinn/memcpy-nt-benchmarks
Reviewed-by: DJ Delorie
---
 sysdeps/x86/cpu-features.c         | 31 ++++++++++++++++++++---------
 sysdeps/x86/dl-cacheinfo.h         | 32 ++++++++++++++++++------------
 sysdeps/x86/dl-diagnostics-cpu.c   | 11 ++++++----
 sysdeps/x86/include/cpu-features.h |  3 +++
 4 files changed, 51 insertions(+), 26 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 1b6e00c88f..325ec2b825 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -636,6 +636,7 @@ init_cpu_features (struct cpu_features *cpu_features)
   unsigned int stepping = 0;
   enum cpu_features_kind kind;
 
+  cpu_features->cachesize_non_temporal_divisor = 4;
 #if !HAS_CPUID
   if (__get_cpuid_max (0, 0) == 0)
     {
@@ -716,13 +717,13 @@ init_cpu_features (struct cpu_features *cpu_features)
 
           /* Bigcore/Default Tuning.  */
         default:
+        default_tuning:
           /* Unknown family 0x06 processors.  Assuming this is one of Core
              i3/i5/i7 processors if AVX is available.  */
           if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
             break;
-          /* Fall through.  */
-        case INTEL_BIGCORE_NEHALEM:
-        case INTEL_BIGCORE_WESTMERE:
+
+        enable_modern_features:
           /* Rep string instructions, unaligned load, unaligned copy,
              and pminub are fast on Intel Core i3, i5 and i7.  */
           cpu_features->preferred[index_arch_Fast_Rep_String]
@@ -732,12 +733,23 @@ init_cpu_features (struct cpu_features *cpu_features)
                  | bit_arch_Prefer_PMINUB_for_stringop);
           break;
 
-        /*
-         Default tuned Bigcore microarch.
+        case INTEL_BIGCORE_NEHALEM:
+        case INTEL_BIGCORE_WESTMERE:
+          /* Older CPUs prefer non-temporal stores at lower threshold.  */
+          cpu_features->cachesize_non_temporal_divisor = 8;
+          goto enable_modern_features;
+
+          /* Older Bigcore microarch (smaller non-temporal store
+             threshold).  */
         case INTEL_BIGCORE_SANDYBRIDGE:
         case INTEL_BIGCORE_IVYBRIDGE:
         case INTEL_BIGCORE_HASWELL:
         case INTEL_BIGCORE_BROADWELL:
+          cpu_features->cachesize_non_temporal_divisor = 8;
+          goto default_tuning;
+
+          /* Newer Bigcore microarch (larger non-temporal store
+             threshold).  */
         case INTEL_BIGCORE_SKYLAKE:
         case INTEL_BIGCORE_KABYLAKE:
         case INTEL_BIGCORE_COMETLAKE:
@@ -753,13 +765,14 @@ init_cpu_features (struct cpu_features *cpu_features)
         case INTEL_BIGCORE_SAPPHIRERAPIDS:
         case INTEL_BIGCORE_EMERALDRAPIDS:
         case INTEL_BIGCORE_GRANITERAPIDS:
-        */
+          cpu_features->cachesize_non_temporal_divisor = 2;
+          goto default_tuning;
 
-        /*
-         Default tuned Mixed (bigcore + atom SOC).
+          /* Default tuned Mixed (bigcore + atom SOC).  */
         case INTEL_MIXED_LAKEFIELD:
         case INTEL_MIXED_ALDERLAKE:
-        */
+          cpu_features->cachesize_non_temporal_divisor = 2;
+          goto default_tuning;
         }
 
       /* Disable TSX on some processors to avoid TSX on kernels that
diff --git a/sysdeps/x86/dl-cacheinfo.h b/sysdeps/x86/dl-cacheinfo.h
index 4a1a5423ff..8292a4a50d 100644
--- a/sysdeps/x86/dl-cacheinfo.h
+++ b/sysdeps/x86/dl-cacheinfo.h
@@ -738,19 +738,25 @@ dl_init_cacheinfo (struct cpu_features *cpu_features)
   cpu_features->level3_cache_linesize = level3_cache_linesize;
   cpu_features->level4_cache_size = level4_cache_size;
 
-  /* The default setting for the non_temporal threshold is 1/4 of size
-     of the chip's cache. For most Intel and AMD processors with an
-     initial release date between 2017 and 2023, a thread's typical
-     share of the cache is from 18-64MB. Using the 1/4 L3 is meant to
-     estimate the point where non-temporal stores begin outcompeting
-     REP MOVSB. As well the point where the fact that non-temporal
-     stores are forced back to main memory would already occurred to the
-     majority of the lines in the copy. Note, concerns about the
-     entire L3 cache being evicted by the copy are mostly alleviated
-     by the fact that modern HW detects streaming patterns and
-     provides proper LRU hints so that the maximum thrashing
-     capped at 1/associativity. */
-  unsigned long int non_temporal_threshold = shared / 4;
+  unsigned long int cachesize_non_temporal_divisor
+      = cpu_features->cachesize_non_temporal_divisor;
+  if (cachesize_non_temporal_divisor <= 0)
+    cachesize_non_temporal_divisor = 4;
+
+  /* The default setting for the non_temporal threshold is [1/8, 1/2] of size
+     of the chip's cache (depending on `cachesize_non_temporal_divisor`, which
+     is microarch specific; the default is 1/4).  For most Intel and AMD
+     processors with an initial release date between 2017 and 2023, a thread's
+     typical share of the cache is from 18-64MB.  Using a reasonable size
+     fraction of L3 is meant to estimate the point where non-temporal stores
+     begin outcompeting REP MOVSB.  As well the point where the fact that
+     non-temporal stores are forced back to main memory would have already
+     occurred to the majority of the lines in the copy.  Note, concerns about
+     the entire L3 cache being evicted by the copy are mostly alleviated by the
+     fact that modern HW detects streaming patterns and provides proper LRU
+     hints so that the maximum thrashing is capped at 1/associativity.  */
+  unsigned long int non_temporal_threshold
+      = shared / cachesize_non_temporal_divisor;
   /* If no ERMS, we use the per-thread L3 chunking. Normal cacheable stores run
      a higher risk of actually thrashing the cache as they don't have a HW LRU
      hint. As well, there performance in highly parallel situations is
diff --git a/sysdeps/x86/dl-diagnostics-cpu.c b/sysdeps/x86/dl-diagnostics-cpu.c
index a1578e4665..5aab63e532 100644
--- a/sysdeps/x86/dl-diagnostics-cpu.c
+++ b/sysdeps/x86/dl-diagnostics-cpu.c
@@ -113,8 +113,11 @@ _dl_diagnostics_cpu (void)
                             cpu_features->level3_cache_linesize);
   print_cpu_features_value ("level4_cache_size",
                             cpu_features->level4_cache_size);
-  _Static_assert (offsetof (struct cpu_features, level4_cache_size)
-                  + sizeof (cpu_features->level4_cache_size)
-                  == sizeof (*cpu_features),
-                  "last cpu_features field has been printed");
+  print_cpu_features_value ("cachesize_non_temporal_divisor",
+                            cpu_features->cachesize_non_temporal_divisor);
+  _Static_assert (
+      offsetof (struct cpu_features, cachesize_non_temporal_divisor)
+          + sizeof (cpu_features->cachesize_non_temporal_divisor)
+      == sizeof (*cpu_features),
+      "last cpu_features field has been printed");
 }
diff --git a/sysdeps/x86/include/cpu-features.h b/sysdeps/x86/include/cpu-features.h
index 40b8129d6a..c740e1a5fc 100644
--- a/sysdeps/x86/include/cpu-features.h
+++ b/sysdeps/x86/include/cpu-features.h
@@ -945,6 +945,9 @@ struct cpu_features
   unsigned long int level3_cache_linesize;
   /* /_SC_LEVEL4_CACHE_SIZE.  */
   unsigned long int level4_cache_size;
+  /* When no user non_temporal_threshold is specified, we default to
+     cachesize / cachesize_non_temporal_divisor.  */
+  unsigned long int cachesize_non_temporal_divisor;
 };
 
 /* Get a pointer to the CPU features structure.  */
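As a quick illustration of the selection logic above (not part of the patch):
init_cpu_features seeds the divisor to 4, the microarch-specific cases override
it to 2 or 8, and dl_init_cacheinfo divides the shared cache size by it,
falling back to 4 if the field was never set.  The standalone helper and the
example values below are hypothetical and only mirror that arithmetic.

#include <stdio.h>

/* Hypothetical standalone helper: mirrors the fallback-to-4 and the
   shared / divisor computation added to dl_init_cacheinfo.  It is not
   glibc code; the name and example values are made up for illustration.  */
static unsigned long int
example_non_temporal_threshold (unsigned long int shared_cache_bytes,
                                unsigned long int divisor)
{
  /* The patch checks `cachesize_non_temporal_divisor <= 0`; for an
     unsigned value that is equivalent to testing for zero.  */
  if (divisor == 0)
    divisor = 4;
  return shared_cache_bytes / divisor;
}

int
main (void)
{
  /* e.g. a 32 MiB shared cache: SKL-and-newer (divisor 2) vs.
     BWD-and-older (divisor 8).  */
  printf ("divisor 2: %lu bytes\n",
          example_non_temporal_threshold (32UL << 20, 2));
  printf ("divisor 8: %lu bytes\n",
          example_non_temporal_threshold (32UL << 20, 8));
  return 0;
}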