From patchwork Tue May 9 03:13:12 2023
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 68934
To: libc-alpha@sourceware.org
Cc: goldstein.w.n@gmail.com, hjl.tools@gmail.com, carlos@systemhalted.org
Subject: [PATCH v5 2/3] x86: Refactor Intel `init_cpu_features`
Date: Mon, 8 May 2023 22:13:12 -0500
Message-Id: <20230509031313.3497001-2-goldstein.w.n@gmail.com>
In-Reply-To: <20230509031313.3497001-1-goldstein.w.n@gmail.com>
References: <20230424050329.1501348-1-goldstein.w.n@gmail.com> <20230509031313.3497001-1-goldstein.w.n@gmail.com>
From: Noah Goldstein

This patch should have no effect on existing functionality.

The current code, which has a single switch for both model detection and
setting preferred features, is difficult to follow and extend.  The cases
use magic numbers and many microarchitectures are missing.  This makes it
difficult to reason about what is currently implemented and about how or
where to add support for new features.

This patch splits the model detection and preference setting stages so
that CPU preferences can be set based on a complete list of available
microarchitectures, rather than on model magic numbers.
---
 sysdeps/x86/cpu-features.c | 401 +++++++++++++++++++++++++++++--------
 1 file changed, 316 insertions(+), 85 deletions(-)

diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c
index 5bff8ec0b4..bec70c3c49 100644
--- a/sysdeps/x86/cpu-features.c
+++ b/sysdeps/x86/cpu-features.c
@@ -417,6 +417,217 @@ _Static_assert (((index_arch_Fast_Unaligned_Load
                      == index_arch_Fast_Copy_Backward)),
                 "Incorrect index_arch_Fast_Unaligned_Load");
 
+
+/* Intel Family-6 microarch list.  */
+enum
+{
+  /* Atom processors.  */
+  INTEL_ATOM_BONNELL,
+  INTEL_ATOM_SALTWELL,
+  INTEL_ATOM_SILVERMONT,
+  INTEL_ATOM_AIRMONT,
+  INTEL_ATOM_GOLDMONT,
+  INTEL_ATOM_GOLDMONT_PLUS,
+  INTEL_ATOM_SIERRAFOREST,
+  INTEL_ATOM_GRANDRIDGE,
+  INTEL_ATOM_TREMONT,
+
+  /* Bigcore processors.  */
+  INTEL_BIGCORE_MEROM,
+  INTEL_BIGCORE_PENRYN,
+  INTEL_BIGCORE_DUNNINGTON,
+  INTEL_BIGCORE_NEHALEM,
+  INTEL_BIGCORE_WESTMERE,
+  INTEL_BIGCORE_SANDYBRIDGE,
+  INTEL_BIGCORE_IVYBRIDGE,
+  INTEL_BIGCORE_HASWELL,
+  INTEL_BIGCORE_BROADWELL,
+  INTEL_BIGCORE_SKYLAKE,
+  INTEL_BIGCORE_AMBERLAKE,
+  INTEL_BIGCORE_COFFEELAKE,
+  INTEL_BIGCORE_WHISKEYLAKE,
+  INTEL_BIGCORE_KABYLAKE,
+  INTEL_BIGCORE_COMETLAKE,
+  INTEL_BIGCORE_SKYLAKE_AVX512,
+  INTEL_BIGCORE_CANNONLAKE,
+  INTEL_BIGCORE_CASCADELAKE,
+  INTEL_BIGCORE_COOPERLAKE,
+  INTEL_BIGCORE_ICELAKE,
+  INTEL_BIGCORE_TIGERLAKE,
+  INTEL_BIGCORE_ROCKETLAKE,
+  INTEL_BIGCORE_SAPPHIRERAPIDS,
+  INTEL_BIGCORE_RAPTORLAKE,
+  INTEL_BIGCORE_EMERALDRAPIDS,
+  INTEL_BIGCORE_METEORLAKE,
+  INTEL_BIGCORE_LUNARLAKE,
+  INTEL_BIGCORE_ARROWLAKE,
+  INTEL_BIGCORE_GRANITERAPIDS,
+
+  /* Mixed (bigcore + atom SOC).  */
+  INTEL_MIXED_LAKEFIELD,
+  INTEL_MIXED_ALDERLAKE,
+
+  /* KNL.  */
+  INTEL_KNIGHTS_MILL,
+  INTEL_KNIGHTS_LANDING,
+
+  /* Unknown.  */
+  INTEL_UNKNOWN,
+};
+
+static unsigned int
+intel_get_fam6_microarch (unsigned int model, unsigned int stepping)
+{
+  switch (model)
+    {
+    case 0x1C:
+    case 0x26:
+      return INTEL_ATOM_BONNELL;
+    case 0x27:
+    case 0x35:
+    case 0x36:
+      return INTEL_ATOM_SALTWELL;
+    case 0x37:
+    case 0x4A:
+    case 0x4D:
+    case 0x5D:
+      return INTEL_ATOM_SILVERMONT;
+    case 0x4C:
+    case 0x5A:
+    case 0x75:
+      return INTEL_ATOM_AIRMONT;
+    case 0x5C:
+    case 0x5F:
+      return INTEL_ATOM_GOLDMONT;
+    case 0x7A:
+      return INTEL_ATOM_GOLDMONT_PLUS;
+    case 0xAF:
+      return INTEL_ATOM_SIERRAFOREST;
+    case 0xB6:
+      return INTEL_ATOM_GRANDRIDGE;
+    case 0x86:
+    case 0x96:
+    case 0x9C:
+      return INTEL_ATOM_TREMONT;
+    case 0x0F:
+    case 0x16:
+      return INTEL_BIGCORE_MEROM;
+    case 0x17:
+      return INTEL_BIGCORE_PENRYN;
+    case 0x1D:
+      return INTEL_BIGCORE_DUNNINGTON;
+    case 0x1A:
+    case 0x1E:
+    case 0x1F:
+    case 0x2E:
+      return INTEL_BIGCORE_NEHALEM;
+    case 0x25:
+    case 0x2C:
+    case 0x2F:
+      return INTEL_BIGCORE_WESTMERE;
+    case 0x2A:
+    case 0x2D:
+      return INTEL_BIGCORE_SANDYBRIDGE;
+    case 0x3A:
+    case 0x3E:
+      return INTEL_BIGCORE_IVYBRIDGE;
+    case 0x3C:
+    case 0x3F:
+    case 0x45:
+    case 0x46:
+      return INTEL_BIGCORE_HASWELL;
+    case 0x3D:
+    case 0x47:
+    case 0x4F:
+    case 0x56:
+      return INTEL_BIGCORE_BROADWELL;
+    case 0x4E:
+    case 0x5E:
+      return INTEL_BIGCORE_SKYLAKE;
+    case 0x8E:
+      switch (stepping)
+        {
+        case 0x09:
+          return INTEL_BIGCORE_AMBERLAKE;
+        case 0x0A:
+          return INTEL_BIGCORE_COFFEELAKE;
+        case 0x0B:
+        case 0x0C:
+          return INTEL_BIGCORE_WHISKEYLAKE;
+        default:
+          return INTEL_BIGCORE_KABYLAKE;
+        }
+    case 0x9E:
+      switch (stepping)
+        {
+        case 0x0A:
+        case 0x0B:
+        case 0x0C:
+        case 0x0D:
+          return INTEL_BIGCORE_COFFEELAKE;
+        default:
+          return INTEL_BIGCORE_KABYLAKE;
+        }
+    case 0xA5:
+    case 0xA6:
+      return INTEL_BIGCORE_COMETLAKE;
+    case 0x66:
+      return INTEL_BIGCORE_CANNONLAKE;
+    case 0x55:
+      switch (stepping)
+        {
+        case 0x06:
+        case 0x07:
+          return INTEL_BIGCORE_CASCADELAKE;
+        case 0x0b:
+          return INTEL_BIGCORE_COOPERLAKE;
+        default:
+          return INTEL_BIGCORE_SKYLAKE_AVX512;
+        }
+    case 0x6A:
+    case 0x6C:
+    case 0x7D:
+    case 0x7E:
+    case 0x9D:
+      return INTEL_BIGCORE_ICELAKE;
+    case 0x8C:
+    case 0x8D:
+      return INTEL_BIGCORE_TIGERLAKE;
+    case 0xA7:
+      return INTEL_BIGCORE_ROCKETLAKE;
+    case 0x8F:
+      return INTEL_BIGCORE_SAPPHIRERAPIDS;
+    case 0xB7:
+    case 0xBA:
+    case 0xBF:
+      return INTEL_BIGCORE_RAPTORLAKE;
+    case 0xCF:
+      return INTEL_BIGCORE_EMERALDRAPIDS;
+    case 0xAA:
+    case 0xAC:
+      return INTEL_BIGCORE_METEORLAKE;
+    case 0xbd:
+      return INTEL_BIGCORE_LUNARLAKE;
+    case 0xc6:
+      return INTEL_BIGCORE_ARROWLAKE;
+    case 0xAD:
+    case 0xAE:
+      return INTEL_BIGCORE_GRANITERAPIDS;
+    case 0x8A:
+      return INTEL_MIXED_LAKEFIELD;
+    case 0x97:
+    case 0x9A:
+    case 0xBE:
+      return INTEL_MIXED_ALDERLAKE;
+    case 0x85:
+      return INTEL_KNIGHTS_MILL;
+    case 0x57:
+      return INTEL_KNIGHTS_LANDING;
+    default:
+      return INTEL_UNKNOWN;
+    }
+}
+
 static inline void
 init_cpu_features (struct cpu_features *cpu_features)
 {
@@ -453,129 +664,149 @@ init_cpu_features (struct cpu_features *cpu_features)
       if (family == 0x06)
         {
           model += extended_model;
-          switch (model)
+          unsigned int microarch
+              = intel_get_fam6_microarch (model, stepping);
+
+          switch (microarch)
             {
-            case 0x1c:
-            case 0x26:
-              /* BSF is slow on Atom.  */
+              /* Atom / KNL tuning.  */
+            case INTEL_ATOM_BONNELL:
+              /* BSF is slow on Bonnell.  */
               cpu_features->preferred[index_arch_Slow_BSF]
-                |= bit_arch_Slow_BSF;
+                  |= bit_arch_Slow_BSF;
               break;
-            case 0x57:
-              /* Knights Landing.  Enable Silvermont optimizations.  */
-
-            case 0x7a:
-              /* Unaligned load versions are faster than SSSE3
-                 on Goldmont Plus.  */
-
-            case 0x5c:
-            case 0x5f:
               /* Unaligned load versions are faster than SSSE3
-                 on Goldmont.  */
+                 on Airmont, Silvermont, Goldmont, and Goldmont Plus.  */
+            case INTEL_ATOM_AIRMONT:
+            case INTEL_ATOM_SILVERMONT:
+            case INTEL_ATOM_GOLDMONT:
+            case INTEL_ATOM_GOLDMONT_PLUS:
 
-            case 0x4c:
-            case 0x5a:
-            case 0x75:
-              /* Airmont is a die shrink of Silvermont.  */
+              /* Knights Landing.  Enable Silvermont optimizations.  */
+            case INTEL_KNIGHTS_LANDING:
 
-            case 0x37:
-            case 0x4a:
-            case 0x4d:
-            case 0x5d:
-              /* Unaligned load versions are faster than SSSE3
-                 on Silvermont.  */
               cpu_features->preferred[index_arch_Fast_Unaligned_Load]
-                |= (bit_arch_Fast_Unaligned_Load
-                    | bit_arch_Fast_Unaligned_Copy
-                    | bit_arch_Prefer_PMINUB_for_stringop
-                    | bit_arch_Slow_SSE4_2);
+                  |= (bit_arch_Fast_Unaligned_Load
+                      | bit_arch_Fast_Unaligned_Copy
+                      | bit_arch_Prefer_PMINUB_for_stringop
+                      | bit_arch_Slow_SSE4_2);
               break;
 
-            case 0x86:
-            case 0x96:
-            case 0x9c:
+            case INTEL_ATOM_TREMONT:
               /* Enable rep string instructions, unaligned load, unaligned
-                 copy, pminub and avoid SSE 4.2 on Tremont.  */
+                 copy, pminub and avoid SSE 4.2 on Tremont.  */
               cpu_features->preferred[index_arch_Fast_Rep_String]
-                |= (bit_arch_Fast_Rep_String
-                    | bit_arch_Fast_Unaligned_Load
-                    | bit_arch_Fast_Unaligned_Copy
-                    | bit_arch_Prefer_PMINUB_for_stringop
-                    | bit_arch_Slow_SSE4_2);
+                  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+                      | bit_arch_Fast_Unaligned_Copy
+                      | bit_arch_Prefer_PMINUB_for_stringop
+                      | bit_arch_Slow_SSE4_2);
+              break;
+
+              /* Untuned KNL microarch.  */
+            case INTEL_KNIGHTS_MILL:
+              /* Untuned atom microarch.  */
+            case INTEL_ATOM_SIERRAFOREST:
+            case INTEL_ATOM_GRANDRIDGE:
+            case INTEL_ATOM_SALTWELL:
               break;
 
+              /* Bigcore Tuning.  */
+            case INTEL_UNKNOWN:
             default:
               /* Unknown family 0x06 processors.  Assuming this is one
                  of Core i3/i5/i7 processors if AVX is available.  */
               if (!CPU_FEATURES_CPU_P (cpu_features, AVX))
                 break;
-              /* Fall through.  */
-
-            case 0x1a:
-            case 0x1e:
-            case 0x1f:
-            case 0x25:
-            case 0x2c:
-            case 0x2e:
-            case 0x2f:
+            case INTEL_BIGCORE_NEHALEM:
+            case INTEL_BIGCORE_WESTMERE:
               /* Rep string instructions, unaligned load, unaligned copy,
                  and pminub are fast on Intel Core i3, i5 and i7.  */
               cpu_features->preferred[index_arch_Fast_Rep_String]
-                |= (bit_arch_Fast_Rep_String
-                    | bit_arch_Fast_Unaligned_Load
-                    | bit_arch_Fast_Unaligned_Copy
-                    | bit_arch_Prefer_PMINUB_for_stringop);
+                  |= (bit_arch_Fast_Rep_String | bit_arch_Fast_Unaligned_Load
+                      | bit_arch_Fast_Unaligned_Copy
+                      | bit_arch_Prefer_PMINUB_for_stringop);
+              break;
+
+              /* Untuned Bigcore microarch.  */
+            case INTEL_BIGCORE_SANDYBRIDGE:
+            case INTEL_BIGCORE_IVYBRIDGE:
+            case INTEL_BIGCORE_HASWELL:
+            case INTEL_BIGCORE_BROADWELL:
+            case INTEL_BIGCORE_SKYLAKE:
+            case INTEL_BIGCORE_AMBERLAKE:
+            case INTEL_BIGCORE_COFFEELAKE:
+            case INTEL_BIGCORE_WHISKEYLAKE:
+            case INTEL_BIGCORE_KABYLAKE:
+            case INTEL_BIGCORE_COMETLAKE:
+            case INTEL_BIGCORE_SKYLAKE_AVX512:
+            case INTEL_BIGCORE_CASCADELAKE:
+            case INTEL_BIGCORE_COOPERLAKE:
+            case INTEL_BIGCORE_CANNONLAKE:
+            case INTEL_BIGCORE_ICELAKE:
+            case INTEL_BIGCORE_TIGERLAKE:
+            case INTEL_BIGCORE_ROCKETLAKE:
+            case INTEL_BIGCORE_RAPTORLAKE:
+            case INTEL_BIGCORE_METEORLAKE:
+            case INTEL_BIGCORE_LUNARLAKE:
+            case INTEL_BIGCORE_ARROWLAKE:
+            case INTEL_BIGCORE_SAPPHIRERAPIDS:
+            case INTEL_BIGCORE_EMERALDRAPIDS:
+            case INTEL_BIGCORE_GRANITERAPIDS:
+              break;
+
+              /* Untuned Mixed (bigcore + atom SOC).  */
+            case INTEL_MIXED_LAKEFIELD:
+            case INTEL_MIXED_ALDERLAKE:
               break;
             }
 
-          /* Disable TSX on some processors to avoid TSX on kernels that
-             weren't updated with the latest microcode package (which
-             disables broken feature by default).  */
-          switch (model)
+          /* Disable TSX on some processors to avoid TSX on kernels that
+             weren't updated with the latest microcode package (which
+             disables broken feature by default).  */
+          switch (microarch)
             {
-            case 0x55:
-              if (stepping <= 5)
+            case INTEL_BIGCORE_SKYLAKE_AVX512:
+              /* 0x55 && stepping <= 5 is SKYLAKE_AVX512.  Cascadelake and
+                 Cooperlake also have model == 0x55 so double check the
+                 stepping to be safe.  */
+              if (model == 0x55 && stepping <= 5)
                 goto disable_tsx;
               break;
-            case 0x8e:
-              /* NB: Although the errata documents that for model == 0x8e,
-                 only 0xb stepping or lower are impacted, the intention of
-                 the errata was to disable TSX on all client processors on
-                 all steppings.  Include 0xc stepping which is an Intel
-                 Core i7-8665U, a client mobile processor.  */
-            case 0x9e:
-              if (stepping > 0xc)
+
+            case INTEL_BIGCORE_SKYLAKE:
+            case INTEL_BIGCORE_AMBERLAKE:
+            case INTEL_BIGCORE_COFFEELAKE:
+            case INTEL_BIGCORE_WHISKEYLAKE:
+            case INTEL_BIGCORE_KABYLAKE:
+              /* NB: Although the errata documents that for model == 0x8e
+                 (skylake client), only 0xb stepping or lower are impacted,
+                 the intention of the errata was to disable TSX on all client
+                 processors on all steppings.  Include 0xc stepping which is
+                 an Intel Core i7-8665U, a client mobile processor.  */
+              if ((model == 0x8e || model == 0x9e) && stepping > 0xc)
                 break;
-              /* Fall through.  */
-            case 0x4e:
-            case 0x5e:
-              {
+
                 /* Disable Intel TSX and enable RTM_ALWAYS_ABORT for
                    processors listed in:
 
 https://www.intel.com/content/www/us/en/support/articles/000059422/processors.html
                  */
-disable_tsx:
+            disable_tsx:
                 CPU_FEATURE_UNSET (cpu_features, HLE);
                 CPU_FEATURE_UNSET (cpu_features, RTM);
                 CPU_FEATURE_SET (cpu_features, RTM_ALWAYS_ABORT);
-              }
-              break;
-            case 0x3f:
-              /* Xeon E7 v3 with stepping >= 4 has working TSX.  */
-              if (stepping >= 4)
                 break;
-              /* Fall through.  */
-            case 0x3c:
-            case 0x45:
-            case 0x46:
-              /* Disable Intel TSX on Haswell processors (except Xeon E7 v3
-                 with stepping >= 4) to avoid TSX on kernels that weren't
-                 updated with the latest microcode package (which disables
-                 broken feature by default).  */
-              CPU_FEATURE_UNSET (cpu_features, RTM);
-              break;
+
+            case INTEL_BIGCORE_HASWELL:
+              /* Xeon E7 v3 (model == 0x3f) with stepping >= 4 has working
                 TSX.  Haswell also include other model numbers that have
                 working TSX.  */
+              if (model == 0x3f && stepping >= 4)
+                break;
+
+              CPU_FEATURE_UNSET (cpu_features, RTM);
+              break;
             }
         }
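
For readers skimming the archive, the refactor above boils down to a two-stage
pattern: first translate the raw CPUID model number into a named
microarchitecture, then key every tuning decision off that enum instead of
magic model numbers.  Below is a minimal standalone sketch of that pattern; it
is not part of the patch, and the names (cpu_uarch, classify_uarch,
set_preferences) are illustrative rather than glibc identifiers.  The model
numbers used are the Tremont and Skylake values from the patch itself.

#include <stdio.h>

/* Stage 1 output: a readable name per microarchitecture.  */
enum cpu_uarch
{
  UARCH_ATOM_TREMONT,
  UARCH_BIGCORE_SKYLAKE,
  UARCH_UNKNOWN
};

/* Stage 1: map magic CPUID model numbers to the enum in one place.  */
static enum cpu_uarch
classify_uarch (unsigned int model)
{
  switch (model)
    {
    case 0x86:
    case 0x96:
    case 0x9C:
      return UARCH_ATOM_TREMONT;
    case 0x4E:
    case 0x5E:
      return UARCH_BIGCORE_SKYLAKE;
    default:
      return UARCH_UNKNOWN;
    }
}

/* Stage 2: tuning switches on the enum, so every known microarchitecture
   is visible in one list and new ones are easy to slot in.  */
static void
set_preferences (enum cpu_uarch uarch)
{
  switch (uarch)
    {
    case UARCH_ATOM_TREMONT:
      puts ("enable rep-string / unaligned-load tuning");
      break;
    case UARCH_BIGCORE_SKYLAKE:
      puts ("apply bigcore defaults");
      break;
    default:
      puts ("no microarch-specific tuning");
      break;
    }
}

int
main (void)
{
  set_preferences (classify_uarch (0x9C));  /* A Tremont model number.  */
  return 0;
}

The benefit, as the commit message notes, is that an unhandled CPU only needs
a new entry in the stage-1 classifier; the preference logic stays readable and
does not accumulate stepping-dependent magic numbers.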