From patchwork Sun Oct 2 12:34:20 2022
X-Patchwork-Submitter: Aurelien Jarno
X-Patchwork-Id: 58269
From: Aurelien Jarno
To: libc-alpha@sourceware.org
Cc: Aurelien Jarno
Subject: [PATCH v2 2/6] x86-64: Require BMI2 for AVX2 str*cmp and wcs(n)cmp implementations
Date: Sun, 2 Oct 2022 14:34:20 +0200
Message-Id: <20221002123424.3079805-3-aurelien@aurel32.net>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20221002123424.3079805-1-aurelien@aurel32.net>
References: <20221002123424.3079805-1-aurelien@aurel32.net>

The AVX2 str*cmp and wcs(n)cmp implementations use the 'bzhi' instruction,
which belongs to the BMI2 CPU feature.

NB: They also use the 'tzcnt' BMI1 instruction, but it is executed as BSF
if the CPU doesn't support TZCNT, and produces the same result for
non-zero input.
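For illustration only (not part of this patch): a minimal, standalone C
sketch of the TZCNT/BSF equivalence claim above. The file name, test
values and build command are made up; it only checks that both
instructions return the same bit index for non-zero inputs, which is why
BMI1 does not have to be required. Build with e.g. "gcc -O2 tzcnt-vs-bsf.c"
on an x86-64 machine.

  /* tzcnt-vs-bsf.c: TZCNT is encoded as REP BSF, so on CPUs without BMI1
     it executes as plain BSF; for non-zero input both return the index of
     the lowest set bit.  */
  #include <stdint.h>
  #include <stdio.h>

  static uint64_t
  bsf64 (uint64_t x)
  {
    uint64_t r;
    __asm__ ("bsf %1, %0" : "=r" (r) : "r" (x));
    return r;
  }

  static uint64_t
  tzcnt64 (uint64_t x)
  {
    uint64_t r;
    __asm__ ("tzcnt %1, %0" : "=r" (r) : "r" (x));
    return r;
  }

  int
  main (void)
  {
    /* Only non-zero inputs: BSF leaves its destination undefined for 0.  */
    for (uint64_t x = 1; x != 0; x <<= 1)
      {
        uint64_t v = x | 0x8000000000000000ULL;  /* arbitrary non-zero value */
        if (bsf64 (v) != tzcnt64 (v))
          {
            printf ("mismatch for %#llx\n", (unsigned long long) v);
            return 1;
          }
      }
    puts ("TZCNT and BSF agree for all tested non-zero inputs");
    return 0;
  }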
Fixes: b77b06e0e296 ("x86: Optimize strcmp-avx2.S")
Partially resolves: BZ #29611
---
 sysdeps/x86_64/multiarch/ifunc-impl-list.c  | 47 +++++++++++++++------
 sysdeps/x86_64/multiarch/ifunc-strcasecmp.h |  1 +
 sysdeps/x86_64/multiarch/strcmp.c           |  4 +-
 sysdeps/x86_64/multiarch/strncmp.c          |  4 +-
 4 files changed, 39 insertions(+), 17 deletions(-)

diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index a71444eccb..fec8790c11 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -448,13 +448,16 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, strcasecmp,
               X86_IFUNC_IMPL_ADD_V4 (array, i, strcasecmp,
                                      (CPU_FEATURE_USABLE (AVX512VL)
-                                      && CPU_FEATURE_USABLE (AVX512BW)),
+                                      && CPU_FEATURE_USABLE (AVX512BW)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strcasecmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcasecmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strcasecmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcasecmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strcasecmp_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strcasecmp,
@@ -470,13 +473,16 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, strcasecmp_l,
               X86_IFUNC_IMPL_ADD_V4 (array, i, strcasecmp,
                                      (CPU_FEATURE_USABLE (AVX512VL)
-                                      && CPU_FEATURE_USABLE (AVX512BW)),
+                                      && CPU_FEATURE_USABLE (AVX512BW)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strcasecmp_l_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcasecmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strcasecmp_l_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcasecmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strcasecmp_l_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strcasecmp_l,
@@ -585,10 +591,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
                                       && CPU_FEATURE_USABLE (BMI2)),
                                      __strcmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strcmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strcmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strcmp_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strcmp,
@@ -638,13 +646,16 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, strncasecmp,
               X86_IFUNC_IMPL_ADD_V4 (array, i, strncasecmp,
                                      (CPU_FEATURE_USABLE (AVX512VL)
-                                      && CPU_FEATURE_USABLE (AVX512BW)),
+                                      && CPU_FEATURE_USABLE (AVX512BW)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncasecmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncasecmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncasecmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncasecmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strncasecmp_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strncasecmp,
@@ -660,13 +671,16 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, strncasecmp_l,
               X86_IFUNC_IMPL_ADD_V4 (array, i, strncasecmp,
                                      (CPU_FEATURE_USABLE (AVX512VL)
-                                      && CPU_FEATURE_USABLE (AVX512BW)),
+                                      && CPU_FEATURE_USABLE (AVX512BW)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncasecmp_l_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncasecmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncasecmp_l_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncasecmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strncasecmp_l_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strncasecmp_l,
@@ -796,10 +810,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
                                       && CPU_FEATURE_USABLE (BMI2)),
                                      __wcscmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, wcscmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __wcscmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, wcscmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __wcscmp_avx2_rtm)
               /* ISA V2 wrapper for SSE2 implementation because the SSE2
@@ -816,10 +832,12 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
                                       && CPU_FEATURE_USABLE (BMI2)),
                                      __wcsncmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, wcsncmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __wcsncmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, wcsncmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __wcsncmp_avx2_rtm)
               /* ISA V2 wrapper for GENERIC implementation because the
@@ -1162,13 +1180,16 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, strncmp,
               X86_IFUNC_IMPL_ADD_V4 (array, i, strncmp,
                                      (CPU_FEATURE_USABLE (AVX512VL)
-                                      && CPU_FEATURE_USABLE (AVX512BW)),
+                                      && CPU_FEATURE_USABLE (AVX512BW)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncmp_evex)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncmp,
-                                     CPU_FEATURE_USABLE (AVX2),
+                                     (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)),
                                      __strncmp_avx2)
               X86_IFUNC_IMPL_ADD_V3 (array, i, strncmp,
                                      (CPU_FEATURE_USABLE (AVX2)
+                                      && CPU_FEATURE_USABLE (BMI2)
                                       && CPU_FEATURE_USABLE (RTM)),
                                      __strncmp_avx2_rtm)
               X86_IFUNC_IMPL_ADD_V2 (array, i, strncmp,
diff --git a/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h b/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
index 68646ef199..7622af259c 100644
--- a/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
+++ b/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
@@ -34,6 +34,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features *cpu_features = __get_cpu_features ();
 
   if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2)
       && X86_ISA_CPU_FEATURES_ARCH_P (cpu_features,
                                       AVX_Fast_Unaligned_Load, ))
     {
diff --git a/sysdeps/x86_64/multiarch/strcmp.c b/sysdeps/x86_64/multiarch/strcmp.c
index fdd5afe3af..9d6c9f66ba 100644
--- a/sysdeps/x86_64/multiarch/strcmp.c
+++ b/sysdeps/x86_64/multiarch/strcmp.c
@@ -45,12 +45,12 @@ IFUNC_SELECTOR (void)
   const struct cpu_features *cpu_features = __get_cpu_features ();
 
   if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2)
       && X86_ISA_CPU_FEATURES_ARCH_P (cpu_features,
                                       AVX_Fast_Unaligned_Load, ))
     {
       if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
-          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
-          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2))
+          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW))
        return OPTIMIZE (evex);
 
       if (CPU_FEATURE_USABLE_P (cpu_features, RTM))
diff --git a/sysdeps/x86_64/multiarch/strncmp.c b/sysdeps/x86_64/multiarch/strncmp.c
index 4ebe4bde30..c4f8b6bbb5 100644
--- a/sysdeps/x86_64/multiarch/strncmp.c
+++ b/sysdeps/x86_64/multiarch/strncmp.c
@@ -41,12 +41,12 @@ IFUNC_SELECTOR (void)
   const struct cpu_features *cpu_features = __get_cpu_features ();
 
   if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX2)
+      && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2)
       && X86_ISA_CPU_FEATURES_ARCH_P (cpu_features,
                                       AVX_Fast_Unaligned_Load, ))
     {
       if (X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
-          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
-          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, BMI2))
+          && X86_ISA_CPU_FEATURE_USABLE_P (cpu_features, AVX512BW))
        return OPTIMIZE (evex);
 
       if (CPU_FEATURE_USABLE_P (cpu_features, RTM))
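As an aside (again, not part of the patch): the reason BMI2 has to be
gated explicitly is that 'bzhi', unlike 'tzcnt', has no legacy fallback
encoding, so executing it on a CPU without BMI2 raises #UD (SIGILL in
userspace). Below is a minimal sketch of the instruction's semantics using
the _bzhi_u32 intrinsic behind a runtime feature check; the file name and
the helper mask_low_bits are made up for illustration. Build with e.g.
"gcc -O2 bzhi-demo.c" on x86-64.

  /* bzhi-demo.c: show what BZHI computes and why it must be feature-gated.  */
  #include <immintrin.h>
  #include <stdio.h>

  /* BMI2 code generation is enabled only inside this function; callers
     must check for BMI2 before calling it.  */
  __attribute__ ((target ("bmi2")))
  static unsigned int
  mask_low_bits (unsigned int x, unsigned int n)
  {
    /* _bzhi_u32 (x, n) returns x with all bits at positions >= n cleared.  */
    return _bzhi_u32 (x, n);
  }

  int
  main (void)
  {
    if (!__builtin_cpu_supports ("bmi2"))
      {
        puts ("CPU lacks BMI2; executing BZHI here would raise #UD (SIGILL)");
        return 0;
      }
    printf ("bzhi(0xffffffff, 8)  = 0x%x\n", mask_low_bits (0xffffffffu, 8));
    printf ("bzhi(0xffffffff, 20) = 0x%x\n", mask_low_bits (0xffffffffu, 20));
    return 0;
  }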