From patchwork Mon May 14 21:56:27 2018
X-Patchwork-Submitter: leonardo.sandoval.gonzalez@linux.intel.com
X-Patchwork-Id: 27264
From: leonardo.sandoval.gonzalez@linux.intel.com
To: libc-alpha@sourceware.org
Cc: hjl.tools@gmail.com, Leonardo Sandoval
Subject: [PATCH] x86-64: Optimize strcmp/wcscmp with AVX2
Date: Mon, 14 May 2018 16:56:27 -0500
Message-Id: <20180514215627.32622-1-leonardo.sandoval.gonzalez@linux.intel.com>

From: Leonardo Sandoval

Optimize x86-64 strcmp/wcscmp with AVX2.  It uses vector compares as
much as possible.  It is comparable with the SSE2 strcmp for sizes
<= 8 bytes and up to 6x faster for sizes > 8 bytes on Skylake.  Select
AVX2 strcmp/wcscmp on AVX2 machines where vzeroupper is preferred and
AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strcmp-avx2 and wcscmp-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strcmp_avx2,
	__wcscmp_avx2 and __wcscmp_sse2.
	* sysdeps/x86_64/multiarch/strcmp-avx2.S: New file.
	* sysdeps/x86_64/multiarch/wcscmp-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcscmp-sse2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcscmp.c: Likewise.
	* sysdeps/x86_64/multiarch/strcmp.c (OPTIMIZE (avx2)): New.
	(IFUNC_SELECTOR): Return OPTIMIZE (avx2) on AVX2 machines if
	AVX unaligned load is fast and vzeroupper is preferred.
	* sysdeps/x86_64/wcscmp.S (__wcscmp): Add alias only if __wcscmp
	is undefined.

Signed-off-by: Leonardo Sandoval
Signed-off-by: H.J. Lu
---
 sysdeps/x86_64/multiarch/Makefile          |   3 +-
 sysdeps/x86_64/multiarch/ifunc-impl-list.c |  10 +
 sysdeps/x86_64/multiarch/strcmp-avx2.S     | 512 +++++++++++++++++++++
 sysdeps/x86_64/multiarch/strcmp.c          |   6 +
 sysdeps/x86_64/multiarch/wcscmp-avx2.S     |   4 +
 sysdeps/x86_64/multiarch/wcscmp-sse2.S     |  23 +
 sysdeps/x86_64/multiarch/wcscmp.c          |  37 ++
 sysdeps/x86_64/wcscmp.S                    |   2 +
 8 files changed, 596 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/x86_64/multiarch/strcmp-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/wcscmp-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/wcscmp-sse2.S
 create mode 100644 sysdeps/x86_64/multiarch/wcscmp.c

diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index 68257c4017e..856ed98704f 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -6,7 +6,7 @@ ifeq ($(subdir),string)
 sysdep_routines += strncat-c stpncpy-c strncpy-c \
 		   strcmp-sse2 strcmp-sse2-unaligned strcmp-ssse3 \
-		   strcmp-sse4_2 \
+		   strcmp-sse4_2 strcmp-avx2 \
 		   strncmp-sse2 strncmp-ssse3 strncmp-sse4_2 \
 		   memchr-sse2 rawmemchr-sse2 memchr-avx2 rawmemchr-avx2 \
 		   memrchr-sse2 memrchr-avx2 \
@@ -51,6 +51,7 @@ ifeq ($(subdir),wcsmbs)
 sysdep_routines += wmemcmp-sse4 wmemcmp-ssse3 wmemcmp-c \
 		   wmemcmp-avx2-movbe \
 		   wmemchr-sse2 wmemchr-avx2 \
+		   wcscmp-sse2 wcscmp-avx2 \
 		   wcscpy-ssse3 wcscpy-c \
 		   wcschr-sse2 wcschr-avx2 \
 		   wcsrchr-sse2 wcsrchr-avx2 \
diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index 7afd674b81c..5af34979d0f 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -268,6 +268,9 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strcmp.c.  */
   IFUNC_IMPL (i, name, strcmp,
+	      IFUNC_IMPL_ADD (array, i, strcmp,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __strcmp_avx2)
 	      IFUNC_IMPL_ADD (array, i, strcmp, HAS_CPU_FEATURE (SSE4_2),
 			      __strcmp_sse42)
 	      IFUNC_IMPL_ADD (array, i, strcmp, HAS_CPU_FEATURE (SSSE3),
@@ -364,6 +367,13 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 			      __wcsrchr_avx2)
 	      IFUNC_IMPL_ADD (array, i, wcsrchr, 1, __wcsrchr_sse2))
+
+  /* Support sysdeps/x86_64/multiarch/wcscmp.c.  */
+  IFUNC_IMPL (i, name, wcscmp,
+	      IFUNC_IMPL_ADD (array, i, wcscmp,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __wcscmp_avx2)
+	      IFUNC_IMPL_ADD (array, i, wcscmp, 1, __wcscmp_sse2))
+
   /* Support sysdeps/x86_64/multiarch/wcscpy.c.  */
   IFUNC_IMPL (i, name, wcscpy,
 	      IFUNC_IMPL_ADD (array, i, wcscpy, HAS_CPU_FEATURE (SSSE3),
diff --git a/sysdeps/x86_64/multiarch/strcmp-avx2.S b/sysdeps/x86_64/multiarch/strcmp-avx2.S
new file mode 100644
index 00000000000..513031959d7
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/strcmp-avx2.S
@@ -0,0 +1,512 @@
+/* strcmp/wcscmp optimized with AVX2.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#if IS_IN (libc)
+
+# include <sysdep.h>
+
+# ifndef STRCMP
+#  define STRCMP	__strcmp_avx2
+# endif
+
+# define PAGE_SIZE	4096
+
+# define VEC_SIZE	32
+
+# if VEC_SIZE != 32
+#  error Unsupported VEC_SIZE!
+# endif
+
+/* Shift for dividing by (VEC_SIZE * 4).  */
+# define DIVIDE_BY_VEC_4_SHIFT	7
+# if (VEC_SIZE * 4) != (1 << DIVIDE_BY_VEC_4_SHIFT)
+#  error (VEC_SIZE * 4) != (1 << DIVIDE_BY_VEC_4_SHIFT)
+# endif
+
+# ifdef USE_AS_WCSCMP
+#  define VPCMPEQ	vpcmpeqd
+#  define VPMINU	vpminud
+#  define SIZE_OF_CHAR	4
+# else
+#  define VPCMPEQ	vpcmpeqb
+#  define VPMINU	vpminub
+#  define SIZE_OF_CHAR	1
+# endif
+
+# ifndef VZEROUPPER
+#  define VZEROUPPER	vzeroupper
+# endif
+
+/* Warning!
+   wcscmp has to use SIGNED comparison for elements.
+   strcmp has to use UNSIGNED comparison for elements.  */
+
+/* The main idea of the strcmp comparison using AVX2 consists of
+   comparing (VPCMPEQ) two ymm vectors.  The comparison can be on
+   either packed bytes or dwords depending on USE_AS_WCSCMP.  In
+   order to check the null char, the algorithm keeps the matched
+   bytes/dwords, requiring two more AVX2 instructions (VPMINU and
+   VPCMPEQ).  In general, the cost of comparing VEC_SIZE bytes
+   (32 bytes) is two VPCMPEQ and one VPMINU instructions, together
+   with movdqu and testl instructions.  The main loop (away from the
+   page boundary) compares 4 vectors at a time, effectively comparing
+   4 x VEC_SIZE bytes (128 bytes) on each iteration.  */
+
+	.section .text.avx,"ax",@progbits
+ENTRY (STRCMP)
+	movl	%edi, %eax
+	xorl	%edx, %edx
+	/* Make %ymm7 all zeros in this function.  */
+	vpxor	%ymm7, %ymm7, %ymm7
+	orl	%esi, %eax
+	andl	$(PAGE_SIZE - 1), %eax
+	cmpl	$(PAGE_SIZE - (VEC_SIZE * 4)), %eax
+	jg	L(cross_page)
+	/* Start comparing 4 vectors.  */
+	vmovdqu	(%rdi), %ymm1
+	VPCMPEQ	(%rsi), %ymm1, %ymm0
+	VPMINU	%ymm1, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm0, %ymm0
+	vpmovmskb %ymm0, %ecx
+	testl	%ecx, %ecx
+	je	L(next_3_vectors)
+	tzcntl	%ecx, %edx
+# ifdef USE_AS_WCSCMP
+	xorl	%eax, %eax
+	movl	(%rdi, %rdx), %ecx
+	cmpl	(%rsi, %rdx), %ecx
+	je	L(return)
+L(wcscmp_return):
+	setl	%al
+	negl	%eax
+	orl	$1, %eax
+L(return):
+# else
+	movzbl	(%rdi, %rdx), %eax
+	movzbl	(%rsi, %rdx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(return_vec_size):
+	tzcntl	%ecx, %edx
+# ifdef USE_AS_WCSCMP
+	xorl	%eax, %eax
+	movl	VEC_SIZE(%rdi, %rdx), %ecx
+	cmpl	VEC_SIZE(%rsi, %rdx), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	VEC_SIZE(%rdi, %rdx), %eax
+	movzbl	VEC_SIZE(%rsi, %rdx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(return_2_vec_size):
+	tzcntl	%ecx, %edx
+# ifdef USE_AS_WCSCMP
+	xorl	%eax, %eax
+	movl	(VEC_SIZE * 2)(%rdi, %rdx), %ecx
+	cmpl	(VEC_SIZE * 2)(%rsi, %rdx), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	(VEC_SIZE * 2)(%rdi, %rdx), %eax
+	movzbl	(VEC_SIZE * 2)(%rsi, %rdx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(return_3_vec_size):
+	tzcntl	%ecx, %edx
+# ifdef USE_AS_WCSCMP
+	xorl	%eax, %eax
+	movl	(VEC_SIZE * 3)(%rdi, %rdx), %ecx
+	cmpl	(VEC_SIZE * 3)(%rsi, %rdx), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	(VEC_SIZE * 3)(%rdi, %rdx), %eax
+	movzbl	(VEC_SIZE * 3)(%rsi, %rdx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(next_3_vectors):
+	vmovdqu	VEC_SIZE(%rdi), %ymm6
+	VPCMPEQ	VEC_SIZE(%rsi), %ymm6, %ymm3
+	VPMINU	%ymm6, %ymm3, %ymm3
+	VPCMPEQ	%ymm7, %ymm3, %ymm3
+	vpmovmskb %ymm3, %ecx
+	testl	%ecx, %ecx
+	jne	L(return_vec_size)
+	vmovdqu	(VEC_SIZE * 2)(%rdi), %ymm5
+	vmovdqu	(VEC_SIZE * 3)(%rdi), %ymm4
+	vmovdqu	(VEC_SIZE * 3)(%rsi), %ymm0
+	VPCMPEQ	(VEC_SIZE * 2)(%rsi), %ymm5, %ymm2
+	VPMINU	%ymm5, %ymm2, %ymm2
+	VPCMPEQ	%ymm4, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm2, %ymm2
+	vpmovmskb %ymm2, %ecx
+	testl	%ecx, %ecx
+	jne	L(return_2_vec_size)
+	VPMINU	%ymm4, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm0, %ymm0
+	vpmovmskb %ymm0, %ecx
+	testl	%ecx, %ecx
+	jne	L(return_3_vec_size)
+L(main_loop_header):
+	leaq	(VEC_SIZE * 4)(%rdi), %rdx
+	movl	$PAGE_SIZE, %ecx
+	/* Align load via RAX.  */
+	andq	$-(VEC_SIZE * 4), %rdx
+	subq	%rdi, %rdx
+	leaq	(%rdi, %rdx), %rax
+	addq	%rsi, %rdx
+	movq	%rdx, %rsi
+	andl	$(PAGE_SIZE - 1), %esi
+	/* Number of bytes before page crossing.  */
+	subq	%rsi, %rcx
+	/* Number of VEC_SIZE * 4 blocks before page crossing.  */
+	shrq	$DIVIDE_BY_VEC_4_SHIFT, %rcx
+	/* ESI: Number of VEC_SIZE * 4 blocks before page crossing.  */
+	movl	%ecx, %esi
+	jmp	L(loop_start)
+
+	.p2align 4
+L(loop):
+	addq	$(VEC_SIZE * 4), %rax
+	addq	$(VEC_SIZE * 4), %rdx
+L(loop_start):
+	testl	%esi, %esi
+	leal	-1(%esi), %esi
+	je	L(loop_cross_page)
+L(back_to_loop):
+	/* Main loop, comparing 4 vectors at a time.  */
+	vmovdqa	(%rax), %ymm0
+	vmovdqa	VEC_SIZE(%rax), %ymm3
+	VPCMPEQ	(%rdx), %ymm0, %ymm4
+	VPCMPEQ	VEC_SIZE(%rdx), %ymm3, %ymm1
+	VPMINU	%ymm0, %ymm4, %ymm4
+	VPMINU	%ymm3, %ymm1, %ymm1
+	vmovdqa	(VEC_SIZE * 2)(%rax), %ymm2
+	VPMINU	%ymm1, %ymm4, %ymm0
+	vmovdqa	(VEC_SIZE * 3)(%rax), %ymm3
+	VPCMPEQ	(VEC_SIZE * 2)(%rdx), %ymm2, %ymm5
+	VPCMPEQ	(VEC_SIZE * 3)(%rdx), %ymm3, %ymm6
+	VPMINU	%ymm2, %ymm5, %ymm5
+	VPMINU	%ymm3, %ymm6, %ymm6
+	VPMINU	%ymm5, %ymm0, %ymm0
+	VPMINU	%ymm6, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm0, %ymm0
+
+	/* Test each mask (32 bits) individually because for VEC_SIZE
+	   == 32 it is not possible to OR the four masks and keep all
+	   bits in a 64-bit integer register, differing from SSE2
+	   strcmp where ORing is possible.  */
+	vpmovmskb %ymm0, %ecx
+	testl	%ecx, %ecx
+	je	L(loop)
+	VPCMPEQ	%ymm7, %ymm4, %ymm0
+	vpmovmskb %ymm0, %edi
+	testl	%edi, %edi
+	je	L(test_vec)
+	tzcntl	%edi, %ecx
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	(%rsi, %rcx), %edi
+	cmpl	(%rdx, %rcx), %edi
+	jne	L(wcscmp_return)
+# else
+	movzbl	(%rax, %rcx), %eax
+	movzbl	(%rdx, %rcx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(test_vec):
+	VPCMPEQ	%ymm7, %ymm1, %ymm1
+	vpmovmskb %ymm1, %ecx
+	testl	%ecx, %ecx
+	je	L(test_2_vec)
+	tzcntl	%ecx, %edi
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	VEC_SIZE(%rsi, %rdi), %ecx
+	cmpl	VEC_SIZE(%rdx, %rdi), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	VEC_SIZE(%rax, %rdi), %eax
+	movzbl	VEC_SIZE(%rdx, %rdi), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(test_2_vec):
+	VPCMPEQ	%ymm7, %ymm5, %ymm5
+	vpmovmskb %ymm5, %ecx
+	testl	%ecx, %ecx
+	je	L(test_3_vec)
+	tzcntl	%ecx, %edi
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	(VEC_SIZE * 2)(%rsi, %rdi), %ecx
+	cmpl	(VEC_SIZE * 2)(%rdx, %rdi), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	(VEC_SIZE * 2)(%rax, %rdi), %eax
+	movzbl	(VEC_SIZE * 2)(%rdx, %rdi), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(test_3_vec):
+	VPCMPEQ	%ymm7, %ymm6, %ymm6
+	vpmovmskb %ymm6, %esi
+	tzcntl	%esi, %ecx
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	(VEC_SIZE * 3)(%rsi, %rcx), %esi
+	cmpl	(VEC_SIZE * 3)(%rdx, %rcx), %esi
+	jne	L(wcscmp_return)
+# else
+	movzbl	(VEC_SIZE * 3)(%rax, %rcx), %eax
+	movzbl	(VEC_SIZE * 3)(%rdx, %rcx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(loop_cross_page):
+	xorl	%r10d, %r10d
+	movq	%rdx, %rcx
+	/* Align load via RDX.  We load the extra ECX bytes which
+	   should be ignored.  */
+	andl	$((VEC_SIZE * 4) - 1), %ecx
+	/* R10 is -RCX.  */
+	subq	%rcx, %r10
+
+	/* This works only if VEC_SIZE * 2 == 64.  */
+# if (VEC_SIZE * 2) != 64
+#  error (VEC_SIZE * 2) != 64
+# endif
+
+	/* Check if the first VEC_SIZE * 2 bytes should be ignored.  */
+	cmpl	$(VEC_SIZE * 2), %ecx
+	jge	L(loop_cross_page_2_vec)
+
+	vmovdqu	(%rax, %r10), %ymm2
+	vmovdqu	VEC_SIZE(%rax, %r10), %ymm3
+	VPCMPEQ	(%rdx, %r10), %ymm2, %ymm0
+	VPCMPEQ	VEC_SIZE(%rdx, %r10), %ymm3, %ymm1
+	VPMINU	%ymm2, %ymm0, %ymm0
+	VPMINU	%ymm3, %ymm1, %ymm1
+	VPCMPEQ	%ymm7, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm1, %ymm1
+
+	vpmovmskb %ymm0, %edi
+	vpmovmskb %ymm1, %esi
+
+	salq	$32, %rsi
+	xorq	%rsi, %rdi
+
+	/* Since ECX < VEC_SIZE * 2, simply skip the first ECX bytes.  */
+	shrq	%cl, %rdi
+
+	testq	%rdi, %rdi
+	je	L(loop_cross_page_2_vec)
+	tzcntq	%rdi, %rcx
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	(%rsi, %rcx), %edi
+	cmpl	(%rdx, %rcx), %edi
+	jne	L(wcscmp_return)
+# else
+	movzbl	(%rax, %rcx), %eax
+	movzbl	(%rdx, %rcx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(loop_cross_page_2_vec):
+	/* The first VEC_SIZE * 2 bytes match or are ignored.  */
+	vmovdqu	(VEC_SIZE * 2)(%rax, %r10), %ymm2
+	vmovdqu	(VEC_SIZE * 3)(%rax, %r10), %ymm3
+	VPCMPEQ	(VEC_SIZE * 2)(%rdx, %r10), %ymm2, %ymm5
+	VPMINU	%ymm2, %ymm5, %ymm5
+	VPCMPEQ	(VEC_SIZE * 3)(%rdx, %r10), %ymm3, %ymm6
+	VPCMPEQ	%ymm7, %ymm5, %ymm5
+	VPMINU	%ymm3, %ymm6, %ymm6
+	VPCMPEQ	%ymm7, %ymm6, %ymm6
+
+	vpmovmskb %ymm5, %edi
+	vpmovmskb %ymm6, %esi
+
+	salq	$32, %rsi
+	xorq	%rsi, %rdi
+
+	xorl	%r8d, %r8d
+	/* If ECX > VEC_SIZE * 2, skip ECX - (VEC_SIZE * 2) bytes.  */
+	subl	$(VEC_SIZE * 2), %ecx
+	jle	1f
+	/* Skip ECX bytes.  */
+	shrq	%cl, %rdi
+	/* R8 has number of bytes skipped.  */
+	movl	%ecx, %r8d
+1:
+	/* Before jumping back to the loop, set ESI to the number of
+	   VEC_SIZE * 4 blocks before page crossing.  */
+	movl	$(PAGE_SIZE / (VEC_SIZE * 4) - 1), %esi
+
+	testq	%rdi, %rdi
+	je	L(back_to_loop)
+	tzcntq	%rdi, %rcx
+	addq	%r10, %rcx
+	/* Adjust for number of bytes skipped.  */
+	addq	%r8, %rcx
+# ifdef USE_AS_WCSCMP
+	movq	%rax, %rsi
+	xorl	%eax, %eax
+	movl	(VEC_SIZE * 2)(%rsi, %rcx), %edi
+	cmpl	(VEC_SIZE * 2)(%rdx, %rcx), %edi
+	jne	L(wcscmp_return)
+# else
+	movzbl	(VEC_SIZE * 2)(%rax, %rcx), %eax
+	movzbl	(VEC_SIZE * 2)(%rdx, %rcx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(cross_page_loop):
+	/* Check one byte/dword at a time.  */
+	cmpl	%ecx, %eax
+	jne	L(different)
+	addl	$SIZE_OF_CHAR, %edx
+	cmpl	$(VEC_SIZE * 4), %edx
+	je	L(main_loop_header)
+# ifdef USE_AS_WCSCMP
+	movl	(%rdi, %rdx), %eax
+	movl	(%rsi, %rdx), %ecx
+# else
+	movzbl	(%rdi, %rdx), %eax
+	movzbl	(%rsi, %rdx), %ecx
+# endif
+	testl	%eax, %eax
+	jne	L(cross_page_loop)
+	xorl	%eax, %eax
+L(different):
+	subl	%ecx, %eax
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(last_vector):
+	addq	%rdx, %rdi
+	addq	%rdx, %rsi
+	tzcntl	%ecx, %edx
+# ifdef USE_AS_WCSCMP
+	xorl	%eax, %eax
+	movl	(%rdi, %rdx), %ecx
+	cmpl	(%rsi, %rdx), %ecx
+	jne	L(wcscmp_return)
+# else
+	movzbl	(%rdi, %rdx), %eax
+	movzbl	(%rsi, %rdx), %edx
+	subl	%edx, %eax
+# endif
+	VZEROUPPER
+	ret
+
+	/* Comparing on a page boundary region requires special
+	   treatment: it must be done one vector at a time, starting
+	   with the wider ymm vector if possible; if not, with xmm.
+	   If fetching 16 bytes (xmm) still crosses the boundary, byte
+	   comparison must be done.  */
+	.p2align 4
+L(cross_page):
+	/* Try one ymm vector at a time.  */
+	cmpl	$(PAGE_SIZE - VEC_SIZE), %eax
+	jg	L(cross_page_1_vector)
+L(loop_1_vector):
+	vmovdqu	(%rdi, %rdx), %ymm1
+	VPCMPEQ	(%rsi, %rdx), %ymm1, %ymm0
+	VPMINU	%ymm1, %ymm0, %ymm0
+	VPCMPEQ	%ymm7, %ymm0, %ymm0
+	vpmovmskb %ymm0, %ecx
+	testl	%ecx, %ecx
+	jne	L(last_vector)
+
+	addl	$VEC_SIZE, %edx
+
+	addl	$VEC_SIZE, %eax
+	cmpl	$(PAGE_SIZE - VEC_SIZE), %eax
+	jle	L(loop_1_vector)
+L(cross_page_1_vector):
+	/* Less than 32 bytes to check, try one xmm vector.  */
+	cmpl	$(PAGE_SIZE - 16), %eax
+	jg	L(cross_page_1_xmm)
+	vmovdqu	(%rdi, %rdx), %xmm1
+	VPCMPEQ	(%rsi, %rdx), %xmm1, %xmm0
+	VPMINU	%xmm1, %xmm0, %xmm0
+	VPCMPEQ	%xmm7, %xmm0, %xmm0
+	vpmovmskb %xmm0, %ecx
+	testl	%ecx, %ecx
+	jne	L(last_vector)
+
+	addl	$16, %edx
+
+L(cross_page_1_xmm):
+	/* Less than 16 bytes to check, try one byte/dword at a time.  */
+# ifdef USE_AS_WCSCMP
+	movl	(%rdi, %rdx), %eax
+	movl	(%rsi, %rdx), %ecx
+# else
+	movzbl	(%rdi, %rdx), %eax
+	movzbl	(%rsi, %rdx), %ecx
+# endif
+	testl	%eax, %eax
+	jne	L(cross_page_loop)
+	xorl	%eax, %eax
+	subl	%ecx, %eax
+	VZEROUPPER
+	ret
+END (STRCMP)
+#endif
diff --git a/sysdeps/x86_64/multiarch/strcmp.c b/sysdeps/x86_64/multiarch/strcmp.c
index 0335f96b090..b903e418df1 100644
--- a/sysdeps/x86_64/multiarch/strcmp.c
+++ b/sysdeps/x86_64/multiarch/strcmp.c
@@ -29,12 +29,18 @@
 extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2) attribute_hidden;
 extern __typeof (REDIRECT_NAME) OPTIMIZE (sse2_unaligned) attribute_hidden;
 extern __typeof (REDIRECT_NAME) OPTIMIZE (ssse3) attribute_hidden;
+extern __typeof (REDIRECT_NAME) OPTIMIZE (avx2) attribute_hidden;
 
 static inline void *
 IFUNC_SELECTOR (void)
 {
   const struct cpu_features* cpu_features = __get_cpu_features ();
 
+  if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
+      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
+    return OPTIMIZE (avx2);
+
   if (CPU_FEATURES_ARCH_P (cpu_features, Fast_Unaligned_Load))
     return OPTIMIZE (sse2_unaligned);
 
diff --git a/sysdeps/x86_64/multiarch/wcscmp-avx2.S b/sysdeps/x86_64/multiarch/wcscmp-avx2.S
new file mode 100644
index 00000000000..e5da4da689d
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcscmp-avx2.S
@@ -0,0 +1,4 @@
+#define STRCMP __wcscmp_avx2
+#define USE_AS_WCSCMP 1
+
+#include "strcmp-avx2.S"
diff --git a/sysdeps/x86_64/multiarch/wcscmp-sse2.S b/sysdeps/x86_64/multiarch/wcscmp-sse2.S
new file mode 100644
index 00000000000..b129d1c073c
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcscmp-sse2.S
@@ -0,0 +1,23 @@
+/* wcscmp optimized with SSE2.
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#if IS_IN (libc)
+# define __wcscmp __wcscmp_sse2
+#endif
+
+#include "../wcscmp.S"
diff --git a/sysdeps/x86_64/multiarch/wcscmp.c b/sysdeps/x86_64/multiarch/wcscmp.c
new file mode 100644
index 00000000000..74d92cf0f96
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcscmp.c
@@ -0,0 +1,37 @@
+/* Multiple versions of wcscmp.
+   Copyright (C) 2017-2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* Define multiple versions only for the definition in libc.  */
+#if IS_IN (libc)
+# define wcscmp __redirect_wcscmp
+# define __wcscmp __redirect___wcscmp
+# include <wchar.h>
+# undef wcscmp
+# undef __wcscmp
+
+# define SYMBOL_NAME wcscmp
+# include "ifunc-avx2.h"
+
+libc_ifunc_redirected (__redirect_wcscmp, __wcscmp, IFUNC_SELECTOR ());
+weak_alias (__wcscmp, wcscmp)
+
+# ifdef SHARED
+__hidden_ver1 (__wcscmp, __GI___wcscmp, __redirect_wcscmp)
+  __attribute__ ((visibility ("hidden")));
+# endif
+#endif
diff --git a/sysdeps/x86_64/wcscmp.S b/sysdeps/x86_64/wcscmp.S
index 1b9f81f54ce..0d506c8b5cd 100644
--- a/sysdeps/x86_64/wcscmp.S
+++ b/sysdeps/x86_64/wcscmp.S
@@ -946,5 +946,7 @@ L(equal):
 	ret
 END (__wcscmp)
+#ifndef __wcscmp
 libc_hidden_def (__wcscmp)
 weak_alias (__wcscmp, wcscmp)
+#endif
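
For reviewers less familiar with the VPCMPEQ/VPMINU trick the patch comments describe, here is a scalar C model of one 32-byte block of the byte (strcmp) variant. It is an illustrative sketch only: the helper names `check_vec` and `cmp_vec` are invented for this example and do not appear in the patch, and the model omits the alignment, page-crossing, and wcscmp paths; `__builtin_ctz` stands in for TZCNT.

```c
/* Scalar model of the AVX2 strcmp kernel's per-vector check:
   VPCMPEQ yields 0xff per matching byte, VPMINU with the source
   data folds in NUL detection (a NUL in the first string forces
   the minimum to 0 even where the bytes matched), and a final
   compare against the zero vector plus vpmovmskb/tzcnt locates
   the first byte that differs or is NUL.  */
#include <stdint.h>

#define VEC_SIZE 32

/* Returns the mask vpmovmskb would produce after
   VPCMPEQ / VPMINU / VPCMPEQ-with-zero on one 32-byte block.  */
static uint32_t
check_vec (const unsigned char *s1, const unsigned char *s2)
{
  uint32_t mask = 0;
  for (int i = 0; i < VEC_SIZE; i++)
    {
      /* VPCMPEQ: 0xff where the bytes are equal, 0x00 otherwise.  */
      unsigned char eq = (s1[i] == s2[i]) ? 0xff : 0x00;
      /* VPMINU with the s1 data: 0 where s1 has a NUL, even if
	 the bytes matched; also 0 where they mismatched.  */
      unsigned char min = (eq < s1[i]) ? eq : s1[i];
      /* VPCMPEQ with the zero vector: set the mask bit where the
	 block has a mismatch or a NUL.  */
      if (min == 0)
	mask |= (uint32_t) 1 << i;
    }
  return mask;
}

/* Model of the kernel's tail: tzcnt the mask and subtract the
   bytes at the first interesting position.  Both buffers must be
   VEC_SIZE bytes, NUL-padded.  */
static int
cmp_vec (const char *a, const char *b)
{
  uint32_t mask = check_vec ((const unsigned char *) a,
			     (const unsigned char *) b);
  if (mask == 0)
    return 0;	/* All 32 bytes equal and no NUL: keep scanning.  */
  int idx = __builtin_ctz (mask);	/* tzcnt */
  return (unsigned char) a[idx] - (unsigned char) b[idx];
}
```

Folding the data into the comparison result via VPMINU is what lets a single mask encode "first mismatch or NUL", so one TZCNT finds the position that decides the return value.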