From patchwork Mon Sep 13 23:05:07 2021
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 44964
X-Patchwork-Delegate: carlos@redhat.com
To: libc-alpha@sourceware.org
Subject: [PATCH 4/5] x86_64: Add avx2 optimized bcmp implementation in bcmp-avx2.S
Date: Mon, 13 Sep 2021 18:05:07 -0500
Message-Id: <20210913230506.546749-4-goldstein.w.n@gmail.com>
In-Reply-To: <20210913230506.546749-1-goldstein.w.n@gmail.com>
References: <20210913230506.546749-1-goldstein.w.n@gmail.com>
From: Noah Goldstein <goldstein.w.n@gmail.com>

No bug.  This commit adds a new optimized bcmp implementation for
avx2.  The primary optimizations are 1) skipping the logic to find the
difference of the first mismatched byte and 2) not updating the
src/dst addresses, as the non-equals logic does not need to be reused
for different areas.

The entry alignment has been fixed at 64.  In throughput-sensitive
functions, which bcmp can potentially be, frontend loop performance is
important to optimize for.  That is impossible/difficult to do and
maintain with only the fixed 16-byte entry alignment.

test-memcmp, test-bcmp, and test-wmemcmp are all passing.
---
 sysdeps/x86/sysdep.h                       |   6 +-
 sysdeps/x86_64/multiarch/bcmp-avx2-rtm.S   |   4 +-
 sysdeps/x86_64/multiarch/bcmp-avx2.S       | 304 ++++++++++++++++++++-
 sysdeps/x86_64/multiarch/ifunc-bcmp.h      |   4 +-
 sysdeps/x86_64/multiarch/ifunc-impl-list.c |   2 -
 5 files changed, 308 insertions(+), 12 deletions(-)

diff --git a/sysdeps/x86/sysdep.h b/sysdeps/x86/sysdep.h
index cac1d762fb..4895179c10 100644
--- a/sysdeps/x86/sysdep.h
+++ b/sysdeps/x86/sysdep.h
@@ -78,15 +78,17 @@ enum cf_protection_level
 #define ASM_SIZE_DIRECTIVE(name) .size name,.-name;
 
 /* Define an entry point visible from C.  */
-#define ENTRY(name)                                                    \
+#define ENTRY_P2ALIGN(name, alignment)                                 \
   .globl C_SYMBOL_NAME(name);                                          \
   .type C_SYMBOL_NAME(name),@function;                                 \
-  .align ALIGNARG(4);                                                  \
+  .align ALIGNARG(alignment);                                          \
   C_LABEL(name)                                                        \
   cfi_startproc;                                                       \
   _CET_ENDBR;                                                          \
   CALL_MCOUNT
 
+#define ENTRY(name) ENTRY_P2ALIGN (name, 4)
+
 #undef END
 #define END(name)                                                      \
   cfi_endproc;                                                         \
diff --git a/sysdeps/x86_64/multiarch/bcmp-avx2-rtm.S b/sysdeps/x86_64/multiarch/bcmp-avx2-rtm.S
index d742257e4e..28976daff0 100644
--- a/sysdeps/x86_64/multiarch/bcmp-avx2-rtm.S
+++ b/sysdeps/x86_64/multiarch/bcmp-avx2-rtm.S
@@ -1,5 +1,5 @@
-#ifndef MEMCMP
-# define MEMCMP __bcmp_avx2_rtm
+#ifndef BCMP
+# define BCMP __bcmp_avx2_rtm
 #endif
 
 #define ZERO_UPPER_VEC_REGISTERS_RETURN \
diff --git a/sysdeps/x86_64/multiarch/bcmp-avx2.S b/sysdeps/x86_64/multiarch/bcmp-avx2.S
index 93a9a20b17..eb77ae5c4a 100644
--- a/sysdeps/x86_64/multiarch/bcmp-avx2.S
+++ b/sysdeps/x86_64/multiarch/bcmp-avx2.S
@@ -16,8 +16,304 @@
    License along with the GNU C Library; if not, see
    <https://www.gnu.org/licenses/>.  */
 
-#ifndef MEMCMP
-# define MEMCMP __bcmp_avx2
-#endif
+#if IS_IN (libc)
+
+/* bcmp is implemented as:
+   1. Use ymm vector compares when possible.  The only case where
+      vector compares are not possible is when size < VEC_SIZE and
+      loading from either s1 or s2 would cause a page cross.
+   2. Use xmm vector compare when size >= 8 bytes.
+   3. Optimistically compare up to the first 4 * VEC_SIZE one vector
+      at a time to check for early mismatches.  Only do this if it is
+      guaranteed the work is not wasted.
+   4. If size is 8 * VEC_SIZE or less, unroll the loop.
+   5. Compare 4 * VEC_SIZE at a time with the aligned first memory
+      area.
+   6. Use 2 vector compares when size is 2 * VEC_SIZE or less.
+   7. Use 4 vector compares when size is 4 * VEC_SIZE or less.
+   8. Use 8 vector compares when size is 8 * VEC_SIZE or less.  */
+
+# include <sysdep.h>
+
+# ifndef BCMP
+#  define BCMP __bcmp_avx2
+# endif
+
+# define VPCMPEQ vpcmpeqb
+
+# ifndef VZEROUPPER
+#  define VZEROUPPER vzeroupper
+# endif
+
+# ifndef SECTION
+#  define SECTION(p) p##.avx
+# endif
+
+# define VEC_SIZE 32
+# define PAGE_SIZE 4096
+
+	.section SECTION(.text), "ax", @progbits
+ENTRY_P2ALIGN (BCMP, 6)
+# ifdef __ILP32__
+	/* Clear the upper 32 bits.  */
+	movl	%edx, %edx
+# endif
+	cmp	$VEC_SIZE, %RDX_LP
+	jb	L(less_vec)
+
+	/* From VEC to 2 * VEC.  No branch when size == VEC_SIZE.  */
+	vmovdqu	(%rsi), %ymm1
+	VPCMPEQ	(%rdi), %ymm1, %ymm1
+	vpmovmskb %ymm1, %eax
+	incl	%eax
+	jnz	L(return_neq0)
+	cmpq	$(VEC_SIZE * 2), %rdx
+	jbe	L(last_1x_vec)
+
+	/* Check second VEC no matter what.  */
+	vmovdqu	VEC_SIZE(%rsi), %ymm2
+	VPCMPEQ	VEC_SIZE(%rdi), %ymm2, %ymm2
+	vpmovmskb %ymm2, %eax
+	/* If all 4 VEC were equal eax will be all 1s so incl will
+	   overflow and set the zero flag.  */
+	incl	%eax
+	jnz	L(return_neq0)
+
+	/* Less than 4 * VEC.  */
+	cmpq	$(VEC_SIZE * 4), %rdx
+	jbe	L(last_2x_vec)
+
+	/* Check third and fourth VEC no matter what.  */
+	vmovdqu	(VEC_SIZE * 2)(%rsi), %ymm3
+	VPCMPEQ	(VEC_SIZE * 2)(%rdi), %ymm3, %ymm3
+	vpmovmskb %ymm3, %eax
+	incl	%eax
+	jnz	L(return_neq0)
+
+	vmovdqu	(VEC_SIZE * 3)(%rsi), %ymm4
+	VPCMPEQ	(VEC_SIZE * 3)(%rdi), %ymm4, %ymm4
+	vpmovmskb %ymm4, %eax
+	incl	%eax
+	jnz	L(return_neq0)
+
+	/* Go to 4x VEC loop.  */
+	cmpq	$(VEC_SIZE * 8), %rdx
+	ja	L(more_8x_vec)
+
+	/* Handle remainder of size = 4 * VEC + 1 to 8 * VEC without
+	   any branches.  */
+
+	/* Adjust rsi and rdi to avoid indexed address modes.  This
+	   ends up saving 16 bytes of code, prevents unlamination, and
+	   avoids bottlenecks in the AGU.  */
+	addq	%rdx, %rsi
+	vmovdqu	-(VEC_SIZE * 4)(%rsi), %ymm1
+	vmovdqu	-(VEC_SIZE * 3)(%rsi), %ymm2
+	addq	%rdx, %rdi
+
+	VPCMPEQ	-(VEC_SIZE * 4)(%rdi), %ymm1, %ymm1
+	VPCMPEQ	-(VEC_SIZE * 3)(%rdi), %ymm2, %ymm2
+
+	vmovdqu	-(VEC_SIZE * 2)(%rsi), %ymm3
+	VPCMPEQ	-(VEC_SIZE * 2)(%rdi), %ymm3, %ymm3
+	vmovdqu	-VEC_SIZE(%rsi), %ymm4
+	VPCMPEQ	-VEC_SIZE(%rdi), %ymm4, %ymm4
-#include "bcmp-avx2.S"
+	/* Reduce VEC0 - VEC4.  */
+	vpand	%ymm1, %ymm2, %ymm2
+	vpand	%ymm3, %ymm4, %ymm4
+	vpand	%ymm2, %ymm4, %ymm4
+	vpmovmskb %ymm4, %eax
+	incl	%eax
+L(return_neq0):
+L(return_vzeroupper):
+	ZERO_UPPER_VEC_REGISTERS_RETURN
+
+	/* NB: p2align 5 here will ensure the L(loop_4x_vec) is also
+	   32 byte aligned.  */
+	.p2align 5
+L(less_vec):
+	/* Check if one or less char.  This is necessary for size = 0
+	   but is also faster for size = 1.  */
+	cmpl	$1, %edx
+	jbe	L(one_or_less)
+
+	/* Check if loading one VEC from either s1 or s2 could cause a
+	   page cross.  This can have false positives but is by far the
+	   fastest method.  */
+	movl	%edi, %eax
+	orl	%esi, %eax
+	andl	$(PAGE_SIZE - 1), %eax
+	cmpl	$(PAGE_SIZE - VEC_SIZE), %eax
+	jg	L(page_cross_less_vec)
+
+	/* No page cross possible.  */
+	vmovdqu	(%rsi), %ymm2
+	VPCMPEQ	(%rdi), %ymm2, %ymm2
+	vpmovmskb %ymm2, %eax
+	incl	%eax
+	/* Result will be zero if s1 and s2 match.  Otherwise the first
+	   set bit will be the first mismatch.  */
+	bzhil	%edx, %eax, %eax
+	VZEROUPPER_RETURN
+
+	/* Relatively cold but placed close to L(less_vec) for 2 byte
+	   jump encoding.  */
+	.p2align 4
+L(one_or_less):
+	jb	L(zero)
+	movzbl	(%rsi), %ecx
+	movzbl	(%rdi), %eax
+	subl	%ecx, %eax
+	/* No ymm register was touched.  */
+	ret
+	/* Within the same 16 byte block as L(one_or_less).  */
+L(zero):
+	xorl	%eax, %eax
+	ret
+
+	.p2align 4
+L(last_1x_vec):
+	vmovdqu	-(VEC_SIZE * 1)(%rsi, %rdx), %ymm1
+	VPCMPEQ	-(VEC_SIZE * 1)(%rdi, %rdx), %ymm1, %ymm1
+	vpmovmskb %ymm1, %eax
+	incl	%eax
+	VZEROUPPER_RETURN
+
+	.p2align 4
+L(last_2x_vec):
+	vmovdqu	-(VEC_SIZE * 2)(%rsi, %rdx), %ymm1
+	VPCMPEQ	-(VEC_SIZE * 2)(%rdi, %rdx), %ymm1, %ymm1
+	vmovdqu	-(VEC_SIZE * 1)(%rsi, %rdx), %ymm2
+	VPCMPEQ	-(VEC_SIZE * 1)(%rdi, %rdx), %ymm2, %ymm2
+	vpand	%ymm1, %ymm2, %ymm2
+	vpmovmskb %ymm2, %eax
+	incl	%eax
+	VZEROUPPER_RETURN
+
+	.p2align 4
+L(more_8x_vec):
+	/* Set end of s1 in rdx.  */
+	leaq	-(VEC_SIZE * 4)(%rdi, %rdx), %rdx
+	/* rsi stores s2 - s1.  This allows the loop to only update one
+	   pointer.  */
+	subq	%rdi, %rsi
+	/* Align s1 pointer.  */
+	andq	$-VEC_SIZE, %rdi
+	/* Adjust because the first 4x vec was checked already.  */
+	subq	$-(VEC_SIZE * 4), %rdi
+	.p2align 4
+L(loop_4x_vec):
+	/* rsi has s2 - s1 so get the correct address by adding s1 (in
+	   rdi).  */
+	vmovdqu	(%rsi, %rdi), %ymm1
+	VPCMPEQ	(%rdi), %ymm1, %ymm1
+
+	vmovdqu	VEC_SIZE(%rsi, %rdi), %ymm2
+	VPCMPEQ	VEC_SIZE(%rdi), %ymm2, %ymm2
+
+	vmovdqu	(VEC_SIZE * 2)(%rsi, %rdi), %ymm3
+	VPCMPEQ	(VEC_SIZE * 2)(%rdi), %ymm3, %ymm3
+
+	vmovdqu	(VEC_SIZE * 3)(%rsi, %rdi), %ymm4
+	VPCMPEQ	(VEC_SIZE * 3)(%rdi), %ymm4, %ymm4
+
+	vpand	%ymm1, %ymm2, %ymm2
+	vpand	%ymm3, %ymm4, %ymm4
+	vpand	%ymm2, %ymm4, %ymm4
+	vpmovmskb %ymm4, %eax
+	incl	%eax
+	jnz	L(return_neq1)
+	subq	$-(VEC_SIZE * 4), %rdi
+	/* Check if s1 pointer is at end.  */
+	cmpq	%rdx, %rdi
+	jb	L(loop_4x_vec)
+
+	vmovdqu	(VEC_SIZE * 3)(%rsi, %rdx), %ymm4
+	VPCMPEQ	(VEC_SIZE * 3)(%rdx), %ymm4, %ymm4
+	subq	%rdx, %rdi
+	/* rdi has 4 * VEC_SIZE - remaining length.  */
+	cmpl	$(VEC_SIZE * 3), %edi
+	jae	L(8x_last_1x_vec)
+	/* Load regardless of branch.  */
+	vmovdqu	(VEC_SIZE * 2)(%rsi, %rdx), %ymm3
+	VPCMPEQ	(VEC_SIZE * 2)(%rdx), %ymm3, %ymm3
+	cmpl	$(VEC_SIZE * 2), %edi
+	jae	L(8x_last_2x_vec)
+	/* Check last 4 VEC.  */
+	vmovdqu	VEC_SIZE(%rsi, %rdx), %ymm1
+	VPCMPEQ	VEC_SIZE(%rdx), %ymm1, %ymm1
+
+	vmovdqu	(%rsi, %rdx), %ymm2
+	VPCMPEQ	(%rdx), %ymm2, %ymm2
+
+	vpand	%ymm3, %ymm4, %ymm4
+	vpand	%ymm1, %ymm2, %ymm3
+L(8x_last_2x_vec):
+	vpand	%ymm3, %ymm4, %ymm4
+L(8x_last_1x_vec):
+	vpmovmskb %ymm4, %eax
+	/* Restore s1 pointer to rdi.  */
+	incl	%eax
+L(return_neq1):
+	VZEROUPPER_RETURN
+
+	/* Relatively cold case as page crosses are unexpected.  */
+	.p2align 4
+L(page_cross_less_vec):
+	cmpl	$16, %edx
+	jae	L(between_16_31)
+	cmpl	$8, %edx
+	ja	L(between_9_15)
+	cmpl	$4, %edx
+	jb	L(between_2_3)
+	/* From 4 to 8 bytes.  No branch when size == 4.  */
+	movl	(%rdi), %eax
+	movl	(%rsi), %ecx
+	subl	%ecx, %eax
+	movl	-4(%rdi, %rdx), %ecx
+	movl	-4(%rsi, %rdx), %esi
+	subl	%esi, %ecx
+	orl	%ecx, %eax
+	ret
+
+	.p2align 4,, 8
+L(between_9_15):
+	vmovq	(%rdi), %xmm1
+	vmovq	(%rsi), %xmm2
+	VPCMPEQ	%xmm1, %xmm2, %xmm3
+	vmovq	-8(%rdi, %rdx), %xmm1
+	vmovq	-8(%rsi, %rdx), %xmm2
+	VPCMPEQ	%xmm1, %xmm2, %xmm2
+	vpand	%xmm2, %xmm3, %xmm3
+	vpmovmskb %xmm3, %eax
+	subl	$0xffff, %eax
+	/* No ymm register was touched.  */
+	ret
+
+	.p2align 4,, 8
+L(between_16_31):
+	/* From 16 to 31 bytes.  No branch when size == 16.  */
+	vmovdqu	(%rsi), %xmm1
+	VPCMPEQ	(%rdi), %xmm1, %xmm1
+	vmovdqu	-16(%rsi, %rdx), %xmm2
+	VPCMPEQ	-16(%rdi, %rdx), %xmm2, %xmm2
+	vpand	%xmm1, %xmm2, %xmm2
+	vpmovmskb %xmm2, %eax
+	subl	$0xffff, %eax
+	/* No ymm register was touched.  */
+	ret
+
+	.p2align 4,, 8
+L(between_2_3):
+	/* From 2 to 3 bytes.  No branch when size == 2.  */
+	movzwl	(%rdi), %eax
+	movzwl	(%rsi), %ecx
+	subl	%ecx, %eax
+	movzbl	-1(%rdi, %rdx), %edi
+	movzbl	-1(%rsi, %rdx), %esi
+	subl	%edi, %esi
+	orl	%esi, %eax
+	/* No ymm register was touched.  */
+	ret
+END (BCMP)
+#endif
diff --git a/sysdeps/x86_64/multiarch/ifunc-bcmp.h b/sysdeps/x86_64/multiarch/ifunc-bcmp.h
index b0dacd8526..f94516e5ee 100644
--- a/sysdeps/x86_64/multiarch/ifunc-bcmp.h
+++ b/sysdeps/x86_64/multiarch/ifunc-bcmp.h
@@ -32,11 +32,11 @@ IFUNC_SELECTOR (void)
 
   if (CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURE_USABLE_P (cpu_features, BMI2)
-      && CPU_FEATURE_USABLE_P (cpu_features, MOVBE)
      && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     {
      if (CPU_FEATURE_USABLE_P (cpu_features, AVX512VL)
-	  && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW))
+	  && CPU_FEATURE_USABLE_P (cpu_features, AVX512BW)
+	  && CPU_FEATURE_USABLE_P (cpu_features, MOVBE))
	return OPTIMIZE (evex);
 
      if (CPU_FEATURE_USABLE_P (cpu_features, RTM))
diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index dd0c393c7d..cda0316928 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -42,13 +42,11 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, bcmp,
	      IFUNC_IMPL_ADD (array, i, bcmp,
			      (CPU_FEATURE_USABLE (AVX2)
-			       && CPU_FEATURE_USABLE (MOVBE)
			       && CPU_FEATURE_USABLE (BMI2)),
			      __bcmp_avx2)
	      IFUNC_IMPL_ADD (array, i, bcmp,
			      (CPU_FEATURE_USABLE (AVX2)
			       && CPU_FEATURE_USABLE (BMI2)
-			       && CPU_FEATURE_USABLE (MOVBE)
			       && CPU_FEATURE_USABLE (RTM)),
			      __bcmp_avx2_rtm)
	      IFUNC_IMPL_ADD (array, i, bcmp,
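
For readers skimming the patch, the core of the non-equals check is easier
to see in C.  The sketch below is a hypothetical illustration using AVX2
intrinsics, not code from this patch or from glibc (the function name
block_neq_128 is made up): it compares one 4 * VEC_SIZE block the way the
L(loop_4x_vec) body does, reducing four vpcmpeqb results with vpand into a
single vpmovmskb test, so bcmp never has to locate the first mismatched
byte the way memcmp must.  Compile with -mavx2.

#include <immintrin.h>

/* Hypothetical sketch only: return nonzero if any of the 128 bytes at
   s1 and s2 differ, using the same reduce-then-single-branch pattern as
   the assembly above.  */
static int
block_neq_128 (const unsigned char *s1, const unsigned char *s2)
{
  /* Four unaligned 32-byte loads from each buffer.  */
  __m256i a0 = _mm256_loadu_si256 ((const __m256i *) (s1 + 0));
  __m256i a1 = _mm256_loadu_si256 ((const __m256i *) (s1 + 32));
  __m256i a2 = _mm256_loadu_si256 ((const __m256i *) (s1 + 64));
  __m256i a3 = _mm256_loadu_si256 ((const __m256i *) (s1 + 96));
  __m256i b0 = _mm256_loadu_si256 ((const __m256i *) (s2 + 0));
  __m256i b1 = _mm256_loadu_si256 ((const __m256i *) (s2 + 32));
  __m256i b2 = _mm256_loadu_si256 ((const __m256i *) (s2 + 64));
  __m256i b3 = _mm256_loadu_si256 ((const __m256i *) (s2 + 96));

  /* vpcmpeqb: each byte becomes 0xff where equal, 0x00 where not.  */
  __m256i e0 = _mm256_cmpeq_epi8 (a0, b0);
  __m256i e1 = _mm256_cmpeq_epi8 (a1, b1);
  __m256i e2 = _mm256_cmpeq_epi8 (a2, b2);
  __m256i e3 = _mm256_cmpeq_epi8 (a3, b3);

  /* Reduce the four results with vpand so one vpmovmskb and one branch
     cover the whole block (the "Reduce VEC0 - VEC4" step).  */
  __m256i all = _mm256_and_si256 (_mm256_and_si256 (e0, e1),
                                  _mm256_and_si256 (e2, e3));

  /* All bytes equal <=> the 32-bit mask is all ones (-1).  The assembly
     gets the same test from `incl %eax` wrapping the all-ones mask to
     zero, so `jnz` only fires on a mismatch.  */
  return _mm256_movemask_epi8 (all) != -1;
}

Because bcmp only reports equal/not-equal, the mask is never scanned for
the first set bit, which is the byte-difference logic the commit message
says this implementation skips relative to memcmp.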