From patchwork Wed Oct 27 02:43:21 2021
X-Patchwork-Submitter: Noah Goldstein
X-Patchwork-Id: 46682
To: libc-alpha@sourceware.org
Subject: [PATCH v1 4/6] x86_64: Add sse2 optimized __memcmpeq in memcmp-sse2.S
Date: Tue, 26 Oct 2021 21:43:21 -0500
Message-Id: <20211027024323.1199441-4-goldstein.w.n@gmail.com>
In-Reply-To: <20211027024323.1199441-1-goldstein.w.n@gmail.com>
References: <20211027024323.1199441-1-goldstein.w.n@gmail.com>
List-Id: Libc-alpha mailing list
From: Noah Goldstein

No bug.  This commit does not modify any of the memcmp implementation.
It just adds __memcmpeq ifdefs to skip obvious cases where computing
the proper 1/-1 required by memcmp is not needed.

Reviewed-by: H.J. Lu
---
 sysdeps/x86_64/memcmp.S | 55 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/sysdeps/x86_64/memcmp.S b/sysdeps/x86_64/memcmp.S
index b53f2c0866..c245383963 100644
--- a/sysdeps/x86_64/memcmp.S
+++ b/sysdeps/x86_64/memcmp.S
@@ -49,34 +49,63 @@ L(s2b):
 	movzwl	(%rdi), %eax
 	movzwl	(%rdi, %rsi), %edx
 	subq	$2, %r10
+#ifdef USE_AS_MEMCMPEQ
+	je	L(finz1)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$2, %rdi
 	cmpl	%edx, %eax
+#ifdef USE_AS_MEMCMPEQ
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s4b):
 	testq	$4, %r10
 	jz	L(s8b)
 	movl	(%rdi), %eax
 	movl	(%rdi, %rsi), %edx
 	subq	$4, %r10
+#ifdef USE_AS_MEMCMPEQ
+	je	L(finz1)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$4, %rdi
 	cmpl	%edx, %eax
+#ifdef USE_AS_MEMCMPEQ
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s8b):
 	testq	$8, %r10
 	jz	L(s16b)
 	movq	(%rdi), %rax
 	movq	(%rdi, %rsi), %rdx
 	subq	$8, %r10
+#ifdef USE_AS_MEMCMPEQ
+	je	L(sub_return8)
+#else
 	je	L(fin2_7)
+#endif
 	addq	$8, %rdi
 	cmpq	%rdx, %rax
+#ifdef USE_AS_MEMCMPEQ
+	jnz	L(neq_early)
+#else
 	jnz	L(fin2_7)
+#endif
 L(s16b):
 	movdqu	(%rdi), %xmm1
 	movdqu	(%rdi, %rsi), %xmm0
 	pcmpeqb	%xmm0, %xmm1
+#ifdef USE_AS_MEMCMPEQ
+	pmovmskb %xmm1, %eax
+	subl	$0xffff, %eax
+	ret
+#else
 	pmovmskb %xmm1, %edx
 	xorl	%eax, %eax
 	subl	$0xffff, %edx
@@ -86,7 +115,7 @@ L(s16b):
 	movzbl	(%rcx), %eax
 	movzbl	(%rsi, %rcx), %edx
 	jmp	L(finz1)
-
+#endif
 	.p2align 4,, 4
 L(finr1b):
 	movzbl	(%rdi), %eax
@@ -95,7 +124,15 @@ L(finz1):
 	subl	%edx, %eax
 L(exit):
 	ret
-
+#ifdef USE_AS_MEMCMPEQ
+	.p2align 4,, 4
+L(sub_return8):
+	subq	%rdx, %rax
+	movl	%eax, %edx
+	shrq	$32, %rax
+	orl	%edx, %eax
+	ret
+#else
 	.p2align 4,, 4
 L(fin2_7):
 	cmpq	%rdx, %rax
@@ -111,12 +148,17 @@ L(fin2_7):
 	movzbl	%dl, %edx
 	subl	%edx, %eax
 	ret
-
+#endif
 	.p2align 4,, 4
 L(finz):
 	xorl	%eax, %eax
 	ret
-
+#ifdef USE_AS_MEMCMPEQ
+	.p2align 4,, 4
+L(neq_early):
+	movl	$1, %eax
+	ret
+#endif
 /* For blocks bigger than 32 bytes
    1. Advance one of the addr pointer to be 16B aligned.
    2. Treat the case of both addr pointers aligned to 16B
@@ -246,11 +288,16 @@ L(mt16):

 	.p2align 4,, 4
 L(neq):
+#ifdef USE_AS_MEMCMPEQ
+	movl	$1, %eax
+	ret
+#else
 	bsfl	%edx, %ecx
 	movzbl	(%rdi, %rcx), %eax
 	addq	%rdi, %rsi
 	movzbl	(%rsi,%rcx), %edx
 	jmp	L(finz1)
+#endif
 	.p2align 4,, 4
 L(ATR):
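For readers unfamiliar with the contract the ifdefs above rely on: __memcmpeq only has to return 0 when the buffers are equal and any nonzero value otherwise, with no 1/-1 ordering promise. A hypothetical C model of that relaxed contract (`memcmpeq_model` is an illustrative name, not a glibc symbol):

```c
#include <string.h>

/* Model of the __memcmpeq contract: 0 iff the two buffers are equal,
   any nonzero value otherwise.  Unlike memcmp, no sign/ordering of the
   result is promised, which is what lets USE_AS_MEMCMPEQ skip the
   byte-level fixup paths that compute a proper 1/-1. */
static int memcmpeq_model(const void *a, const void *b, size_t n)
{
	return memcmp(a, b, n) != 0;
}
```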
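The new L(s16b) path can be sketched with SSE2 intrinsics (`cmp16_eq` is my name for a model of the technique, not the glibc code): pcmpeqb builds a per-byte equality mask, pmovmskb collapses it into 16 bits, and subtracting 0xffff yields 0 iff every byte matched, so the bsf/byte-reload tail of the memcmp version is unnecessary.

```c
#include <emmintrin.h>	/* SSE2 intrinsics; x86 only */

/* Sketch of the USE_AS_MEMCMPEQ L(s16b) path: compare 16 bytes at once
   and return 0 iff all of them are equal.  The value on mismatch is
   some nonzero number, exactly what __memcmpeq permits. */
static int cmp16_eq(const unsigned char *a, const unsigned char *b)
{
	__m128i va = _mm_loadu_si128((const __m128i *)a);
	__m128i vb = _mm_loadu_si128((const __m128i *)b);
	/* 0xff per equal byte pair -> one bit per byte in mask.  */
	int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(va, vb));
	return mask - 0xffff;	/* 0 iff all 16 bits were set */
}
```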
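Similarly, the new L(sub_return8) label avoids a branch on the 8-byte compare: after `subq %rdx, %rax` the 64-bit difference is 0 iff the two words were equal, and OR-ing its low and high 32-bit halves produces a valid 32-bit zero/nonzero result. A minimal C sketch of the same trick (`fold64` is a hypothetical name):

```c
/* Fold a 64-bit difference into a 32-bit zero/nonzero value, as the
   subq/movl/shrq/orl sequence at L(sub_return8) does: the result is 0
   iff the full 64-bit difference was 0. */
static unsigned int fold64(unsigned long long diff)
{
	return (unsigned int)diff | (unsigned int)(diff >> 32);
}
```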